Lead contamination of drinking water is difficult and expensive to control. It seldom occurs naturally in source water supplies like rivers and lakes; therefore it cannot be treated at a centralized treatment facility. Rather, lead enters drinking water primarily from the corrosion of materials containing lead in the water distribution system and in household plumbing. These materials include lead service lines that connect a house to the water main, lead-based solder used in a house to join copper pipe, and brass faucets and other plumbing fixtures. The Safe Drinking Water Act is the key federal law protecting public water supplies from harmful contaminants. EPA’s 1991 Lead and Copper Rule, promulgated pursuant to the Act, requires water systems to protect consumers against exposure to elevated levels of lead in drinking water by chemically treating water to reduce its corrosiveness and by collecting water samples from consumer taps and testing them for evidence of lead corrosion. EPA considers lead to be elevated (known as the “action level”) when lead levels are higher than 15 parts per billion in over 10 percent of tap water samples taken. Because lead contamination generally occurs after water leaves the treatment plant, the Lead and Copper Rule requires testing for lead at consumers’ taps. If elevated lead levels are found and persist after treatment to minimize the water’s corrosiveness, the water system must annually replace 7 percent of the lead service lines that it owns. Implementation and enforcement of the Lead and Copper Rule in the District of Columbia is complicated because of the number and nature of the entities involved. The Washington Aqueduct, owned and operated by the U.S. Army Corps of Engineers, is responsible for water treatment (including corrosion control). WASA purchases water from the Washington Aqueduct and delivers it to District residents, and is responsible for monitoring tap water samples for lead. 
EPA Region III in Philadelphia has oversight and enforcement authority for the District’s public water systems. Similar to many of the other approximately 400 largest drinking water systems in the United States (i.e., serving populations greater than 100,000), WASA is also responsible for wastewater collection and transmission, including operation and maintenance of its wastewater treatment facility and sanitary sewer system. This water infrastructure in the District of Columbia, like in many older cities, is aging and will require substantial funding over the next several years for replacement or rehabilitation. A June 17, 2004, administrative order for compliance on consent between EPA and WASA required WASA to take a number of corrective actions that, by necessity, enhanced its coordination with EPA and the D.C. Department of Health. Among these actions were developing a plan to identify additional lead service lines, improving the selection of sampling locations and reporting of water testing results to EPA, developing a strategy to improve WASA’s public education efforts, and collaborating with the D.C. Department of Health to set priorities for replacing lead service lines. Most importantly, with the introduction of orthophosphate to the drinking water supply, WASA met, and has continued to meet, federal standards for lead in drinking water. WASA’s most recent report on lead levels in D.C. drinking water was delivered to EPA in January 2008. WASA reported that 90 percent of the samples had lead levels of 11 parts per billion (ppb) or less, which is below EPA’s lead action level of 15 ppb. This is the sixth monitoring period in a row that WASA has met the lead action level. To resolve its lead problem in the long-term, however, WASA decided that it needed to undertake a program to replace the public portions of all its customers’ lead service lines (roughly 35,000 lines) by 2016. 
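The Lead and Copper Rule test described above reduces to a simple check: the action level is exceeded when more than 10 percent of tap water samples are above 15 ppb. A minimal sketch of that check, using hypothetical sample data (this is an illustration, not EPA's official compliance procedure):

```python
# Illustrative sketch of the Lead and Copper Rule action-level check:
# the action level is exceeded when more than 10 percent of tap water
# samples are above 15 parts per billion (ppb).

ACTION_LEVEL_PPB = 15.0

def exceeds_action_level(samples_ppb):
    """True if more than 10 percent of samples exceed the action level."""
    above = sum(1 for s in samples_ppb if s > ACTION_LEVEL_PPB)
    return above / len(samples_ppb) > 0.10

# Hypothetical monitoring period: 50 tap samples, 2 of them above 15 ppb.
samples = [3, 5, 8, 11] * 12 + [20, 22]
print(exceeds_action_level(samples))  # 2/50 = 4 percent -> False
```

By this measure, WASA's January 2008 result (90 percent of samples at 11 ppb or less) sits comfortably below the action level.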
WASA estimates that its program to replace all the District’s lead service lines will cost at least $400 million. Importantly, this figure reflects only the cost to replace the public portion of customers’ lead lines. Customers would need to finance replacement of their private portion of the lead lines (at a cost that could reach $2,500) on their own. Through the first quarter of fiscal year 2008, WASA has spent $105 million on the program, and expects to spend roughly another $300 million by 2016. Perhaps the most important complication facing WASA’s lead service line replacement program is that ownership of lead service lines in the District of Columbia is shared—WASA owns the portion from the water main to the property line, and homeowners own the portion from the property line to the home. Homeowners may pay to replace their portion of the lead service line at the same time as WASA replaces its portion, but are not required to do so. Figure 1 shows the configuration of a service line from the water main to a customer’s home. WASA established a program to encourage homeowners to replace their portion of lead service lines. This program included a low-interest loan program for low-income residents; grants of up to $5,000 for low-income residents, offered by the District of Columbia Department of Housing and Community Development; and a fixed-fee structure for line replacement of $100 per linear foot plus $500 to connect through the wall of the home, to make pricing easier for homeowners to understand. Despite these incentives, D.C. homeowners have been reluctant to replace the private side of the lead service line. Over the length of WASA’s lead service line replacement program, from fiscal year 2003 through the first quarter of fiscal year 2008, only 2,128 homeowners replaced the private portion of their lead service line out of the 14,260 lead service lines replaced in public space.
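The fixed-fee structure above implies a simple cost formula for homeowners. A minimal sketch, using the $100-per-foot and $500 connection figures reported above (the 20-foot example length is hypothetical):

```python
# Sketch of WASA's fixed-fee pricing for private-side lead line
# replacement: $100 per linear foot plus a flat $500 to connect
# through the wall of the home.

def replacement_cost(linear_feet, rate_per_foot=100, connection_fee=500):
    """Estimated homeowner cost for private-side lead line replacement."""
    return linear_feet * rate_per_foot + connection_fee

print(replacement_cost(20))  # a 20-foot private line -> 2500 dollars
```

At this rate, the roughly $2,500 maximum cost cited above corresponds to about 20 feet of private-side line.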
These totals are particularly troublesome given the lack of information about the benefits of partial lead service line replacement. Indeed, experts disagree about the effectiveness of removing only part of a lead service line. Studies that EPA cited in the Lead and Copper Rule suggest that long-term exposure to lead from drinking water decreases when a service line is only partially replaced. However, after partial replacement of a lead service line, exposure to lead in drinking water is likely to increase in the short term because cutting or moving the pipe can dislodge lead particles and disturb any protective coating on the inside of the pipe. Some experts believe that lead exposure can increase after partial service line replacement because of galvanic corrosion where the dissimilar metals of the old and new pipes meet. A study presented at the 2006 American Water Works Association Annual Conference, summarizing the Greater Cincinnati Water Works’ experience with partial lead service line replacement, found that partial replacements of lead lines resulted in much higher lead levels in the water for up to 1 month after replacement, even though the system was optimized for corrosion control. Even after this initial period, the sites with partial replacements had water lead concentrations similar to those of the sites in which the entire lead line was left in place—indicating there would be little, if any, benefit from partial lead line replacement. In the study, only completely replacing the lead service line resulted in both short- and long-term water quality improvements in all of the sites tested. The authors also noted that the use of a Teflon sleeve, or some other method of treating the portion of the line remaining in service, may help to protect water quality, but that more work is needed in this area.
Recognizing the need for more research, EPA has partnered with the American Water Works Association Research Foundation on a study of the relative contributions of service lines and plumbing fixtures to lead levels at the tap. The projected completion of the study is November 2008. In light of these problems, WASA is now considering whether its current lead line replacement program should be restructured, particularly given its high cost and the competing demands on its budget. For a water utility serving a large metropolitan area, the lead problem is only one of several major infrastructure challenges facing the utility and its customers. For example, approximately one-third of the District (by acreage) is served by combined sewers, which carry both sanitary waste from homes and businesses and storm water drainage. During storms this untreated sewage is discharged directly into the Anacostia and Potomac Rivers, adversely affecting the quality of these waters. To meet federal water quality standards, WASA will need to spend considerable sums of money to deal with the problem. Specifically, a March 2005 consent decree between WASA and EPA requires WASA, by 2025, to implement its long-term control plan, which includes construction of large underground tunnels to temporarily store excess flows until they can be treated at the Blue Plains Wastewater Treatment Plant, among other measures, in order to significantly reduce combined sewer overflows into the Anacostia River and other area waterways. WASA has estimated the cost of this effort to reach $2.2 billion. WASA’s challenges are mirrored across the country, where projected needs for investment in drinking water and wastewater infrastructure range from $485 billion to nearly $1.2 trillion over 20 years. The variation in these estimates reflects alternative assumptions about the nature of existing capital stock, replacement rates, and financing costs.
EPA reported in its most recent Drinking Water Infrastructure Needs Survey (issued in June 2005) that drinking water utilities alone will need an estimated $276.8 billion for the 20-year period ending in December 2022. EPA’s new estimate exceeds those from prior surveys by more than 60 percent, largely as a result of an increased emphasis on capturing previously underreported needs for infrastructure rehabilitation and replacement. According to EPA’s report, current needs increased by about 50 percent, but future needs rose by over 100 percent. EPA attributes the difference to a more complete assessment of the longer-term needs for addressing “aging infrastructure that is currently adequate, but will require replacement or significant rehabilitation over the next 20 years.” Pipeline rehabilitation and replacement represents a significant portion of the projected infrastructure needs for water utilities. EPA estimates that underground pipelines account for about 75 percent of the nation’s existing capital investment in drinking water and wastewater infrastructure. According to the American Society of Civil Engineers, U.S. drinking water and wastewater utilities are responsible for an estimated 800,000 miles of water delivery pipelines and between 600,000 and 800,000 miles of sewer pipelines, respectively. However, several recent studies have raised concerns about the condition of the existing pipeline network. For example, in August 2002, based on a nationwide survey of large drinking water and wastewater utilities, we reported that more than one-third of the utilities had 20 percent or more of their pipelines nearing the end of their useful life. In the case of one in 10 utilities, 50 percent or more of the utility’s pipelines were nearing the end of their useful life.
Citing a “huge wave of aging pipe infrastructure,” the American Water Works Association in 2001 predicted a significant increase in pipe breaks and repair costs over the next 30 years—even if utilities increase their investment in pipe infrastructure several fold. Other studies make similar predictions for the pipelines owned by wastewater utilities. Despite the looming problems facing utility pipelines, our nationwide survey found that pipeline rehabilitation and replacement was not occurring as desired, with over two-thirds of the utilities reporting that they have fallen short of their desired pace of rehabilitation and replacement. Specifically, we found that roughly half of the utilities actually rehabilitated or replaced one percent or less of their pipelines annually, even though an estimated 89 percent of drinking water utilities and 76 percent of wastewater utilities believed that a higher level of rehabilitation and replacement should be occurring. More generally, we found that many utilities had deferred maintenance, minor capital improvements, and/or major capital improvements due to insufficient funding. About one-third of the utilities deferred maintenance expenditures, and similar percentages of utilities deferred expenditures in the other categories. According to EPA’s June 2005 Drinking Water Infrastructure Needs Survey, the largest category of need is the installation and maintenance of transmission and distribution systems—accounting for $183.6 billion, or about 66 percent of the needs projected through 2022. For wastewater systems, EPA’s 2004 Clean Watersheds Needs Survey projected infrastructure-related needs for publicly-owned wastewater systems of $202.5 billion through 2024. Several factors have contributed to the nation’s deteriorating water infrastructure over the years. The adequacy of the available funding, in particular, has been a key determinant of how well utility infrastructure has been maintained. 
However, according to our nationwide survey, a significant percentage of the utilities serving populations of 10,000 or more—29 percent of the drinking water utilities and 41 percent of the wastewater utilities—were not generating enough revenue from user charges and other local sources to cover their full cost of service. In addition, when asked about the frequency of rate increases during the period from 1992 to 2001, more than half the utilities reported raising their rates infrequently: once, twice, or not at all over the 10-year period. Our survey also raised questions about whether utility managers have enough information about their capital assets to effectively plan their future investment needs. We found that many utilities either did not have plans for managing their assets, or had plans that may not be adequate in scope or content. Specifically, more than one-fourth of the utilities did not have plans for managing their existing capital assets. Moreover, for the utilities that did have such plans, the plans in many instances did not cover all assets or did not contain one or more key elements, such as an inventory of assets, assessment criteria, information on the assets’ condition, and the planned and actual expenditures to maintain the assets. Citing funding difficulties, many communities have looked to the federal government for financial assistance. However, if budgetary trends over the past few years serve as any indication, federal funding will not close the gap. The key federal programs supporting water infrastructure financing include the Clean Water State Revolving Fund (CWSRF) for wastewater facilities, and the Drinking Water State Revolving Fund (DWSRF) for drinking water facilities. Under each of these programs, the federal government provides seed money to states, which the states in turn use to support revolving funds that loan money to qualifying localities within their jurisdictions for new construction and upgrades.
However, the trends and overall funding levels associated with these programs suggest that they will have only a marginal impact in closing the long-term water infrastructure funding gap. Federal appropriations for the CWSRF in particular have decreased by nearly 50 percent during the past five years—from $1.34 billion enacted for fiscal year 2004 to $689 million enacted for fiscal year 2008. Funding for the DWSRF has remained virtually flat during the same period. Growing infrastructure needs, combined with local pressure to keep user rates low, make it imperative that utilities manage their resources as cost-effectively as possible. While hardly a “silver bullet” for the water industry’s massive shortfall in infrastructure funding, comprehensive asset management is one approach that has shown promise in helping utilities better identify their needs, set priorities, and plan future investments. Basic elements of comprehensive asset management include: collecting and organizing detailed information on assets; analyzing data to set priorities and make better decisions about assets; integrating data and decision making across the organization; and linking the strategy for addressing infrastructure needs to service goals, operating budgets, and capital improvement plans. At its most basic level, asset management gives utility managers the information they need to make sound decisions about maintaining, rehabilitating, and replacing capital assets—and to make a sound case for rate increases and proposed projects to their customers and governing bodies. Our 2004 report identified a number of asset management practices that could help water utilities better manage their infrastructure and target their investments to achieve the maximum benefit.
Among other things, we found that collecting, analyzing, and sharing data across the organization helped utilities make informed decisions about which assets to purchase, optimize their maintenance practices, and determine how long to repair an asset before replacement becomes more cost-effective. Some utility managers, for example, have used risk assessments to determine how critical certain assets (such as pipelines) are to their operations, considering both the likelihood and consequences of their failure. This systematic evaluation has helped them to target their resources accordingly, with the most critical assets receiving preventive maintenance while other, less critical assets received attention on an as needed basis. Having better information on utility assets has not only allowed managers to identify and prioritize investment needs, but has also helped them justify periodic rate increases to their customers and governing boards to pay for needed improvements. In one case, for example, utility managers modeled information on pipe performance history and replacement costs and predicted the approximate number of pipe breaks at various levels of funding. By understanding the trade-offs between lower rates and higher numbers of pipe breaks, the governing board was able to make an informed decision about the level of service that was appropriate for its community. Whether the problem is replacing lead service lines, as is the case for WASA, meeting new regulatory requirements, or paying the price for years of deferred maintenance, many utilities are facing huge investments to add new capital assets and replace others that are reaching the end of their useful life. Comprehensive asset management is one approach that shows real promise as a tool to help drinking water and wastewater utilities effectively target limited resources and, ultimately, ensure a sustainable water infrastructure for the future. 
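The risk-based targeting described above, which weighs both the likelihood and the consequence of an asset's failure, can be sketched as a simple scoring exercise. The asset names and 1-to-5 ratings below are hypothetical:

```python
# Minimal sketch of risk-based asset prioritization: score each asset by
# likelihood of failure times consequence of failure, then direct
# preventive maintenance to the highest-risk assets first.

assets = [
    # (name, likelihood of failure 1-5, consequence of failure 1-5)
    ("transmission main A", 4, 5),
    ("neighborhood lateral B", 3, 2),
    ("pump station feeder C", 2, 4),
]

ranked = sorted(assets, key=lambda a: a[1] * a[2], reverse=True)
for name, likelihood, consequence in ranked:
    print(f"{name}: risk score {likelihood * consequence}")
```

In this toy ranking, the transmission main scores highest and would receive preventive maintenance, while the lower-scoring assets would be handled on an as-needed basis, as the text describes.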
Accordingly, our report recommended that the Environmental Protection Agency take steps to strengthen the agency’s existing initiatives on asset management and ensure that relevant information is accessible to those that need it. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of this Subcommittee may have at this time. For further information, please contact John B. Stephenson at (202) 512-3841. Individuals making key contributions to this testimony included Elizabeth Beardsley, Ellen Crocker, Steve Elstein, Tim Minelli, Nathan Morris, Alison O’Neill, and Lisa Turner. This is a work of the U.S. government and is not subject to copyright protection in the United States. This published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The discovery in 2004 of lead contamination in the District of Columbia's drinking water resulted in an administrative order between the Environmental Protection Agency (EPA) and the District's Water and Sewer Authority (WASA), requiring WASA to take a number of corrective actions. WASA also took additional, longer-term measures, most notably a roughly $400 million program to replace what may be 35,000 lead service lines in public space within its service area. As in WASA's case, water utilities nationwide are under increasing pressure to make significant investments to upgrade aging and deteriorating infrastructures, improve security, serve a growing population, and meet new regulatory requirements.
In this context, GAO's testimony presents observations on (1) WASA's efforts to address lead contamination in light of its other pressing water infrastructure needs, and (2) the extent to which WASA's challenges are indicative of those facing water utilities nationwide. To address these issues, GAO relied primarily on its 2005 and 2006 reports on lead contamination in drinking water, as well as other recent GAO reports examining the nation's water infrastructure needs and strategies to address these needs. With the introduction of orthophosphate to its drinking water WASA has consistently tested below the federal action level for lead. However, WASA is reevaluating its roughly $400 million, longer-term solution for replacement of what may be 35,000 lead service lines within its jurisdiction. In addition to the program's high cost, a key problem WASA faces is that, by law, it may only replace the portion of the service line that it owns; replacing the portion on private property is at the homeowner's discretion. Accordingly, WASA has been encouraging homeowners to participate in the program by replacing their own portion of the lead lines. Despite these efforts, however, homeowner replacement of lead service lines remains limited. Of the 14,260 lead service lines WASA replaced through the first quarter of fiscal year 2008, there were only 2,128 instances in which the homeowner participated in private side replacement. Many questions remain about the benefits of partial lead service line replacement. In fact, some research to date suggests that partial service line replacement results in (1) short-term spikes in lead levels immediately after partial replacement and (2) little long-term reduction in lead levels. WASA's dilemma over this program is taking place within the context of its other staggering infrastructure needs. 
Most notably, WASA is undertaking a $2.2 billion effort to meet the terms of a consent decree with EPA requiring the utility to control its sewer overflow problems. WASA's challenges in addressing its lead contamination problems and other infrastructure demands are mirrored across the country, where infrastructure needs are estimated to range from $485 billion to nearly $1.2 trillion nationwide over the next 20 years. In particular, many utilities have had difficulty in raising funds to repair, replace, or upgrade aging capital assets; comply with regulatory requirements; and expand capacity to meet increased demand. For example, based on a nationwide survey of several thousand drinking water and wastewater utilities, GAO reported in 2002 that 29 percent of the drinking water utilities and 41 percent of the wastewater utilities were not generating enough revenue from user rates and other local sources to cover their full cost of service. GAO also found that about one-third of the utilities (1) deferred maintenance because of insufficient funding, (2) had 20 percent or more of their pipelines nearing the end of their useful life, and (3) lacked basic plans for managing their capital assets. Other GAO work suggests that the nation's water utilities could more effectively manage their infrastructure at a time when huge investments are needed. In 2004, for example, GAO cited "comprehensive asset management" as one approach that could help utilities better identify and manage their infrastructure needs. While by no means a panacea for utilities' fundamental fiscal challenges, comprehensive asset management can help water utilities minimize the total cost of designing, acquiring, operating, maintaining, replacing, and disposing of capital assets over their useful lives, while achieving desired service levels.
The KCP facility, NNSA’s primary site for producing or procuring nonnuclear components, is the first site within the nuclear weapons complex scheduled for significant modernization. KCP does not possess weapons-grade nuclear materials, but it supplies approximately 85 percent of the nonnuclear components that compose a typical nuclear weapon—ranging from simple items like nuts and bolts to more complex components, such as radars, arming and firing mechanisms, and critical nuclear safety devices meant to prevent accidental detonation. The facility has a footprint of nearly 3 million square feet and costs about $400 million per year to operate. Currently, about 127,000 square feet of this space is devoted to stored inventory, including production equipment, tooling, gauges, and testers. The production infrastructure of the nuclear weapons complex is aging and becoming increasingly outdated. A 2001 Department of Defense (DOD) review of the nation’s nuclear policy found that the nuclear weapons production infrastructure needed to be repaired and made more flexible so that it could adapt to the changing needs of the nuclear weapons stockpile. Subsequently, NNSA developed the strategic Complex Transformation Plan, which seeks to develop a smaller, more responsive production infrastructure—one that will ultimately support a smaller nuclear weapons stockpile—while continuing to maintain and refurbish the weapons currently in the stockpile. As part of its Complex Transformation Plan, in 2006 NNSA directed KCP to develop plans for modernizing its production facilities. In its plans, KCP identified three key avenues for achieving NNSA’s goals: Increasing outsourcing. KCP is increasing the percentage of nonnuclear components purchased from external suppliers from about 54 to 70 percent.
At the same time, it is consolidating and reducing its external suppliers from about 412 in May 2008 to 320 in September 2009 to reduce the costs of working with and certifying multiple suppliers. As more components are acquired from external suppliers, KCP expects the equipment and infrastructure necessary for the production of many of those components to be eliminated, reducing the need for the large size and associated operating costs of the facility. KCP currently uses domestically based suppliers, with the exception of certain components that are acquired from Malaysia and Mexico. Nearly all of the components and processes that KCP outsources are unclassified; KCP officials told us that they have certified only one supplier approved for classified processing, production, and storage. Transforming its business processes. Honeywell, the contractor that manages KCP, is implementing more commercial-like business practices. In particular, KCP officials note that the plant has recently been granted relief from some DOE and NNSA oversight, such as NNSA nuclear security orders, because it does not possess weapons-grade nuclear material and because commercial standards are being used. KCP officials believe that these changes will lead to more streamlined business processes with lower administrative costs. According to KCP officials, an independent review estimated that these lower costs will amount to $37 million each year—more than one-third of the $100 million annual cost savings KCP projects will result from its modernization plans. Building a modern, more flexible facility. NNSA plans to have a new KCP facility built on an undeveloped site in Kansas City, Missouri.
KCP’s new facility is designed to be smaller and more adaptable than the current facility, allowing quick and economical changes to the capability and capacity of the facility, such as using more open manufacturing space and modular utility systems that can be quickly and inexpensively reconfigured to adapt to changing production needs. NNSA committed to Congress that the new KCP facility would be operating by 2012 but now expects a delay of about 1 year.

Financing the New Facility

To construct this new facility, NNSA identified three financing options. The traditional approach for financing construction projects is to request funding from Congress using a budget line item in the President’s annual request for appropriations. If the requested funds are appropriated, a federal manager directly controls the scope, cost, and schedule of the design and construction of the facility. These projects usually require significant funding up front when the facility is being designed and constructed. However, as we have reported, large amounts of funding have become more difficult to obtain, and agencies are increasingly interested in financing alternatives that distribute costs over longer periods of time. One alternative is to acquire facilities using the General Services Administration’s (GSA) leasing authority, which allows GSA to lease space from a private developer on behalf of many government agencies. For a lease on privately owned land, the process culminates with a lease agreement of up to 20 years. Another alternative, according to KCP officials, allows NNSA to secure financing directly through private developers for the construction of facilities, but this alternative allows only for a maximum 5-year lease term. We have considered federal leasing, in general, to be a challenge for almost 20 years.
In January 2003 we designated federal real property as a high-risk area, citing the government’s overreliance on costly, long-term leasing as one of the major reasons. Our work has also shown that building ownership often costs less than operating leases, especially for long-term space needs. Based on this work, we made recommendations in 2008 that agencies develop strategies to reduce reliance on leased space for long-term needs when ownership would be less costly. Another important consideration in KCP’s modernization plans is avoiding interruptions in the supply of components it produces. Such interruptions could negatively affect the nuclear weapons stockpile and weaken national security. For example, as nuclear weapons components age, they may need to be replaced to avoid undermining the reliability and performance of the weapons in which they are installed. KCP produces or procures these replacement components. Much of KCP’s current workload supports the life extension program for the W76 warhead—carried on the Navy’s Trident II submarine-launched ballistic missile—which is a significant part of the U.S. nuclear weapons stockpile. Life extensions lengthen weapons’ operational life for an additional 20 to 30 years and allow NNSA to certify that the weapons continue to meet military performance requirements without underground nuclear testing. In addition, KCP currently produces or procures nonnuclear components needed to maintain the W88 submarine-launched ballistic missile warheads, the W78 and W87 intercontinental ballistic missile warheads, the W80 cruise missile warhead, and the B61 and B83 nuclear bombs. Another important consideration in KCP’s modernization plans, particularly as it begins to increase outsourcing, is that KCP manages components and technologies that might be attractive to terrorists or other potential adversaries.
Passing components and technologies to external suppliers may put them at greater risk of being obtained and used by potential adversaries to develop or advance their own nuclear capabilities. Recently, the Department of Justice (DOJ) reported that on a daily basis, foreign states as well as criminal and terrorist groups seek arms, technology, and other material to advance their technological capacity, and the United States is a primary target because it produces advanced technologies. For fiscal year 2008, DOJ reported that more than 145 defendants faced criminal charges for attempting to illegally transfer these items and technologies, with roughly 43 percent of these defendants charged with attempting to transfer them to Iran or China. Some of the components KCP produces or procures, as well as technologies that can be developed from obtaining weapons-related design drawings and unique production processes, among other things, may be subject to laws and regulations controlling their export. Export control is primarily managed by the Departments of Commerce and State. The Department of Commerce, through the Export Administration Regulations, controls exports of most dual-use items and technologies. The Department of Commerce maintains the Commerce Control List, which describes the characteristics and capabilities of dual-use items that may require export licenses. The list is divided into 10 general categories of controlled technologies, such as sensors or electronics, which could include components that KCP produces or outsources. The Department of State, under the authority of the Arms Export Control Act and the International Traffic in Arms Regulations, controls exports of munitions items and technologies—those designed, developed, configured, adapted, or modified solely for military applications. These items are identified on the U.S. Munitions List, requiring most to be licensed for export. 
While these two departments are responsible for limiting the possibility of export-controlled items and technologies falling into the wrong hands, NNSA asserts that it is also generally responsible for the management and security of the nation’s nuclear weapons, including ensuring that nuclear weapons components or related information about these technologies are not used to advance the nuclear capabilities of potential adversaries. Many of the actions needed to successfully relocate the KCP facility require the ongoing cooperation of and collaboration with other NNSA laboratory sites. Design engineers at Lawrence Livermore, Los Alamos, and Sandia National Laboratories design the nonnuclear components produced or procured by KCP and determine whether the designs are classified, among other things. KCP produces or procures the components according to laboratory design specifications, but quality and production engineers from KCP continue to periodically collaborate with laboratory design engineers to oversee production, mitigate production risks, and integrate competing priorities, such as cost, schedule, design requirements, and quality. Before KCP begins full procurement or production of some components, laboratory design engineers must formulate a plan to qualify the component and assist in executing the qualification plan, which involves testing a sample of components to ensure that they meet quality, safety, and security standards. Such tests may include visual, environmental, mechanical, and electric tests, among others. In addition, if KCP decides to make major production changes or move the production process, components may have to be requalified to ensure that they still meet quality, safety, and security standards. Requalification can take from 1 month to more than 1 year, depending on the significance or complexity of the part and the extent of the planned production change. 
Requalification may also be required when a product is outsourced or moved from one supplier to another. As such, many components will have to be requalified before they can be produced at KCP’s new facility or by an external supplier. KCP is NNSA’s primary site within the nuclear weapons complex for producing nonnuclear components of nuclear warheads and bombs. Figure 1 illustrates how sites in the nuclear weapons complex interact with each other to design, produce, procure, and assemble nonnuclear components. KCP officials told us that they evaluated several locations and financing alternatives based on their potential for satisfying NNSA’s key goals outlined in its strategic plans for modernizing the overall infrastructure of the nuclear weapons complex. Based on analyses conducted by KCP, NNSA chose to lease a new facility for 20 years. However, KCP compared financing alternatives using cost estimates limited to 20 years rather than the full expected life of the proposed facility; therefore, NNSA cannot be certain whether other financing alternatives might have offered lower costs over the longer term. KCP evaluated several alternatives on behalf of NNSA for modernizing its facility based on each alternative’s potential to satisfy key goals outlined in NNSA’s strategic plans for modernizing the nuclear weapons complex, among other things. Specifically, KCP officials told us that they sought an option that (1) was consistent with NNSA’s goals of maintaining a smaller nuclear weapons production facility that could quickly adapt to change; (2) met schedule commitments to Congress; (3) minimized costs of constructing and annually operating and maintaining the facility; and (4) maximized chances of completing the relocation within the established scope, cost, and schedule.
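One way to structure a comparison of alternatives against criteria like these is a weighted scoring matrix. The sketch below uses the four goals just described, but the weights and 1-to-5 ratings are invented purely for illustration; they are not KCP’s or NNSA’s actual analysis:

```python
# Hypothetical weighted scoring of modernization alternatives against the four
# goals described above. Weights and 1-5 ratings are illustrative only.
criteria_weights = {
    "adaptability": 0.30,    # smaller facility that can quickly adapt to change
    "schedule": 0.25,        # meets schedule commitments to Congress
    "cost": 0.25,            # minimizes construction and annual O&M costs
    "execution_risk": 0.20,  # chance of staying within scope, cost, and schedule
}

alternatives = {
    "take_no_action": {"adaptability": 1, "schedule": 5, "cost": 1, "execution_risk": 5},
    "renovate_gsa":   {"adaptability": 2, "schedule": 2, "cost": 4, "execution_risk": 2},
    "build_new":      {"adaptability": 5, "schedule": 4, "cost": 3, "execution_risk": 4},
}

# Weighted sum of ratings for each alternative.
scores = {alt: sum(criteria_weights[c] * ratings[c] for c in criteria_weights)
          for alt, ratings in alternatives.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))
```

A matrix like this makes the trade-offs explicit (for example, that "take no action" scores well on schedule but poorly on cost and adaptability), though the real analysis also weighed qualitative factors such as flood risk.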
Although KCP conducted the analyses of alternatives for modernizing its facility under the direction of NNSA, NNSA ultimately used these analyses to make its final decisions on how best to proceed. To determine how to proceed with the modernization of its facility, KCP officials stated that they considered (1) taking no action—essentially continuing operations at the current KCP location; (2) renovating adjacent GSA facilities; (3) purchasing or leasing other facilities that were already available in the Kansas City area; or (4) building a new facility on the existing KCP site, on other vacant land within Kansas City, or at another location. KCP officials explained that several of these options proved undesirable for a variety of reasons. Taking no action. KCP determined that taking no action did not align with NNSA’s overall Complex Transformation goals or commitments that NNSA’s Deputy Administrator made to Congress to modernize the facility. According to KCP officials, taking no action would also result in annual operating costs that are about $100 million higher than necessary beyond fiscal year 2013, over half of which would be related to facility maintenance. Renovating adjacent GSA facilities. KCP determined that renovating adjacent GSA facilities was feasible and the least costly alternative in terms of construction costs, but it posed several problems. For example, renovating the 70-year-old facility would require extensive modification of electrical, heating, and cooling systems, which would involve moving, penetrating, or bypassing concrete walls, floors, and ceilings. KCP officials stated that this would be difficult and time-consuming, and the considerable expense would be of questionable worth for such an old facility. Also, the adjacent facility is located in an area susceptible to flooding. Further, KCP officials stated that this option presented schedule risks.
For example, multiple tenants in the GSA facility would need to relocate, and any delays in their relocations could cascade to the renovation process. Purchasing or leasing other available facilities in the Kansas City area. KCP officials stated that they could not identify any other facilities in the region that were adequate for the KCP mission. In light of these constraints, NNSA officials determined that building a new facility was the best option. To identify a specific location for the new facility, KCP officials also told us that they considered nearby sites as well as sites outside of the Kansas City area. In particular, NNSA’s Office of Transformation asked Science Applications International Corporation—a support contractor—to prepare an independent assessment of moving the nonnuclear production facilities from KCP to another site in the nuclear weapons complex. In examining seven other active NNSA sites in the nuclear weapons complex, the study determined that Albuquerque, New Mexico, presented the highest potential for cost savings because Sandia—the primary design laboratory for nonnuclear components—has a location there. However, the study concluded that constructing the new facility in Albuquerque would not allow NNSA to recover the cost of moving the operation—reaching a “break-even” point—by the end of the period considered in the study—about 20 years. The study also determined that relocating operations away from Kansas City would be expensive because staff would have to relocate, new staff would have to be trained, and critical expertise would be lost. Ultimately, based on KCP’s analysis, NNSA decided to build its new facility on an available site in Kansas City that is not on a flood plain, about 8 miles from the existing facility. To determine how to finance the construction of the new facility, KCP analyzed options to determine which best met NNSA’s goals and presented acceptably low risks.
According to KCP officials, these options included (1) using congressional line-item capital project funding, which is DOE’s traditional approach; (2) using a DOE lease process, which secures financing directly through private developers for the construction of facilities, but allows for a maximum 5-year lease term; and (3) using GSA’s leasing process. According to KCP officials, GSA’s leasing process was the best available financing alternative because it was the only financing option that could meet the NNSA Administrator’s commitment to Congress to operate the facility by 2012. GSA’s leasing process also led to the lowest overall total cost over a 20-year period and eliminated the need for large up-front capital outlays that Office of Management and Budget officials said would likely not be available for modernizing KCP. Officials added that a 20-year lease through GSA would cost less annually than a lease undertaken through DOE’s lease process, which allows only a maximum 5-year lease. The GSA lease process defines, among other things, (1) facility requirements; (2) how the facility should be built, such as security during construction; and (3) developer responsibilities for providing facility maintenance services over the life of the lease. For a lease on privately owned land, the process results in an operating lease agreement of up to 20 years—a legal and binding contract between GSA and a developer, with ownership remaining with the developer. KCP estimated that the GSA lease option would cost about $4.762 billion, which includes estimated annual operating costs, onetime relocation costs, capital equipment, and annual lease payments of about $43 million beginning in fiscal year 2011 and continuing through fiscal year 2030.
In contrast, KCP’s total estimated cost for constructing the facility using congressional line-item capital project funding is about $4.875 billion, which includes the same types of costs plus construction costs, but excludes annual lease payments because NNSA would own the facility. KCP determined that DOE third-party financing was not viable because the process was new and unproven and offered only a short-term lease of 5 years, which KCP officials believed would likely result in higher annual lease costs because potential developers would have difficulty obtaining reasonably priced financing for such a short-term lease. As a result, KCP did not develop a total estimated cost based on a 5-year lease using DOE’s process of obtaining third-party financing. Table 1 shows KCP’s comparison of the two options it determined would pose the lowest implementation risk—the GSA lease and the DOE line-item project—revealing that the GSA lease is less costly than line-item funding by over $100 million. However, KCP did not compare alternatives using the total costs over the expected life of the proposed KCP facility—the full life cycle costs; therefore, NNSA cannot be certain whether other alternatives might result in lower longer-term costs. KCP limited its analysis of future costs to 20 years after lease payments begin, consistent with the longest lease term allowed under the GSA option and the longest period to which NNSA was willing to commit under its current KCP relocation approach. However, 20 years is far shorter than the useful life of a production facility that is properly maintained; the current KCP facility has operated for more than 60 years. In addition, although requirements may change in the future, current nuclear weapons production requirements justify the need for KCP manufacturing capabilities for at least another 32 years.
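The sensitivity of this comparison to the 20-year analysis horizon can be seen with simple arithmetic using the figures above. The sketch assumes that beyond year 20 the lease option continues to pay roughly the annual lease amount while the ownership option does not, and that all other annual costs are comparable under both options; this is a simplification, not KCP’s actual life cycle analysis:

```python
# Horizon sensitivity of the lease-vs-own comparison, using KCP's 20-year totals.
# Simplifying assumption (not from KCP's analysis): beyond year 20, the lease
# option keeps paying the annual lease amount while the ownership option does
# not, and all other annual costs are the same under both options.
LEASE_TOTAL_20YR = 4.762e9  # GSA lease option total, FY2011-FY2030
OWN_TOTAL_20YR = 4.875e9    # line-item (ownership) option total, same period
ANNUAL_LEASE = 43e6         # annual lease payment

lease_advantage = OWN_TOTAL_20YR - LEASE_TOTAL_20YR  # ~$113 million at year 20
extra_years = lease_advantage / ANNUAL_LEASE         # years for advantage to erode
break_even_year = 20 + extra_years
print(round(break_even_year, 1))
```

Under these assumptions the lease’s roughly $113 million advantage erodes within about 3 additional years of lease payments, so ownership becomes cheaper a little past year 22 of occupancy.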
Although leasing a facility for 20 years through GSA is less costly than leasing over shorter-term periods, leasing is usually more costly over the long term than constructing and owning a facility. NNSA and KCP officials acknowledged that while leasing a facility through GSA under a 20-year scenario is less costly than a line-item project, it can be more costly over a longer-term scenario—possibly beginning about 22 years into the lease. In early 2009, DOE’s Office of Cost Analysis reviewed KCP’s relocation project. Although ultimately supporting NNSA’s decision to lease a new facility, DOE’s review found KCP’s cost analysis to be biased toward the leasing option. DOE’s review noted that while leasing is more affordable up front, it is more costly over time, particularly since the government tends to occupy facilities for long periods of time and must pay relocation costs when the lease terms expire or negotiate new leases and continue making lease payments. Because KCP’s cost analysis was limited to 20 years, it excluded both of these costs; a full life cycle cost analysis would have included both relocation and continuing lease costs. KCP officials stated that limiting their analyses to 20 years is appropriate and is consistent with NNSA’s overall approach for KCP’s transformation. As part of its goals to develop a more adaptable nuclear infrastructure, NNSA determined that 20 years is the longest period for which it would be willing to commit under the current KCP relocation approach. NNSA officials stated that it is conceivable that the nation’s entire nuclear stockpile, its nuclear strategy, or both could be obsolete by 2030 and a new strategy would apply. However, NNSA’s weapons program manager for the W76 and W88 told us that current nuclear weapons production requirements for these two warhead types justify the need for KCP manufacturing capabilities until at least 2042.
He added that since some threat to the United States will always exist, a new project will likely replace the W76 and W88 if they are ever taken out of service, thus justifying the need for KCP’s manufacturing capabilities even beyond 2042. KCP is initiating several key actions to help ensure that components are produced without delay or interruption, such as producing components before the move to compensate for periods when production will be halted and coordinating with design laboratories that will help to requalify equipment after the move. However, KCP’s relocation schedule—which is critical to ensuring that the move does not disrupt production—did not initially adhere to all of GAO’s best practices for schedule development. While KCP officials have taken steps to address some of these problems, the schedule still has some shortcomings. In preparation for KCP’s 18-month move to its new facility, KCP officials have developed plans to ensure that the plant can continue to provide components for the nuclear weapons stockpile as scheduled. In 2007, KCP hired a professional moving company to develop a high-level strategy to minimize the duration, costs, and disruptions associated with the move. This strategy included major milestones and estimated time frames for moving each department within KCP. Based on these estimates, KCP has begun to produce components that it will store or deliver before and during the move to ensure that it can meet delivery requirements when production is halted to move, set up, and requalify equipment at the new facility. KCP officials conducted long-range planning to determine the demand for components through 2016, which helped officials estimate the number of components to produce in advance. As of June 2009, KCP officials stated that they are largely on schedule for producing these additional components.
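The advance-production planning described above amounts to a build-ahead calculation for each component line. The numbers below are hypothetical, since KCP’s per-component demand figures are not public; they are meant only to show the shape of the estimate:

```python
import math

# Illustrative build-ahead calculation for a single component line. The demand,
# halt duration, and scrap margin are hypothetical; KCP's actual figures come
# from its long-range demand planning through 2016.
monthly_demand = 120  # units delivered per month (hypothetical)
halt_months = 6       # months this line is down for the move and requalification
scrap_margin = 1.15   # allowance for scrap and destructive testing (hypothetical)

# Units to produce and store before the move so deliveries continue on schedule.
build_ahead = math.ceil(monthly_demand * halt_months * scrap_margin)
print(build_ahead)
```

In practice the halt duration varies by department, so a calculation like this would be run per production line against the move milestones in the relocation strategy.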
Moreover, KCP has established a formal program to capture and preserve information about certain production processes and ensure that production capabilities are not lost. While KCP does not plan to record information about all processes, officials developed more than 60 step-by-step videos, overview videos, and notes from subject matter experts. This knowledge preservation program focuses on processes that are difficult to develop or involve key personnel who are retiring or leaving through other forms of attrition, as well as processes that KCP uses infrequently or plans to outsource. These efforts are designed to allow KCP to transfer knowledge and resume internal production of outsourced components, if necessary. Our March 2009 report on NNSA’s stockpile life extension program illustrates the importance of maintaining such information. Specifically, at another NNSA site, we found that officials no longer knew how to manufacture a key material needed to refurbish the W76 warhead because the site had kept few records of the process when the material was made in the 1980s, and almost all staff with expertise on its production had retired or left the agency. NNSA’s efforts to address this information gap resulted in $69 million in cost overruns and a schedule delay of at least 1 year that presented significant logistical challenges for the Navy. KCP’s knowledge preservation program should help avoid similar problems. In addition, KCP has developed a strategy to estimate the cost and time needed to requalify production and testing equipment after the move, which will help to ensure that the equipment continues to produce components of the same quality as before the move. In particular, KCP officials have identified all equipment that they believe will need to be requalified and determined how the move will affect this equipment, which in turn will affect how extensive requalification efforts will need to be.
For example, officials assessed how the production process will change as a result of KCP purchasing new equipment or outsourcing production. Changes in environment, such as temperature and humidity levels, could also affect equipment and production. KCP officials also estimated how long requalification will likely take, based on previous requalification efforts, and have been meeting with design laboratories since July 2006 to plan and budget requalification efforts and to communicate overall plans for KCP’s transition. However, officials at Sandia stated that they are concerned that they may not have sufficient funds to assist with requalification activities within KCP’s scheduled time frames. KCP officials estimate that requalification activities will cost KCP about $20 million, while Sandia estimates that its support of these activities will cost an additional $40 million for fiscal years 2008 through 2013. In fiscal year 2009, the design laboratories’ budgets did not include funding for requalification at KCP; the laboratories have requested funding for fiscal year 2010. KCP officials acknowledge that if funding is not available, requalification efforts will be delayed, which would significantly delay KCP’s production schedule. Nevertheless, KCP believes that NNSA is committed to the KCP transition and will provide adequate funding to the design laboratories to support requalification. Accordingly, KCP is continuing to coordinate with the laboratories to estimate requalification needs. KCP officials have also made plans to provide additional capability and capacity at the new facility to produce components that implement new technology or to reestablish the production of outsourced components if necessary. Specifically, KCP designed the facility so that it can add second and third work shifts, which may allow it to increase production of some components if needed.
KCP has also dedicated about 10 percent of the facility’s square footage—about 100,000 of its 1 million total square feet—to unused space that can be quickly and cost-effectively converted for new capabilities or expansion of existing ones. KCP officials stated that they designed the new facility with a more open manufacturing space and modular utility systems so that it can be quickly and inexpensively reconfigured to adapt to changing production needs. In contrast, reconfiguring KCP’s current facility would require extensive modification of electrical, heating, and cooling systems, including moving, penetrating, or bypassing concrete walls, floors, and ceilings. KCP has also retained the capability to produce certain components that it currently outsources, which will allow it to reverse the decision to outsource those components and more quickly resume production internally if necessary. For other components, however, KCP officials have determined that there are many private suppliers with similar production capabilities. As a result, KCP will not retain the ability to produce these components or execute such processes. KCP plans to reduce the size of its available stored inventory space from nearly 300,000 square feet to 60,000 square feet—a total reduction of about 240,000 square feet, or about 80 percent. This will be accomplished by higher-density storage and disposition of obsolete and surplus inventory. KCP officials are currently identifying surplus inventory, which they define as items that have not been used in the last 2 years or have no demand anticipated in the next 10 years. As of February 2009, KCP had identified from 8,000 to 9,000 parts as surplus inventory. Sandia design engineers are concerned that KCP may discard critical equipment that could be expensive and difficult to re-create if it were needed again in the future.
However, according to KCP officials, most equipment stored at KCP is so outdated that it would cost more to repair the equipment than the equipment is worth. Moreover, KCP officials said that they have consulted periodically with design engineers as part of the review process and that before disposal of items is authorized, NNSA will distribute a list of excess items to all nuclear weapons complex sites to confirm that KCP does not need to retain these items. KCP officials said that they plan to continue to coordinate with the design laboratories as they reduce inventory. As part of KCP’s plans to ensure a smooth transition to its new facility, KCP officials are working to develop a comprehensive project schedule that details when relocation activities will occur, how long they will take, and how they are interrelated. The schedule provides a road map for the move and a means for gauging progress and identifying potential problems. KCP officials stated that they have not established a formal baseline of the schedule because the construction portion of the schedule is not firm. We assessed KCP’s initial schedule in February 2009 and found that KCP did not fully adhere to GAO-identified best practices for schedule development. We assessed its revised schedule in July 2009 and found that KCP officials have taken steps to address some of the problems identified in our initial review, but that the schedule still has some shortcomings. 
We assessed KCP’s relocation schedule based on the nine best practices we have identified for effective schedule estimating: (1) capturing key activities, (2) sequencing key activities, (3) assigning resources to key activities, (4) establishing the duration of key activities, (5) integrating key activities horizontally and vertically, (6) establishing the critical path for key activities, (7) identifying “float time”—the time that activities can slip before the delay affects the completion date, (8) performing a risk analysis of the schedule, and (9) updating the schedule using logic and durations to determine dates. Most of these practices are also identified by DOE in recent guidance on establishing performance baselines. Appendix II contains more details on GAO’s best practices for scheduling and a description of our assessments. Our assessment of KCP’s February 2009 schedule revealed that KCP did not meet three of these best practices and only partially met five. For example, we found that KCP’s schedule did not reflect resources—such as labor, material, or overhead—required to complete each activity, which is important for determining the feasibility of the schedule based on available resources. Further, KCP officials told us that they did not intend to conduct a risk analysis of the schedule, which, according to best practices, is important to predict the level of confidence in meeting a program’s completion date and to identify high-priority risks. In addition, our assessment revealed that KCP established excessively long time frames for some very broad activities—275 activities had durations of more than 200 days. According to best practices, activity durations should be as short as possible. In April 2009, we provided KCP and NNSA officials with our assessment of the February schedule.
Although KCP officials provided additional context about their particular schedule situation, they acknowledged that the pre-baselined schedule was not yet complete and expressed an intention to work toward ensuring that the relocation schedule better conforms to GAO-identified best scheduling practices. For example, KCP officials acknowledged that they did not assign resources to activities in the schedule as suggested by GAO best practices, but explained that they planned to assign resources to certain activities in the schedule as well as track resources using other management systems as they complete a more detailed relocation plan in fiscal year 2011. KCP officials also explained that scheduling is time-intensive, and that the schedule is updated and improved daily. For example, KCP officials told us that they are in the process of reducing the number of activities with excessively long durations by, among other things, splitting longer-duration activities into more detailed and shorter tasks as more information becomes available. KCP officials further explained that most of the activities with long durations are well into the future and cannot be accurately broken into smaller segments until some near-term activities are completed. In addition, KCP officials explained that although they have not performed a formal risk analysis on the schedule, they use alternative methods to identify and reduce scheduling risks. For example, KCP officials told us that scheduling officials consult subject matter experts to provide estimates for the duration of activities, which they believe will be successfully executed within those time frames. Further, KCP officials asserted that they are monitoring schedule risks at the project and program levels through a separate database.
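Two of the scheduling practices at issue—establishing the critical path and identifying float—can be illustrated with a minimal forward/backward-pass computation. The activity network below is hypothetical and is not drawn from KCP’s actual relocation schedule:

```python
# Minimal critical-path and float computation over a hypothetical activity
# network (durations in working days). Activities are illustrative only,
# not taken from KCP's relocation schedule.
activities = {  # name: (duration, predecessors); listed in topological order
    "award_lease":    (30, []),
    "construct":      (200, ["award_lease"]),
    "move_equipment": (60, ["construct"]),
    "requalify":      (90, ["move_equipment"]),
    "train_staff":    (45, ["award_lease"]),
}

# Forward pass: earliest start (es) and earliest finish (ef) for each activity.
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur
project_end = max(ef.values())

# Backward pass: latest finish (lf) and latest start (ls), iterating in
# reverse topological order so every successor is processed first.
ls, lf = {}, {}
for name in reversed(list(activities)):
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_end)
    ls[name] = lf[name] - activities[name][0]

# Float: how far an activity can slip without delaying project completion.
total_float = {name: ls[name] - es[name] for name in activities}
critical_path = [name for name in activities if total_float[name] == 0]
print(project_end, critical_path, total_float["train_staff"])
```

In this toy network, the lease-award/construction/move/requalification chain has zero float (any slip delays completion), while staff training can slip substantially; schedule risk analysis would then layer duration uncertainty on top of exactly this computation.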
In our review of KCP’s revised schedule in July, we found that KCP had taken steps to address some of the problems we identified; however, the schedule still does not fully adhere to GAO’s best practices. Specifically, KCP improved the schedule in several areas. For example, KCP’s February schedule did not fully define some key activities on the critical path—the path of work that must be completed as planned if the projected completion date is to be met. To correct this, KCP’s July schedule included additional information on the lease award and other activities on the critical path that more realistically depicts KCP’s overall expected completion date for relocation—October 2013, about a 1-year delay from NNSA’s original commitment to Congress. However, a few practices that KCP’s initial schedule either did not meet or only partially met did not significantly improve. For example, although KCP officials monitor schedule risks in a separate database, they do not plan to conduct a risk analysis using statistical analysis techniques as suggested by GAO-identified best practices. Table 2 summarizes the progress KCP made from February through July 2009. The timely implementation of KCP’s relocation schedule is critical to ensure that the relocation occurs on time and does not risk disruption of component production. In particular, the relocation is scheduled to occur during a large production run for the W76 life extension program, which began 2 years ago and is scheduled to last at least 10 more years. An NNSA W76 program manager stated that the relocation was planned without substantial input from him and that KCP may have missed opportunities to reduce risks associated with the relocation. For example, if officials had delayed the relocation by 2 years as he would have recommended, KCP could have reduced potential disruptions to the life extension program for the W76 nuclear warhead.
Moreover, the program manager stated that any schedule delays during the relocation will likely cascade to an already tight production and delivery schedule. KCP has begun taking steps to address outsourcing risks, such as potential interruptions to supply sources; unanticipated price increases; and quality assurance problems, including counterfeiting and sabotage. However, KCP lacks a formal, risk-based approach to identifying and mitigating risks posed by components and technologies, including weapons-related design drawings; unique production processes; and information that, although mostly unclassified, could be used by adversaries to develop or advance their nuclear capabilities. Sandia design engineers we interviewed identified several general risks of outsourcing that could jeopardize the quality or safety of nuclear weapons or affect KCP’s schedule or costs, and KCP has begun taking steps that seek to mitigate many of those risks. Specifically: Loss of a supplier. Sandia officials stated that relying on one supplier to produce a particular component can be risky: if a supplier can no longer produce components for KCP—because of business failure, loss of interest in working with KCP, a natural disaster, or other reasons—production may be delayed while KCP identifies an alternative supplier or reestablishes production capabilities on-site. To mitigate this risk, KCP is developing a pool of capable suppliers for outsourced components so that it can quickly move production to another qualified supplier, if necessary. For example, when KCP officials decided to outsource a plating process—the process of coating electrical and mechanical products to improve their mechanical properties and protect against corrosion—they identified 1 primary supplier and 4 backup suppliers out of a potential pool of more than 2,000 suppliers that could be called upon if the primary supplier could no longer meet KCP’s needs.
KCP officials told us that they also review potential suppliers’ financial stability and eliminate those companies with financial concerns from consideration. Price increases. According to Sandia officials, suppliers could increase their prices, which could cause an unanticipated increase in KCP’s manufacturing costs. To mitigate this risk, KCP officials told us that they include cost thresholds in their long-term purchase agreements and validate the reasonableness of the component price by comparing it with the prices charged by direct competitors. Quality assurance problems—including counterfeiting and sabotage. Sandia officials stated that KCP is likely to have less direct control over outsourced production processes, which could lead to quality assurance problems, including an increased risk of counterfeiting and sabotage. KCP has, on occasion, experienced poor-quality results from suppliers, which have required rework or changes in suppliers. Sandia officials also stated that they are increasingly concerned about the potential for KCP to unintentionally purchase counterfeit parts. For example, an expansive black market exists for some microelectronics, particularly in Southeast Asia. Sandia officials stated that counterfeit parts are becoming increasingly sophisticated, thereby requiring more expertise to detect. Sandia officials also stated that suppliers may sabotage a component to undermine the reliability of a nuclear weapon. To mitigate these risks, KCP officials, sometimes accompanied by design engineers, have conducted periodic quality reviews, including scheduled and unannounced visits to some suppliers’ production sites. According to KCP officials, the frequency and type of these reviews depend on, among other things, the components’ degree of customization and the ease of inspection—in some cases, components must be destroyed while undergoing inspection, which is known as destructive testing.
KCP officials have reportedly observed some suppliers’ production processes and overall quality of operations to verify that suppliers adhere to industry standards and follow proper production techniques, such as using appropriate levels of electrical voltage when manufacturing certain components. KCP officials also have tested components for problems with quality, including counterfeiting and sabotage. However, Sandia officials stated that testing might not always effectively reveal counterfeit parts or attempted sabotage. Although KCP officials said they do not outsource components that have the potential for sabotage that their tests cannot detect, KCP’s efforts to restrict outsourcing of these components are not infallible. Although KCP’s outsourcing process considers the security of a component and there is no evidence that sabotage has occurred in any components KCP has procured, the process lacks criteria and steps for determining and mitigating the risk of a component being counterfeited or sabotaged—a crucial feature of an effective risk-based approach. KCP officials have previously outsourced highly customized and preassembled components that cannot be easily inspected, potentially increasing the chance of counterfeited or sabotaged components going undetected. KCP has not implemented a systematic review process to identify specific components, technologies, and information that, although not classified national security information, are subject to export controls and could be used to advance the nuclear capabilities of adversaries. Although DOE guidance states that KCP should conduct a review to identify the components, technology, and information that could potentially advance the nuclear capabilities of potential adversaries, KCP and NNSA’s site office have not conducted such a review.
KCP and NNSA officials stated that they have not conducted such a review because NNSA’s current interpretation of export control regulations is that all components used in nuclear weapons should be considered subject to the regulations. Even if this were not the case, the officials stated, making individual export control determinations for each of the many components produced or outsourced at KCP would be difficult and time-consuming, given what the officials perceive as a lack of clarity in the regulations, and would add little value to their current approach. Specifically, DOE issued guidance in 1999 to help DOE and its contractors implement a consistent policy regarding transfers of unclassified equipment, materials, and technology that could adversely affect U.S. security or lead to the proliferation of weapons of mass destruction. This guidance specifies the need for an export control review to identify, among other things, such equipment, materials, and technology that could pose proliferation risks. NNSA officials stated that although these guidelines are not requirements, they would be appropriate for KCP to use in its outsourcing decisions. Furthermore, the DOE guidance states that the NNSA site office manager at KCP is responsible for ensuring that KCP performs export control reviews. As outsourcing increases and additional individuals gain access to nuclear weapons design and production information, potential adversaries could gain access to information that could be used to advance their own nuclear capabilities. KCP officials estimate that about 10 percent of the components KCP produces or procures would likely be considered high risk if a program of review existed, and acknowledged that they have not conducted a review to systematically evaluate the level of risk for each component.
Instead, KCP officials stated that they treat each component and the associated design information as if they pose equal proliferation risks and are subject to International Traffic in Arms Regulations—the regulations controlling the exports of munitions items and technologies. As such, items that pose little apparent risk of contributing to potential adversaries’ development of nuclear weapons, such as a commercially available screw, are considered to be the same level of risk as complex components, such as a mechanism designed to arm nuclear weapons. As a precautionary measure, KCP officials stated that they produce and assemble most of the more complex and higher-level components in-house, reserving outsourcing for components that are more commercially available, less complex, and at lower stages of assembly. Nevertheless, we observed that KCP officials currently outsource the production and assembly of several components that they determined to be of higher complexity; officials assumed these components were subject to export control requirements but did not conduct a systematic assessment of the components’ actual proliferation risk. Without a systematic review process to identify which components and technologies—including weapons-related design drawings, unique production processes, and other information—pose greater threats, KCP may be missing opportunities to restrict certain outsourcing activities and mitigate the risk associated with sharing critical information that could be used to develop or enhance an adversary’s nuclear weapon capabilities. Furthermore, KCP’s primary export control measure rests on its suppliers’ compliance with a contract clause outlining their responsibility to abide by export control laws and safeguard nuclear weapon component production and design information.
The contract clause informs external suppliers of the potential applicability of export regulations and notifies suppliers that they must report any information that may require an export license or other forms of approval. For example, KCP outlines expectations for its suppliers, including to (1) disclose their intent to export a component or hire foreign nationals who might be exposed to the component or its design-related information and (2) fully comply with all export control laws and regulations. In some instances, self-reporting has allowed KCP to identify and mitigate a risk. For example, when one of KCP’s domestic suppliers moved its operations to Mexico, KCP officials were faced with the decision whether to switch suppliers or retain the now foreign-based supplier. To mitigate concerns about working with a foreign-based company, KCP officials told us that they reevaluated the design of the component and decided to continue purchasing less-sensitive parts of the component—such as a type of connector—from that supplier, but found another domestic-based supplier to produce other, more sensitive parts of the component—such as a particular type of cable. KCP officials told us that they took these actions after the supplier reported its relocation plans to KCP, as required in its contract. However, according to Sandia officials, supplier self-reporting has not always been a reliable approach. In another recent case, Sandia learned that a supplier was foreign owned only after it had already procured parts from that supplier, which led to additional costs, schedule delays, and other problems that eventually forced Sandia to produce the component in-house. DOE and NNSA lack clear and up-to-date export control guidance.
As a result, NNSA has not clearly communicated to KCP its expectations of what a systematic and consistent export control review process should include, or ensured that the specific components, technologies, and information that could be used to advance the nuclear capabilities of potential adversaries are identified. For example, NNSA officials that we spoke with noted that DOE’s 1999 export control guidance is outdated. In 2005, NNSA officials determined that DOE’s guidance needed to be updated, but the guidance revision was never completed. In addition, because the export control guidance is not tailored for NNSA production and laboratory sites, NNSA lacks firm criteria for conducting oversight of export control activities. KCP officials further explained that DOE’s guidance is not helpful in interpreting the Commerce Control List, which is made up of broad categories that are not always specific to nuclear weapons technologies. For example, the Commerce Control List identifies sensors as a controlled technology; however, according to KCP officials, several items may fit that category, including items that could be used by potential adversaries to advance their own nuclear weapon capabilities as well as those that would not pose such a threat, such as a simple thermometer or rain gauge. Further, KCP officials stated that DOE’s guidance does not clearly define laboratories’ and production sites’ responsibilities or provide a clear determination of who is responsible for identifying the components that are subject to export control. In particular, both KCP officials and design engineers told us that it is unclear whether KCP or the laboratories should determine the level of risk and how that risk should be communicated, such as how each design drawing should be labeled. 
KCP officials suggested that if design engineers identified the portions of the design drawing that may require more careful export control consideration, it would help them determine effective export risk mitigation steps. One KCP official stated that there is also considerable risk in asserting that a component is not subject to International Traffic in Arms Regulations, especially given the sensitivity and risk-averse nature of the nuclear weapons community. As a result, KCP has defaulted to treating all components as being of equal risk and subject to these regulations and has taken no specific actions to identify and mitigate the greatest risks. KCP has made substantial progress toward achieving NNSA’s overall goals to modernize its nonnuclear production facility and ensure continued production of quality components essential to maintaining the U.S. nuclear weapons stockpile. However, shortcomings in NNSA’s oversight of KCP’s relocation may offer lessons for future modernization efforts at its nuclear weapons facilities. In particular, NNSA allowed KCP to limit its cost analysis to a 20-year life cycle that has no relationship with known requirements of the nuclear weapons stockpile or the useful life of a production facility that is properly maintained, and did not require that KCP consider the full useful life of the facility in its analysis. As a result, NNSA’s financing decisions were not as fully informed and transparent as they could have been. If KCP had quantified potential cost savings to be realized over the longer useful life of the facility, NNSA may have made a different decision. Further, because NNSA has not ensured that KCP’s relocation schedule fully complies with DOE schedule development guidance and GAO-identified scheduling best practices, there is a potential for delays. 
A delay in KCP’s relocation could affect the timely delivery of replacement components needed to maintain a reliable nuclear weapons stockpile, which, in turn, could have a detrimental effect on national security. Moreover, DOE and NNSA lack clear and up-to-date export control guidance that articulates NNSA’s expectations of what a systematic and consistent export control review process should include. Because of this, KCP is not required to take—and therefore has not taken—proactive steps to identify specific components, technologies, and information that could be used to advance the nuclear capabilities of potential adversaries. Furthermore, without export control requirements that are designed specifically to meet NNSA production and nuclear weapons design laboratory needs, and an effective mechanism for ensuring enforcement of these requirements within NNSA, NNSA site offices are less able to (1) mitigate the risks associated with outsourcing components and (2) exercise effective oversight. We recommend that the Secretary of Energy take the following five actions to strengthen NNSA’s oversight practices and current and future facility modernization efforts. To improve the transparency and usefulness of cost analyses prepared for future NNSA nuclear facilities modernization projects, we recommend that the Secretary of Energy direct the Administrator of NNSA to ensure that life cycle cost analyses include a thorough and balanced evaluation of short- and long-term construction and financing alternatives. Such analyses should consider the full useful life of the facility rather than the 20-year requirement for GSA leases or any predetermined length of time that might produce results that favor one option over another. 
To better manage the KCP relocation schedule, we recommend that the Secretary of Energy direct the Administrator of NNSA to ensure that KCP’s operating contractor revise the KCP relocation schedule so that it is consistent with DOE schedule development guidance and GAO-identified scheduling best practices, as outlined in appendix II. Because of the importance of mitigating the risks of outsourcing nuclear weapons components and other information that, if exported, might allow potential adversaries to develop or advance their nuclear capabilities, we also recommend that the Secretary of Energy direct the Administrator of NNSA to take immediate action to:
Assess the effectiveness of NNSA’s oversight of KCP’s current export control and nonproliferation practices and, if appropriate, initiate corrective actions to strengthen that oversight.
In collaboration with the Departments of State and Commerce, replace or supplement DOE’s July 1999 Guidelines on Export Control and Nonproliferation with guidelines, or another form of directive as deemed appropriate by the agencies, that (1) clarify expectations for export control reviews to specifically meet NNSA production and nuclear weapon design laboratory needs and (2) contain an effective mechanism for ensuring enforcement of these export control guidelines within NNSA.
Direct the KCP operating contractor to develop and implement a formal risk-based review process in cooperation with the nuclear weapons design laboratories that (1) identifies specific components, technologies, production processes, and related information that, if exported, might allow potential adversaries to develop or advance their nuclear capabilities and (2) includes steps for mitigating these risks, particularly for considering whether or how to outsource these items.
We provided NNSA with a draft of this report for its review and comment.
In its written comments, NNSA states that our review was thorough and that we appropriately recognized NNSA’s progress toward achieving the overall goals to modernize its production facility. NNSA also provided additional information on its overall approach for modernizing the KCP facility. NNSA generally agreed with our five recommendations and outlined some initial actions that it expects to take to address each of them. NNSA provided its most substantive comments on our findings and recommendations concerning export control. Specifically, although agreeing with our three export control recommendations, NNSA stated that it will delay action on them until an export control working group that it created in July 2009 completes its analysis of export licensing and other related issues. While we believe NNSA’s formation of a working group to study these export control issues is a positive first step toward improving its export control practices, it is important that NNSA not unduly delay taking action to mitigate nuclear proliferation risks associated with outsourcing nuclear weapons components and information. Regarding our finding that KCP lacks a formal, risk-based approach to safeguard components and technology that could be used by potential adversaries, NNSA commented that KCP officials do not feel that additional outsourcing increases risk or that a more rigorous review would necessarily lead to different outsourcing decisions. However, as our draft report noted, without knowing which components pose the greatest risks, NNSA cannot be certain that it is focusing its efforts to safeguard the highest-risk components and technologies in the most effective manner. With regard to our recommendation that NNSA assess the effectiveness of its oversight of KCP’s current export control and nonproliferation practices, NNSA responded that the correct export control requirements are being applied through its management and operating contract for KCP. 
Specifically, the management and operating contractor (Honeywell) uses standard export compliance clauses in supplier purchasing agreements to put suppliers on notice as to the requirements applicable to them. However, in our view, simply relying on the use of such clauses is not oversight. Because NNSA has the primary responsibility of preventing the proliferation of nuclear weapons, it is important that NNSA consider adopting a risk-based approach that could enhance existing export control requirements. NNSA’s comments are reprinted in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Energy, State, and Commerce; the Administrator of NNSA; and the Director, Office of Management and Budget. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to determine (1) how the Kansas City Plant (KCP) developed plans for modernizing its facility, (2) actions KCP has taken to ensure uninterrupted production of components needed to support the nuclear weapons stockpile during and after the transition to the new facility, and (3) actions KCP has taken to address the risks and potential consequences of increased outsourcing of nonnuclear components.
To determine how KCP developed plans for modernizing the facility, we reviewed documents from the National Nuclear Security Administration (NNSA) and Honeywell Federal Manufacturing and Technologies (Honeywell), which manages and operates the KCP facility for NNSA, that describe the project’s goals, approach, and rationale for key decisions on relocating and outsourcing the production of nonnuclear components. We also interviewed officials at NNSA’s Kansas City Site Office, Honeywell, relevant subcontractors, and component design laboratories about KCP’s relocation plans, approach, and time frames, including how the relocation might affect continued production of high-quality components and the risks posed by the current approach. Under our long-standing policy of not addressing issues in ongoing litigation, we did not evaluate KCP’s analysis of relocation alternatives because a lawsuit was filed in October 2008 that, among other things, challenged the extent and adequacy of the Department of Energy’s (DOE) consideration of alternatives to its plans for replacing the KCP facility. To determine the actions KCP has taken to ensure uninterrupted production of components needed to support the nuclear weapons stockpile during and after the transition to the new facility, we reviewed agency and contractor documents describing transition plans. We talked to officials at Sandia National Laboratories (Sandia) in New Mexico—which designs the nonnuclear components that are produced at KCP—about the impact of KCP’s plans on the quality, reliability, and future support of the nuclear weapons stockpile. We evaluated the reliability of KCP’s relocation schedule to determine the extent to which it captures key activities, is correctly sequenced, establishes the duration of key activities, is integrated, and has an established, reliable critical path, among other things.
We conducted an initial assessment in February 2009, and conducted a second assessment in July 2009 to evaluate the extent to which the schedule improved over time. We based our assessment on GAO-identified best practices associated with effective schedule estimating, many of which are also identified by DOE in its guidance on establishing performance baselines. To assess KCP’s schedule, we consulted with a scheduling expert and interviewed key program officials responsible for developing this schedule. To determine the actions KCP has taken to address the risks and potential consequences of increased outsourcing of nonnuclear components, we reviewed agency and contractor documents, including KCP’s outsourcing strategy and export control process. We also reviewed DOE’s Export Control and Nonproliferation Guidelines, as well as relevant export control laws and regulations. In addition, we interviewed key KCP and Sandia officials to understand potential risks associated with outsourcing and KCP’s approach for mitigating these risks, including nuclear proliferation risks. We met with NNSA site office officials responsible for overseeing KCP nuclear nonproliferation activities, and headquarters officials that provide guidance and nonproliferation expertise to site offices across NNSA. We conducted this performance audit from November 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The schedule should reflect all activities as defined in the program’s work breakdown structure, to include activities to be performed by both the government and its contractors. 
Criteria not evaluated: The schedule appears to contain most necessary activities; however, we were unable to verify whether all activities were included because of incomplete data and the need to clarify details on the data we received.
Criteria partially met: The schedule appears to contain most necessary activities; however, a supplemental dictionary that defines activities did not include sufficient detail for us to conclude that the schedule includes all the activities to be performed. The July schedule has a total of 5,592 activities.
Sequencing all activities
The schedule should be planned so that it can meet critical program dates. To meet this objective, key activities need to be logically sequenced in the order in which they are to be carried out. In particular, activities that must finish before the start of other activities (i.e., predecessor activities) as well as activities that cannot begin until other activities are completed (i.e., successor activities) should be identified. By doing so, interdependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress.
Criteria partially met: KCP’s use of constrained tasks has been reduced but not eliminated.
Specifically, we found that since February 2009 KCP reduced the extent to which tasks were constrained: the number of tasks with early start constraints—that is, a “start no earlier than” date—was reduced from 212 in February to 144 in the July schedule; the number of activities with no successor activities was reduced from 2,121 to 200; the number of activities with no predecessor activities was reduced from 352 to 21; the number of lags—the duration between activities that delay successor activities—was reduced from 102 to 21, although some lags are still excessively long, ranging from 240 to 422 days; and the number of tasks with negative lags—which allow the start or finish of a successor activity to occur earlier than the start or finish of a predecessor activity—was reduced from 850 to 20.
The schedule should reflect what resources (i.e., labor, material, and overhead) are needed to do the work, whether all required resources will be available when they are needed, and whether any funding or time constraints exist.
Criteria not met: The schedule did not include resources; therefore, it is not clear that this schedule is feasible.
Criteria not met: KCP’s July schedule still does not include resources. Although the agency asserted that it has other systems to track resource use and to determine future resource needs, according to scheduling best practices, resources should be included in the schedule.
The schedule should realistically reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, data, and assumptions used for cost estimating should be used. Further, these durations should be as short as possible and they should have specific start and end dates.
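The sequencing problems tallied above—date-constrained tasks, activities with no predecessors or successors, and negative lags—can be flagged mechanically from a schedule export. The following is a minimal sketch, assuming a hypothetical in-memory activity record; real schedules would be exported from a scheduling tool, and the field names here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    # Hypothetical record layout, not a real scheduling-tool format.
    name: str
    duration: int                          # working days
    predecessors: list = field(default_factory=list)
    successors: list = field(default_factory=list)
    lag: int = 0                           # lag (days) on the link to successors
    early_start_constraint: bool = False   # "start no earlier than" date applied

def schedule_health(activities):
    """Count the kinds of sequencing problems described above:
    constrained tasks, dangling logic, and negative lags."""
    return {
        "early_start_constraints": sum(a.early_start_constraint for a in activities),
        "no_successors": sum(not a.successors for a in activities),
        "no_predecessors": sum(not a.predecessors for a in activities),
        "negative_lags": sum(a.lag < 0 for a in activities),
    }

acts = [
    Activity("design", 30, successors=["build"]),
    Activity("build", 120, predecessors=["design"], lag=-10),
    Activity("move-in", 60, early_start_constraint=True),
]
print(schedule_health(acts))
# {'early_start_constraints': 1, 'no_successors': 2, 'no_predecessors': 2, 'negative_lags': 1}
```

A report such as this only counts symptoms; whether a given dangling activity is a genuine logic gap (rather than, say, a start or finish milestone) still requires review by the scheduler.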
Excessively long periods needed to execute an activity should prompt further decomposition of the activity so that shorter execution durations will result.
Criteria partially met: The schedule included 138 activities with over 260 days’ duration, which is approximately 1 year given a 5-day calendar. In addition, we found that KCP included 275 activities over 200 days in length. It is difficult to manage activities of this length and to know if these are realistic durations or how they were determined. According to scheduling best practices, durations should be as short as possible.
Criteria mostly met: Activity durations have been reduced, although some still remain long. For example, KCP has since reduced the number of activities with over 260 days’ duration from 138 in February to 57 activities in its July schedule. In addition, KCP has reduced the number of activities over 200 days in length from 275 in February to 204 in its July schedule. According to KCP officials, most of the long-duration activities are well into the future and cannot be accurately decomposed until some near-term planning activities are completed.
The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with already sequenced activities. These links are commonly referred to as handoffs and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels enables different groups to work to the same master schedule.
Criteria partially met: The schedule is not horizontally integrated. There are excessive instances of incomplete logic where activities have no successors. In addition, there is no evidence that the schedule is vertically traceable to other levels of activities, supporting tasks, or subtasks.
Criteria mostly met: KCP has made significant progress; however, because of the continued use of constraints, lags, and incomplete logic, the schedule is still not fully horizontally integrated. KCP has demonstrated that the schedule is vertically integrated with supporting tasks and subtasks through an external document.
Using scheduling software, the critical path—the longest duration path through the sequenced list of key activities—should be identified. The establishment of a program’s critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that may occur on or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities.
Criteria partially met: The critical path appears to be logical; however, with all of the other incomplete logic, as well as a large float value—the time that a predecessor activity can slip before the delay affects successor activities—the critical path is not reliable. In addition, in the February schedule, the critical path extended to September 27, 2010, and the lease award task was not on the critical path.
Criteria mostly met: KCP has made progress on the critical path. In KCP’s July schedule, the critical path was extended to October 5, 2013, and the lease award task was added, as appropriate. However, the continued incomplete logic and large float value continue to affect the validity of the critical path.
The schedule should identify float, so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float.
Criteria partially met: The schedule calculates total float values—the time that activities can slip before the delay affects the end date—automatically. However, there are more than 4,800 activities with total float over 200 days. In addition, in the February schedule, there were 390 activities with over 1,000 days of total float.
These values do not seem reasonable for a project schedule and probably are due to the excessive use of constraints and incomplete logic.
Criteria partially met: Progress has been made on reducing total float values, although they remain higher than expected. For example, KCP reduced the number of activities with over 200 days of total float from 4,800 in February to 2,964 in the July schedule. In addition, KCP has since reduced the number of activities with over 1,000 days of total float from 390 to 147.
A schedule risk analysis should be performed using a good critical path method schedule and data about project schedule risks, as well as statistical analysis techniques (such as Monte Carlo) to predict the level of confidence in meeting a program’s completion date. This analysis focuses not only on critical path activities but also on activities near the critical path, since they can potentially affect program status.
Criteria not met: KCP has not performed a schedule risk analysis using statistical techniques. KCP officials reported that they have no plans to address this issue.
Criteria not met: KCP’s schedule has not been subjected to a statistical risk analysis. KCP’s scheduling team also indicated that it does not have plans to conduct statistical analyses on the schedule. Although KCP officials stated that they have conducted an analysis in a separate spreadsheet, that analysis does not provide sufficient confidence in meeting the program’s completion date.
The schedule should use logic and durations in order to reflect realistic start and completion dates for program activities. The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Maintaining the integrity of the schedule logic is not only necessary to reflect true status, but is also required before conducting a schedule risk analysis.
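A Monte Carlo schedule risk analysis of the kind described samples uncertain activity durations many times and propagates each sample through the schedule’s network logic to estimate the confidence of meeting a completion date. The following is a minimal sketch; the four-activity network, the triangular (optimistic, most likely, pessimistic) duration figures, and the target date are all invented for illustration and are not drawn from KCP’s schedule:

```python
import random

# Invented example network: (optimistic, most likely, pessimistic) working days,
# plus each activity's predecessors. Listed in topological order.
NETWORK = {
    "fit-out":  ((150, 180, 260), []),
    "qualify":  ((60, 90, 150), ["fit-out"]),
    "relocate": ((40, 60, 100), ["fit-out"]),
    "start-up": ((30, 45, 80), ["qualify", "relocate"]),
}

def simulate_finish():
    """One trial: sample each duration, then compute finish times with a
    forward pass through the (acyclic) network."""
    finish = {}
    for act, ((lo, ml, hi), preds) in NETWORK.items():
        start = max((finish[p] for p in preds), default=0)
        # random.triangular takes (low, high, mode)
        finish[act] = start + random.triangular(lo, hi, ml)
    return finish["start-up"]

random.seed(0)
trials = sorted(simulate_finish() for _ in range(10_000))
target = 320  # hypothetical required completion date, in working days
confidence = sum(t <= target for t in trials) / len(trials)
p80 = trials[int(0.8 * len(trials))]  # 80th-percentile finish
print(f"P(finish <= {target} days) = {confidence:.0%}; 80th percentile = {p80:.0f} days")
```

A risk-informed commitment would typically be set at a chosen percentile of the simulated distribution (for example, the 80th) rather than at the single deterministic finish date the schedule itself reports.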
Criteria partially met: The schedule appears to have been updated recently; however, because of the incomplete logic and reliance on lags, the dates for future activities are not reliable. Because the dates are not all calculated automatically, the schedule cannot be used to monitor changes in forecasted completion. Therefore, we could not determine with confidence whether schedule variances will affect downstream work.
Criteria mostly met: There are still 14 instances of incomplete logic—where successor activities start before predecessor activities have been completed.
In addition to the contact named above, Ryan T. Coles, Assistant Director; Antoinette Capaccio; Tisha Derricotte; Terry Dorn; David T. Hulett; Sandra Kerr; Amanda Krause; Alison O’Neill; Christopher Pacheco; Tim Persons; Jeff Phillips; and John Smale made key contributions to this report.
KCP evaluated several alternatives on behalf of NNSA to modernize its facility based on whether the alternative (1) was consistent with NNSA's goals for maintaining a smaller facility for producing nuclear weapons and one that could quickly adapt to change, (2) met NNSA's commitments to Congress to operate a new facility by 2012, and (3) minimized costs and implementation risks. Based on KCP's analyses of alternatives, NNSA chose to have a private developer build a new building in Kansas City 8 miles from the current facility, which NNSA would then lease through the General Services Administration (GSA) for a period of 20 years. However, in evaluating a financing method, KCP compared alternatives using cost estimates limited to 20 years. Twenty years is far shorter than the useful life of a production facility that is properly maintained; the current facility has operated for more than 60 years. NNSA and KCP officials acknowledge that while leasing a facility through GSA under a 20-year scenario is less costly than purchasing, it can be more costly over the longer term. Because KCP's analysis did not consider costs beyond 20 years, NNSA cannot be certain if other alternatives, such as purchasing the facility, might have offered lower costs over the longer term. KCP officials developed extensive plans to ensure that the production of components is not interrupted because of the transition to the new facility. However, its schedule--which is critical to ensuring that the move does not disrupt production--does not fully adhere to best practices GAO identified for schedule development and related DOE scheduling guidance. In February 2009, GAO assessed KCP's schedule and found that, among other things, KCP had not adequately sequenced all activities in its schedule in the order in which they are to be carried out. 
GAO followed up in July 2009 and found that although KCP officials have made progress in addressing several of these problems, the schedule still has some shortcomings. KCP has taken steps to mitigate some risks of increased outsourcing, but NNSA has not provided adequate oversight or clear and up-to-date export control guidance tailored for NNSA production and laboratory sites to effectively manage associated nuclear weapons proliferation risks. As a result, KCP has not implemented a formal, risk-based approach to identify specific components and technologies that may be used by potential adversaries to develop or advance their nuclear capabilities. Lacking effective NNSA-specific guidance and a risk-based approach, KCP instead treats all components as if they pose equal proliferation risks. Consequently, items such as a common, commercially available screw are considered to be at the same level of proliferation risk as a complex mechanism designed to arm nuclear weapons. Further, KCP's primary means of addressing this issue rests on its suppliers' self-enforced compliance with a contract clause that outlines the suppliers' responsibility to abide by applicable export control laws. Under this broadly applied approach to managing export control--where all components are treated as equal risks--NNSA may be missing opportunities at KCP to systematically identify and more effectively mitigate those risks that pose the greatest threats. |
SBA, an independent agency within the executive branch, offers numerous programs and services to owners of small businesses in the United States and in U.S. territories. The programs and services are meant to aid, counsel, assist, and protect the interests of small business concerns and include a variety of loan guarantee programs, surety bond guarantees, technical assistance, and other services targeted to small businesses in general and to women-owned and minority-owned businesses in particular. SBA’s headquarters is in Washington, D.C., but SBA’s programs and services are delivered through 97 regional, district, branch, or post-of-duty offices located throughout the United States and in Guam, Puerto Rico, and the Virgin Islands. These 97 offices are organized within 10 geographic regions. Just prior to reorganizing and downsizing, SBA employed about 5,600 full- and part-time employees, of whom about 1,000 were employed in the Washington, D.C., metropolitan area. By March 1997, as the major portion of reorganizing and downsizing ended, SBA employed about 4,750 full- and part-time employees, of whom about 750 were employed in the Washington, D.C., metropolitan area. As agreed with your office, our specific objectives were to determine the following: What was the status of SBA regional office employees following reorganization, and were SBA employees shifted to regional offices after those offices were downsized? Did SBA follow applicable policies and procedures when appointing--and setting the starting salaries of--individuals hired during the period January 1, 1993, through December 31, 1998, from outside of SBA for the position of District Director, and what procedures did SBA’s Office of Advocacy use in hiring Regional Advocates and Assistant Advocates during calendar year 1998?
Did SBA follow applicable federal laws and regulations to set the salaries of, and provide salary increases to, political appointees and former congressional (Ramspeck Act) employees hired between October 1, 1991, and September 30, 1998? Did SBA adequately control the interagency detailing of its employees during fiscal years 1992 through 1998? What positions were newly created or abolished between March 1996 and March 1998 in SBA’s Office of the Administrator, Office of the Deputy Administrator, Office of the Chief of Staff, Office of the Chief Operating Officer, and Office of the Associate Administrator for Field Operations; and what were the sources of appointees to the newly created positions and the status of those employees in the abolished positions? What was the status of SBA’s response to a December 1997 congressional mandate to establish a Senior Executive Service (SES) position within SBA’s Office of Women’s Business Ownership? And did SBA Regional Advocates attend a White House-sponsored political appointee meeting during fiscal year 1997? In addressing our seven objectives, we relied extensively on personnel listings provided to us by officials of SBA’s Office of Human Resources; examined SBA’s records of personnel actions and position descriptions and conducted numerous interviews of SBA officials; and relied heavily on records and employment application material in the official personnel folders (OPFs) of those present and former SBA employees who were within the scope of our review.
SBA provided us the names of (1) SBA field personnel and the offices to which they were assigned, both before and after SBA’s reorganization and downsizing; (2) district directors who were appointed between January 1, 1993, and December 31, 1998; (3) political and Ramspeck Act appointees who were appointed between October 1, 1991, and September 30, 1998; and (4) employees detailed from SBA to other federal agencies during the period October 1, 1991, through September 30, 1998. Some of the personnel action records we examined came from SBA’s personnel records database, which is managed and maintained for SBA by the U.S. Department of Agriculture’s National Finance Center in New Orleans, LA. We also examined travel authorizations and vouchers of SBA Regional Advocates to identify those who might have attended a political meeting at the White House. These authorizations and vouchers were for fiscal year 1997 and part of fiscal year 1998. We interviewed officials from the Office of Human Resources (HR) at SBA headquarters and from SBA’s Denver Human Resource office about many topics and issues, such as the reasons for certain personnel actions. We interviewed current and former officials from SBA’s Office of Advocacy about that Office’s hiring authority and procedures and to identify employees hired by the Office in calendar year 1998. For the current and former SBA employees who were within the scope of our review, we obtained the OPFs of current employees from SBA and the OPFs of former employees from either the National Personnel Records Center in St. Louis, MO, or the federal agencies where they were then working. In the rare cases in which we were unable to locate a former employee’s OPF, we relied instead on personnel records contained in SBA’s personnel database. (App. I contains a detailed description of our objectives, scope, and methodology.) We did our work in Washington, D.C.; Denver, CO; and St.
Louis, MO, from March 1997 through January 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Administrator of SBA. The comments are discussed at the end of this letter and are reprinted in appendix VI. Regional offices were greatly affected by reorganization and downsizing. Staffing was drastically reduced, and some responsibilities that regional offices once held were transferred to SBA headquarters and district offices. Between September 1993 and April 1997, the 10 regional offices together lost 94 percent of their employees. Most of these employees were either reassigned to nonregional offices or retired. As a result of the significant reduction of employees in regional offices, employees from district offices have, from time to time, been temporarily assigned to regional offices. According to data SBA’s HR officials provided, a total of 504 employees were assigned to the 10 regional offices as of September 2, 1993, which was shortly before the reorganization and downsizing began. Of these 504 employees, we found that, after reorganization and downsizing had ended, 10 employees remained with their regional offices; 306 employees remained with SBA but had transferred to district, branch, and other nonregional offices; and 163 employees had separated from SBA, most of whom retired. Information on the employment status of the remaining 25 employees was not available. The status of the 504 employees was as of April 1997, and we determined their status from personnel records. Appendix II provides further details on the status of the 504 employees. After the downsizing, according to SBA’s HR officials and officials from some of the regional offices we spoke with, district office administrative employees were shifted from time to time to colocated regional offices for temporary periods to provide administrative support.
Most of the regional offices were located in the same buildings as the district offices. These officials explained that this shifting usually occurred when the regional office Staff Assistant was temporarily out of the office; that is, when sick or on vacation. In one case, however, a district office employee obtained a temporary promotion to the position of Public Affairs Specialist and was assigned to the colocated regional office. According to SBA’s HR officials, this was only a temporary measure because SBA wanted whoever would fill the then-vacant Regional Administrator position to participate in selecting the permanent Public Affairs Specialist. Unlike Regional Administrator positions, which are held by political appointees, SBA’s District Director positions are filled through career appointments. Normally, SBA fills vacant District Director positions by reassigning current SBA District Directors or by appointing graduates of the District Director candidate development program. However, at times, SBA hires new District Directors from sources outside of SBA. SBA made 46 appointments to the position of District Director between January 1, 1993, and December 31, 1998. Forty of these appointments went to SBA employees. For six appointments, SBA hired individuals from outside the agency to fill the positions. In two of these cases, the individuals’ application materials made reference to elected officials, thus introducing the possibility of political favoritism in the hiring process. However, SBA officials denied the use of favoritism. On the basis of available evidence, SBA appeared to have properly followed federal regulations in hiring all six individuals. However, we did note a problem with how SBA set the starting salary in one case. The six outside individuals were hired to fill District Director positions in Washington, D.C.; Cleveland, OH; Puerto Rico; Denver, CO; New Orleans, LA; and Sacramento, CA. 
They were not all hired at the same time--two were hired in 1994, one was hired in 1996, another was hired in 1997, and two were hired in 1998. Four were either employed by the federal government when they applied for the SBA position or had been federal employees some time earlier. We examined the steps SBA took in the appointment process and determined that SBA appeared to follow all relevant federal laws and regulations in making the appointments. SBA used a competitive examination process to hire each appointee. Under this process, SBA publicized the vacancy announcements for all six positions. SBA compiled lists of best-qualified applicants from among those who applied, and SBA management selected those hired (the appointees) from among the applicants who were on the best-qualified lists. The competitive examination process, it should be noted, does not preclude persons who have political experience and support from applying and being selected for open career positions. During our review, we became aware of possible political overtones in two cases, the Denver District Director position and the New Orleans District Director position. There were allegations that the individual appointed to the Denver position obtained the appointment through her political connections. She had worked for a governor of Colorado and, at the time of her appointment to the District Director position, the governor chaired the Democratic National Committee. According to an SBA memorandum, a panel of six officials interviewed her along with two other best-qualified applicants, and each interviewer found her to be the top candidate. One of the panel members, the Assistant Administrator for Human Resources, told us that there was no pressure on the panel to recommend this individual for selection. 
The application material submitted by the individual appointed to the New Orleans position cited the names of members of Congress and local politicians as references and contained letters of recommendation from local politicians. SBA HR officials said that except for the names in the application material, they were unaware of any political connections the individual had within the administration, and they were under no pressure of any kind to recommend this individual for selection. In neither of these cases did we find evidence of political favoritism and improper hiring. However, the question of whether such political favoritism or support has an impact on the selection of an individual from among other applicants is extremely difficult to determine and requires that the intent and motivation of the selecting official be known. Appendix III provides more information on each of the six cases. It also provides information about the appointments of SBA employees to District Director positions. Federal regulations govern the salary-setting process for new appointees. We determined that SBA HR officials properly followed regulations in setting the starting salaries of five of the six outside hired District Directors. However, in one of the six cases, SBA did not consider the use of a recruitment bonus when it set the District Director’s salary at a level higher than the minimum rate for the pay grade. Title 5 of the Code of Federal Regulations, part 531, section 203 (5 C.F.R. 531.203) states that new appointments are to be made at the minimum step of the pay grade. However, provisions exist for making an appointment above the minimum step. For example, a new employee’s salary can be set above the minimum step when the employee possesses superior qualifications or the agency has a special need for the employee’s services. The C.F.R. 
refers to an appointment under these circumstances as a “superior qualifications appointment.” However, the regulations also state that in determining whether a candidate should receive a superior qualifications appointment and, if so, at what level the employee’s pay should be set, the agency must consider the possibility of authorizing a recruitment bonus. The regulations also state that each agency that makes superior qualifications appointments must establish documentation and record-keeping procedures sufficient to allow reconstruction of the action taken in each case. According to the regulations, documentation must include the superior qualifications of the individual or the special need of the agency that justified use of the superior qualifications authority; the factors considered in determining the individual’s existing pay; and the reasons for authorizing an advanced rate instead of, or in addition to, a recruitment bonus. An appointee who has previously worked for the federal government is also eligible to receive a salary that is above the minimum step of the pay grade. According to regulations, the salary can be set above the minimum step in order to match the appointee’s previous highest federal salary or pay grade level. In the five cases where SBA followed federal pay-setting regulations, two cases involved appointments at the minimum step of the pay grade. Another case involved an SES appointment for which there is greater latitude in setting pay. A fourth case involved an appointment of a former SES member to a lower graded District Director position. And the fifth case involved the transfer of an employee from another federal agency to SBA with SBA matching his previous highest federal salary. In the one case in which SBA did not follow regulations, it set the individual’s salary above the minimum step of the pay grade. But in doing so, it did not consider the use of a recruitment bonus as required by applicable regulations. 
In commenting on a draft of this report, SBA said that it had made a policy decision not to offer recruitment bonuses. After we discussed this matter with SBA officials, however, SBA decided to implement a recruitment bonus policy. The Assistant Administrator for Human Resources told us that her Office had drafted a new standard operating procedure (SOP) that would cover the procedures for setting advanced salaries and considering use of recruitment bonuses. We obtained a copy of the draft procedures, which said that SBA would offer a recruitment bonus or a superior qualifications appointment only in those rare situations where, after extensive recruitment, SBA determined that an incentive was necessary to attract qualified applicants or to compete with nonfederal employers. The draft SOP also stated that SBA would always consider using a recruitment bonus before considering a superior qualifications appointment because a one-time payment was more economical than a higher salary that is paid every year. It also stated that information supporting the action is to remain on file for 3 years. According to SBA’s comments on the draft of this report, the SOP is now in SBA’s clearance process and is expected to be published this spring. According to information provided to us by Office of Advocacy officials, the Office hired four Assistant Advocates and five Regional Advocates in calendar year 1998. All nine advocates were hired under the special hiring authority that federal statute provides to the Chief Counsel for Advocacy. Under this special authority, the Office can hire individuals without having to use a formal competitive process. According to the Chief and Deputy Chief Counsels for Advocacy, candidates for the nine positions were identified either through word-of-mouth recommendations or through their personal knowledge of the individuals.
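The draft SOP's economic rationale, that a one-time bonus costs less over time than a permanently higher salary, can be illustrated with a short sketch. All dollar figures are hypothetical and are not drawn from SBA pay records; the sketch ignores pay raises and the time value of money.

```python
def extra_cost(extra_per_year, years, bonus):
    """Cumulative extra cost of an advanced salary versus a one-time
    recruitment bonus over `years` (simplifying assumptions: no pay
    raises, no discounting)."""
    return extra_per_year * years, bonus

def breakeven_year(extra_per_year, bonus):
    """First full year in which the advanced salary's cumulative extra
    cost exceeds the one-time bonus."""
    return bonus // extra_per_year + 1

# Hypothetical: a salary set $5,000 above the minimum step versus a
# one-time $10,000 bonus, compared over 5 years.
advanced_total, bonus_total = extra_cost(5_000, years=5, bonus=10_000)
print(advanced_total, bonus_total)    # 25000 10000
print(breakeven_year(5_000, 10_000))  # 3
```

Under these assumptions the advanced rate costs more than the bonus from the third year on, which is the comparison the draft SOP's preference for bonuses rests on.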
Because all of the positions were excepted service positions being filled under the Chief Counsel for Advocacy’s special hiring authority, it was not necessary for SBA to publicly advertise the positions or to follow the other appointment procedures that apply when positions in the competitive service are filled. Because the individuals in all nine cases received excepted service, term appointments, they effectively served at the pleasure of the Chief Counsel for Advocacy. In addition, SBA set the salaries of all nine appointees in accordance with the administratively determined (AD) pay-setting authority provided by law to the Chief Counsel for Advocacy. In order to examine SBA’s salary setting and salary increase practices for political and former congressional (Ramspeck Act) employees, we reviewed available information regarding the appointments and salaries of all 310 political or Ramspeck Act appointees hired by SBA during the period October 1, 1991, through September 30, 1998. Of the 310 cases, 289 involved appointments of political appointees, and 21 involved appointments of former congressional employees hired by SBA under the Ramspeck Act. In 169 (55 percent) of the 310 appointments that we reviewed, SBA set the appointees’ starting salaries at the minimum step of the pay grades for the positions they accepted. In each case, on the basis of available evidence, it appeared that SBA followed appropriate salary-setting procedures. In 141 (45 percent) of the 310 appointments we reviewed, SBA set the salaries at levels higher than the minimum step of the pay grades. In several of those cases, SBA did not consider the use of recruitment bonuses and could not provide the documentation for the actions taken in each case as required by regulations. In another of those cases, SBA set the advanced starting salary incorrectly, and in still another case, insufficient documentation was available for us to determine the basis SBA used in setting the salary.
For essentially all of the 310 political and Ramspeck appointments, SBA appeared to have followed applicable rules in awarding periodic salary increases. Rules on setting salaries for, and providing salary increases to, career appointees are also generally applicable to political and Ramspeck Act appointees. Federal regulations permit agencies to provide appointees with advanced starting salaries on the basis of superior qualifications or highest previous pay. Furthermore, agencies have flexibility to set pay in cases involving SES appointments and in cases where AD pay authority is provided, e.g., the Office of Advocacy. Of the 141 cases in which advanced starting salaries were set, 69 involved SES appointments or appointments in which AD pay rates were authorized. In each of those cases, based on available evidence, it appeared that SBA set the starting salaries appropriately. In 43 cases, the advanced pay rates were authorized by regulations on the basis of highest previous pay rates. In 1 of those 43 cases, discussed later, SBA set the starting salary incorrectly. In 28 cases, the advanced pay rates were made on the basis of superior qualifications. In 11 of the 28 cases, however, SBA was unable to provide the documentation required by regulations to allow reconstruction of the action taken in each case. (In commenting on a draft of this report, SBA said it is conceivable the documentation that it had prepared was either misplaced or destroyed as the agency downsized.) Finally, in one case insufficient information was available for us to determine the basis SBA used when setting the advanced salary. For the 28 superior qualifications appointments that SBA made, SBA, as a matter of practice, did not consider on an individual case basis the possibility of authorizing a recruitment bonus. As stated earlier, SBA reported to us in its agency comments on a draft of this report that it had decided as a matter of policy not to offer recruitment bonuses.
In a follow-up conversation, an SBA HR official told us that this was an unwritten policy. The official believed the policy decision was made sometime after October 1990 but did not remember the precise date. Also, no substantial rationale was offered for the decision beyond the fact that regulations require a recruitment bonus plan to be established before a bonus is paid, and SBA did not have such a plan. One benefit of a recruitment bonus compared with an advanced salary is that the former is a one-time expense and, over the long term, should be less expensive than a permanently advanced salary rate. During our review SBA revisited the issue of recruitment bonuses and developed a draft SOP covering recruitment bonuses, as well as salary-setting procedures and documentation requirements. One of the 43 cases in which the advanced salaries were set on the basis of highest previous pay rates involved a former employee of SBA. In that instance, SBA officials--during our review--determined that the salary had been set incorrectly. This appointee had previously worked for SBA, resigned to accept a position with another federal organization, rejoined SBA, then resigned again. According to SBA officials, when the appointee rejoined SBA, HR staff apparently based her new SBA salary on her highest previous federal salary as reported in her application rather than on her highest previous salary as reported in her official salary transcript. The salary she reported in her application was $18,000 higher than the salary reported in the transcript. SBA determined that using her higher reported salary caused HR staff to set her SBA salary at a higher level than they would otherwise have done, resulting in an overpayment of approximately $6,000 for the affected period. In a memorandum to SBA’s Chief Financial Officer, the Assistant Administrator for Human Resources requested, and obtained, a waiver of the erroneous overpayment.
SBA’s HR officials told us they believed the salary-setting error was their fault and not the appointee’s. In addition to examining the procedures used in initial pay settings, we also examined the procedures that SBA used in later providing periodic salary increases to the 310 political and Ramspeck Act appointees. According to federal pay rules, GS pay grade employees are eligible for pay step increases based on their satisfactory work performance and their amount of time in the pay grade. These rules also permit step increases, regardless of time in grade, in recognition of quality performance. On the basis of the records we reviewed that pertained to salary increases, SBA appeared to have followed applicable federal laws and regulations in providing salary increases in 309 of the 310 cases. For the remaining case, the information available was not sufficient for us to make a determination. The periodic salary increases included within-grade step increases due to either performance awards or time in grade. An interagency detail is the temporary assignment of an employee to a different agency for a specified period, with the employee returning to his or her regular duties at the end of the detail. In accordance with principles of appropriation law, the receiving agency should generally reimburse the loaning agency for the salary and expenses of the detailed employee. This rule was enunciated in a 1985 decision by the Comptroller General of the United States. He concluded that except under limited circumstances, nonreimbursable details (1) violate the law that appropriations must be spent only for the purposes for which they are appropriated and (2) unlawfully augment the appropriations of the receiving agency.
Specifically, the Comptroller General held that nonreimbursable details are improper except where the detail (1) involves a matter related to the loaning agency’s appropriations or (2) will have a negligible impact on the loaning agency’s appropriations. An additional exception to the reimbursement requirement is provided by 3 U.S.C. 112, which permits nonreimbursable details of up to 180 days in a fiscal year to five Executive Office of the President (EOP) agencies--the White House Office, the Executive Residence at the White House, the Office of the Vice President, the Office of Policy Development, and the Office of Administration. SBA records showed that 20 SBA employees were on interagency details at some time during fiscal years 1992 through 1998. Ten of the 20 employees were assigned to reimbursable details and 10 were assigned to nonreimbursable details. SBA had not taken action to obtain reimbursement for the services of 7 of the 10 employees assigned to reimbursable details until we brought the matter to its attention. Further, the actual lengths of the details were not adequately documented, which, for reimbursable details, makes it difficult to accurately determine the amount of reimbursement that is due. SBA officials acknowledged the need to improve the agency’s procedures for obtaining reimbursement and said steps would be taken to make improvements. For the 10 employees who were assigned to reimbursable details, the total potential reimbursable value amounted to more than $873,000. However, SBA records indicated that the agency had not sought reimbursement for the services of 7 of the 10 employees. According to SBA, officials from SBA’s finance office in Denver were responsible for seeking reimbursements from other agencies for details arranged on a reimbursable basis. Officials from that office told us that they overlooked their collection responsibilities in the seven cases because they were not aware that the employees had been detailed.
They said they did not receive copies of the interagency agreements that authorized the details. An SBA official surmised that the interagency agreements, which are prepared by HR in Washington, D.C., with a copy going to the SBA finance office in Washington, D.C., were not being sent to the Denver office, which initiates the actual billing. After we brought these seven cases to the attention of the finance officials, the finance center began to make efforts to bill the appropriate agencies. Officials from the finance center have since told us that SBA has received reimbursement for the services of six of the seven employees and in its comments on a draft of this report, SBA said it has collected all but a small amount in the seventh case. The amounts of reimbursement SBA received were based on limited or incomplete information on each detail, did not always include amounts for employee benefits, and therefore were likely different from the amounts that should have been reimbursed. Although we found documentation indicating when details began, we noted that no systematic process existed to record the actual ending dates of the details. For example, in one case involving a nonreimbursable detail, SBA determined the actual duration of the detail on the basis of a handwritten notation in which an unknown source wrote the ending date of the detail in the bottom margin of a document evidencing the detail. Actual ending dates of details are important because although the interagency agreements contain the planned beginning and ending dates, the actual detail may be of a longer or shorter duration. During our review SBA officials said that a special project would be undertaken to examine how the reimbursable process should work, what appropriate paperwork requirements are needed, and what controls will be needed to ensure that interagency details are properly controlled in the future. 
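Because reimbursement depends on a detail's actual duration and on whether employee benefits are included, the basic proration can be sketched as follows. The salary, 30 percent benefits load, and detail dates are hypothetical; actual amounts would follow the terms of each interagency agreement.

```python
from datetime import date

def reimbursement_due(annual_salary, benefits_rate, start, end):
    """Amount the borrowing agency owes for a reimbursable detail,
    prorated by actual calendar days and including a benefits load.
    A simplified sketch; figures are illustrative only."""
    days = (end - start).days + 1  # inclusive of both endpoints
    daily_cost = annual_salary * (1 + benefits_rate) / 365
    return round(days * daily_cost, 2)

# Hypothetical 90-day detail at a $73,000 salary with a 30% benefits load.
print(reimbursement_due(73_000, 0.30, date(1997, 1, 1), date(1997, 3, 31)))
# 23400.0
```

The sketch shows why undocumented ending dates matter: shifting `end` by even a few days changes the amount billed, and omitting `benefits_rate` understates it by the full benefits load.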
In commenting on a draft of this report, SBA said that the not-to-exceed date is clearly stated in the documentation of the detail and that there is no requirement to document details that end on their not-to-exceed date. SBA said that it documents extensions of details and the termination date of details that end early, although it recognized that it failed to do so in one of the cases we reviewed. SBA said that it is revising its procedures to ensure program managers know extensions and terminations of interagency details are to be documented on a Notification of Personnel Action (SF-50) and that the new procedure will ensure that SBA’s personnel/payroll system provides automated “ticklers” to help eliminate the possibility of any funds from reimbursable details not being collected. SBA’s comments are unclear as to whether the new requirement to document terminations on an SF-50 will apply to all details or just to those that are extended or end early. If this requirement applies to all details, then we believe it should help serve as a reminder that reimbursement should be sought. However, if the requirement applies only to cases where details are extended or end early, then we believe the potential to overlook collections will continue to exist. SBA had interagency agreements or other documentation specifying the planned duration for each of its 10 employees assigned to nonreimbursable details. However, it was unable to determine the actual duration of the details for 4 of the 10 employees. In the other six cases, actual ending dates were supported by handwritten notations to the file in five cases, and by documentation provided by the borrowing agency in the sixth case. As in the case for the reimbursable details, SBA officials said they will determine the appropriate paperwork requirements and controls needed for nonreimbursable details as well.
Six of the 10 employees were detailed to EOP offices included in Public Law 95-570, which permits nonreimbursable details of up to 180 days in a fiscal year. Five of the six were career employees who were detailed to the Office of the Vice President to participate in the National Performance Review. The sixth was a political employee who was detailed to the White House Office. This employee had been working at SBA for about 1 year at the time of the detail. SBA documentation did not indicate an expected length of the detail, although it did indicate the detail was to begin on April 16, 1995. After we discussed this case with SBA officials, they determined that the employee had resigned from the SBA position and transferred to the White House on July 29, 1995. Of the remaining four employees assigned to nonreimbursable details, SBA used the allowable exception in two cases that the employees would be performing duties related to SBA’s appropriations. In the other two cases SBA used the allowable exception that the details would have negligible impacts on SBA’s appropriations. Additional information on the interagency details is contained in appendix IV. During the 1996 through 1998 time frame, several personnel changes occurred in key SBA management offices in which both the number of positions and the number of political appointees increased. These changes are summarized in table 1. Table 1 reflects a net increase of four positions and three political appointees. It is not unusual for key personnel changes to occur when a new head of an agency is appointed. Generally, political appointees serve at the pleasure of the agency head; when the head of the agency changes, the political appointees often change as well. The incumbents of the eight positions that were eliminated either were transferred to other SBA positions or resigned.
Those individuals who were appointed to the 12 new positions transferred into SBA from another federal agency, were reassigned from other SBA positions, or were newly hired. (See app. V for descriptions of these 20 position changes.) The Small Business Reauthorization Act of 1997, which became law on December 2, 1997, mandated that the position of Assistant Administrator for SBA’s Office of Women’s Business Ownership was to be upgraded to the SES level and held by a noncareer SES appointee. The legislative history of the Small Business Reauthorization Act of 1997 does not elaborate on the mandate to upgrade the Assistant Administrator position to the SES level. Neither the statute nor its legislative history set a time frame for upgrading the position. The position was filled with a noncareer SES appointee on April 5, 1999. During our review, we asked SBA officials to explain why the position had not been upgraded earlier, especially considering that during fiscal year 1998--when the act was passed--SBA had established and filled at least nine other SES positions, including two noncareer (political) positions. SBA officials we interviewed told us that the Assistant Administrator position was already filled with a Schedule C, GS-15 political appointee, and other critical need positions existed and had to be filled. According to an SBA document, the position of Assistant Administrator for SBA’s Office of Women’s Business Ownership had previously existed as an SES position; however, after the position became vacant in 1993, it was not refilled at the SES level. In 1994 the Clinton administration--as part of its early efforts to reduce the size of the federal government--directed that the total allocation of SES positions in the executive branch departments and agencies be reduced by 10 percent. 
According to SBA’s HR officials, the directive resulted in OPM reducing SBA’s total SES position allocation from 60 to 55, and the position of Assistant Administrator for Women’s Business Ownership was one of the positions that SBA downgraded as a result of the reduced SES allocation. According to SBA officials, during fiscal year 1998, SBA’s total SES allocation remained at 55, of which 10 could be filled by noncareer (political) SES appointments. SBA’s HR officials told us that in April 1998 SBA management officials began reevaluating the agency’s SES needs. This exercise resulted in a June 18, 1998, letter from the SBA Administrator to the Director, OPM, requesting an increase in SBA’s SES allocation from 55 to 60. The SBA Administrator made a case for establishing five new SES positions, including an upgraded position of Assistant Administrator for Women’s Business Ownership. By letter dated October 29, 1998, OPM’s Director responded to SBA’s request by increasing SBA’s allocation from 55 to 58. One of the three new allocations OPM provided was specifically for the position of Assistant Administrator for Women’s Business Ownership. However, at the end of December 1998, the appointment had not been made. According to SBA officials, all noncareer (political) SES appointments are controlled by the White House; and, although OPM had approved the position as part of the increased SES allocation, SBA officials were awaiting clearance by the White House before requesting OPM’s approval of the individual expected to receive the appointment. Clearance was received, and OPM approved the individual’s appointment on April 5, 1999. The appointee was the same employee who had been serving as the Assistant Administrator in the Schedule C, GS-15 position.
During our review, we received information--and confirmed--that some Regional Advocates employed by the Office of Advocacy had attended a political appointee meeting that was sponsored by the White House during fiscal year 1997. Regional Advocates are not political appointees and, according to SBA, have a mission that is significantly different from that of political appointees. Nevertheless, Regional Advocates share many of the characteristics of political appointees. For example, like political appointees, Regional Advocates can be hired noncompetitively. Also like political appointees, Regional Advocates serve at the pleasure of the head of their agency--in this case, the Chief Counsel for Advocacy. A further similarity is that, according to (1) the Chief Counsel for Advocacy, (2) a former Regional Advocate, and (3) a current Regional Advocate, new Regional Advocate appointees are frequently cleared through the White House personnel office before being appointed. The Chief Counsel for Advocacy said, however, that this is done more as a courtesy to the White House than because it is a requirement to obtain the White House’s concurrence on the appointment.

The Office of Advocacy has one Regional Advocate stationed in each of the 10 cities in which SBA has a regional office. These 10 Regional Advocates report directly to the Chief Counsel and the Deputy Chief Counsel for Advocacy. The Regional Advocates’ role is, in part, to help identify issues affecting small businesses in their respective regions of the country. According to the Chief and Deputy Chief Counsels for Advocacy, the Office of Advocacy views itself as being independent of both the administration and Congress.
The Deputy Chief Counsel told us that this independence is important because, to successfully mitigate issues affecting small businesses, the Office of Advocacy has to work effectively with administrations and Congress. At the same time, she said that Regional Advocates, when they are in Washington, D.C., are encouraged to attend political appointee meetings at the White House or at other locations to gain and maintain a broad awareness of the administration’s initiatives in many areas, including small business. Several times a year the Regional Advocates, individually and collectively, visit Office of Advocacy headquarters in Washington, D.C., to discuss small business concerns. While in Washington, according to the Deputy Chief Counsel for Advocacy, they may attend political appointee meetings. This official said that such meetings are often sponsored by SBA officials, and attendees include only SBA political appointees and other key SBA officials. But, on occasion, wider-scope political appointee meetings are sponsored by the White House and are attended by political appointees from numerous executive branch departments and agencies, including SBA. According to the Deputy Chief Counsel for Advocacy, although Regional Advocates are encouraged by the Chief Counsel for Advocacy to attend such meetings, their attendance is nevertheless voluntary.

By examining copies of Regional Advocates’ travel authorizations and travel vouchers for fiscal year 1997 and for part of fiscal year 1998, we identified eight Regional Advocates who had visited the White House on the same date. We confirmed their visit with the White House Office. We were able to contact six of these Advocates, some of whom no longer work for SBA. They each confirmed that on that date they attended a political appointee meeting at the White House. The meeting, they said, was for executive branch political appointees who were stationed outside of Washington, D.C.
According to these six Regional Advocates, the White House meeting was primarily an opportunity to receive briefings from high-level administration officials--including the President and the Vice President--on the administration’s initiatives. One former Regional Advocate told us that this meeting also served as a morale booster to the political appointees who attended. The six current or former Regional Advocates we contacted told us that they had not received instructions of any kind from officials who attended the White House meeting and did not believe their independence in carrying out their duties was affected by attending the meeting. Each of the six Advocates pointed out, however, that they had ties either to the Clinton administration or to the Democratic party before they received their Regional Advocate appointments. For example, several of them told us that they had worked on either the Clinton-Gore 1992 presidential campaign or the 1996 reelection campaign. Another told us she had worked with the Democratic party in Colorado during 1996.

Significant personnel changes have occurred at SBA since the early 1990s, some of which resulted from the considerable reorganization and downsizing that SBA experienced. Regional offices were reorganized and drastically downsized, with most regional employees transferring to other SBA offices or retiring. During the period, SBA hired outsiders for 6 of the 46 District Director positions it filled and did so following applicable federal hiring procedures. With one exception, SBA also followed applicable federal laws and regulations when setting the starting salaries. The Office of Advocacy used informal processes for identifying and selecting persons to fill Regional and Assistant Advocate positions, which was in keeping with its special hiring authority. In other SBA executive offices, the number of positions and political appointees increased somewhat from 2 years earlier.
There was also one position change--upgrading (and filling) the position of head of the Office of Women’s Business Ownership to the SES level--that Congress mandated but that SBA did not complete until over a year later. In fiscal year 1997, at least six Regional Advocates attended a political appointee meeting at the White House, which was in keeping with the Office of Advocacy’s policy of encouraging attendance at political appointee meetings. Over the greater part of the 1990s, SBA hired 310 political and Ramspeck Act employees, and, as far as we can tell, SBA HR staff usually followed applicable laws and regulations in setting their salaries and later in providing them periodic salary increases. However, the documentation necessary to support certain pay settings could not always be found. According to SBA, it had not considered providing recruitment bonuses on a case-by-case basis because it had made a policy decision not to offer such bonuses. This reason, however, was not documented as required for those cases where employees received advanced salary levels. SBA has recognized that its HR staff need further guidance on pay setting and recruitment bonuses and has drafted a new set of procedures to provide that guidance. SBA participates in interagency details of employees but did not have adequate procedures to (1) accurately identify when it should bill agencies for the reimbursable details of SBA employees and (2) monitor the actual length of details. SBA officials agree that an improved system of internal controls is necessary and have taken steps in that direction.

We recommend that the SBA Administrator finalize and issue standard operating procedures that include procedures for considering recruitment bonuses, setting salaries, and documenting those actions when SBA establishes starting salaries for newly appointed employees at levels above the minimum step of a pay grade.
We recommend that the SBA Administrator identify and establish appropriate procedures for better controlling the interagency detailing of its employees. Such procedures should ensure that the specifics of each detail are appropriately documented and monitored and that, in the case of cost-reimbursable details, all costs are accounted for and promptly reimbursed.

By letter dated April 1, 1999, the SBA Administrator’s designee--the Assistant Administrator for Human Resources--commented on a draft of this report. Although the Assistant Administrator did not address our two recommendations directly, she noted actions that SBA had taken or was taking that were associated with those recommendations. In connection with the recommendation that SBA finalize and issue standard operating procedures for considering recruitment bonuses, setting salaries, and documenting starting salaries that are above the minimum step of a grade, the Assistant Administrator said SBA had finalized a policy document establishing procedures for, and requiring consideration of, recruitment bonuses, which it expected to publish in spring 1999. We believe this new policy, as well as the guidance SBA is developing on pay setting, should, if effectively implemented, meet the intent of our recommendation. In connection with our recommendation that SBA identify and establish appropriate procedures for better controlling interagency details, the Assistant Administrator said SBA was revising its detail procedures to ensure program managers know that extensions and terminations of interagency details must now be documented by a formal Notification of Personnel Action (SF-50). This new procedure, she said, will ensure that SBA’s personnel/payroll system provides automated ticklers, and the combination of these actions should eliminate the possibility of any funds from reimbursable details not being collected.
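In concept, an automated tickler of the kind SBA describes amounts to periodically scanning detail records for reimbursable details whose not-to-exceed date has passed without reimbursement being collected. The following is a minimal sketch of that idea; the record layout, field names, and dates are hypothetical illustrations, not SBA’s actual personnel/payroll system:

```python
from datetime import date

# Hypothetical detail records -- field names and dates are illustrative only.
details = [
    {"employee": "A", "reimbursable": True,  "nte_date": date(1998, 3, 31), "reimbursed": False},
    {"employee": "B", "reimbursable": True,  "nte_date": date(1999, 9, 30), "reimbursed": False},
    {"employee": "C", "reimbursable": False, "nte_date": date(1998, 6, 30), "reimbursed": False},
]

def ticklers(records, today):
    """Flag reimbursable details whose not-to-exceed date has passed
    but for which reimbursement has not yet been collected."""
    return [r["employee"] for r in records
            if r["reimbursable"] and not r["reimbursed"] and r["nte_date"] <= today]

print(ticklers(details, date(1998, 10, 1)))  # -> ['A']
```

In practice such a check would run on a schedule inside the personnel/payroll system and generate notices to the responsible program managers.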
It is unclear from SBA’s comments whether this new procedure is to apply to all details or to only those details that are extended or terminate early. If the former, then we believe such a procedure, if effectively implemented, should meet the intent of our recommendation. If the latter, then we believe the potential will still exist for overlooking collections. Although the Assistant Administrator said SBA was taking these actions, she also said SBA found the draft “misleading.” Most of her letter addresses that statement, conveying SBA’s problems with the draft or providing additional information. Apparently, SBA officials considered certain conclusions that we drew from the facts to be overstated and certain facts that we presented to be incorrect. For example, the Assistant Administrator cited our conclusion that SBA “usually” followed appropriate laws and regulations in setting the salaries and later in providing salary increases for 310 political and Ramspeck Act employees. The Assistant Administrator indicated that the term “usually” was misleading because elsewhere in the report we said SBA appeared to have followed applicable laws and regulations in providing salary increases in 309 of the 310 cases. Information was not available to make a determination in the remaining case. Regarding this example, we recognize that for 309 of the 310 appointments we conclude that SBA appeared to have appropriately provided salary increases. However, we also report a number of cases in which SBA could not provide documentation allowing a determination as to whether SBA followed appropriate procedures when setting advanced salaries at the time the employees were hired. Since receiving our draft report for comment, SBA sent further information to us on specific cases. The cases were of employees for whom SBA had been unable to provide documentation to support an advanced pay setting. 
In the draft report, these cases numbered 31 of the political and Ramspeck Act appointments that we reviewed in which the starting salaries were set above the minimum step of the grade. SBA has since found and provided us with the information justifying the advanced pay setting for several of these cases, and we modified the report as appropriate. We now report that justifying documentation was not available for 12 cases. Because our overall conclusion encompasses both the initial salary setting and the periodic salary increases, we believe our characterization that SBA usually followed applicable laws and regulations is both fair and accurate.

Regarding recruitment bonuses, the Assistant Administrator said that SBA had at one time considered recruitment bonuses but had decided as a matter of policy not to offer any. Consequently, SBA had not developed a recruitment bonus plan, as required by regulations. Without such a plan, the Assistant Administrator said, recruitment bonuses could not be offered. As indicated earlier, SBA has since decided to initiate a recruitment bonus policy. SBA’s policy decision and rationale for not using recruitment bonuses were never mentioned by SBA officials during the course of our work. Had the policy decision and rationale been mentioned, we would have included those facts in the draft report. On the basis of SBA’s comments, we have reflected that information in this report. The important point, in our opinion, is that recruitment bonuses can be an effective and cost-efficient alternative to advanced pay setting, and we commend SBA for revisiting the issue at our suggestion. SBA’s written comments and our further responses to them are in appendix VI.

As agreed with the Committee, unless you publicly announce the report’s contents earlier, we plan no further distribution of it until 30 days after the date of this letter. We will then send copies to Senator Barbara Mikulski and to Representative James M.
Talent, Representative Nydia M. Velazquez, Representative James T. Walsh, and Representative Alan B. Mollohan in their capacities as Chair or Ranking Minority Member of Senate and House Committees and Subcommittees. We will also then send copies to the Honorable Aida Alvarez, Administrator of SBA, and to the Honorable Janice R. Lachance, Director of OPM. Also, at that time, we will make copies available to others on request. Major contributors to this report are listed in appendix VII. Please contact me at (202) 512-8676 if you or your staff have any questions.

Our first objective was to determine the status of Small Business Administration (SBA) regional office employees following SBA’s reorganization and whether SBA employees were shifted to regional offices after those offices were downsized. In addressing the first part of this objective--the status of regional office employees--we identified all employees assigned to an SBA regional, district, or branch office as of September 2, 1993 (which was prior to the reorganizing and downsizing efforts), and as of April 1997 (which was after the regional offices had been downsized). We obtained these data from SBA’s Office of Human Resources (HR) database. We then entered each employee’s name, and the office to which he or she was assigned as of those dates, into a database that we developed specifically for this purpose. Using our database, we sorted the information alphabetically by employee name and manually searched for name matches. In cases where (1) there was a name match and (2) the employee had been assigned to a regional office as of September 2, 1993, we were able to determine that employee’s employment status as of April 1997. In cases where (1) there was no name match and (2) the employee had been assigned to a regional office as of September 2, 1993, we took additional steps to determine the employee’s status.
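The matching procedure described above can be sketched as follows. The names, office labels, and record layout are hypothetical, and the actual work was done against SBA’s HR database rather than in code like this:

```python
# Illustrative sketch of the 1993-to-1997 name-match approach;
# all names and office assignments below are hypothetical.
staff_1993 = {"Doe, Jane": "Region IV", "Roe, Richard": "Region IV", "Poe, Edgar": "District 12"}
staff_1997 = {"Doe, Jane": "District 7", "Poe, Edgar": "District 12"}

def regional_status(before, after, label="Region"):
    """For each regional-office employee in the earlier snapshot, report the
    later assignment if the name matches, or mark the case for follow-up."""
    status = {}
    for name, office in sorted(before.items()):  # alphabetical, as described above
        if not office.startswith(label):
            continue  # only regional-office employees are of interest
        status[name] = after.get(name, "no match -- referred to HR for research")
    return status

print(regional_status(staff_1993, staff_1997))
# -> {'Doe, Jane': 'District 7', 'Roe, Richard': 'no match -- referred to HR for research'}
```

The unmatched names are exactly the cases that required the additional follow-up steps with HR officials.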
In those cases, we provided each name to SBA’s HR officials and asked that information on the status of each be determined. HR officials provided us with documents from their database on each of those employees for whom a record existed. The documents showed that the employees resigned, retired, died, or were otherwise separated from the agency during the period September 2, 1993, through April 1997. In some cases employees’ records were no longer in the database, and their status could not be determined. In addressing the second part of this objective--the shifting of personnel--we determined that SBA officials do not usually document the temporary shifting of employees from one office to another. Therefore, to obtain information on such shifting, we interviewed cognizant HR officials. These officials, on our behalf, also canvassed appropriate regional and district office staff for specific instances in which personnel were shifted to the regional offices and provided that information to us. In some cases we discussed specific instances of personnel shifts directly with district office officials.

Our second objective was to determine whether SBA followed applicable policies and procedures when appointing--and setting the starting salaries of--individuals hired during the period January 1, 1993, through December 31, 1998, from outside SBA for the position of District Director, and the procedures SBA’s Office of Advocacy used in hiring Regional Advocates and Assistant Advocates during calendar year 1998. In addressing the first part of this objective--the procedures used to hire District Directors from outside SBA and to set their salaries--we asked HR officials to identify all District Director changes that occurred between January 1, 1993, and December 31, 1998. We then focused upon six cases in which the District Director positions were filled by outside hires.
We identified the starting salaries set by SBA in each of the six cases and, using previous employment and salary information contained in the individuals’ application materials, compared that information with the applicable federal laws and regulations governing salary setting to determine whether SBA complied with them. We also determined where the other individuals who were appointed to District Director positions since January 1, 1993, had come from, and we focused upon two cases involving individuals who had held non-District Director SBA positions before being appointed as District Directors. For each of the eight cases, we reviewed available information, including available competitive examination process case files. In some cases the appointments were made over 2 years before our review began, and SBA did not have a complete case file on the examination and appointment process. We examined available records and compared the examination and appointment procedures used to the competitive examination and appointment requirements contained in the Code of Federal Regulations (C.F.R.) and in SBA’s merit staffing plan.

In addressing the second part of the objective--the procedures used by SBA’s Office of Advocacy to hire Regional Advocates and Assistant Advocates during 1998--we first obtained a copy of, and reviewed, the federal statute that provides unique hiring authority to the Chief Counsel for Advocacy. We then interviewed the Deputy Chief Counsel for Advocacy and obtained information on nine employees hired during calendar year 1998. We examined information contained in each of their official personnel folders (OPFs), including the appointment authorities that were cited and the period of their appointments.
Because the Chief Counsel for Advocacy has special hiring authority that is exempt from competitive examination procedures, case files and other documentation that is required under competitive examination procedures did not exist in these cases. As a result, we had to rely principally upon interviews with the Chief and Deputy Chief Counsels for Advocacy in determining the recruiting and hiring processes that were used in these cases.

Our third objective was to determine if SBA followed applicable federal laws and regulations to set the salaries of, and provide salary increases to, political appointees and former congressional (Ramspeck Act) employees hired between October 1, 1991, and September 30, 1998. We first identified those employees by obtaining an accession list from HR officials of political appointees and former congressional employees hired under the Ramspeck Act since October 1, 1991. We then obtained the OPF of each of those employees and reviewed the Notification of Personnel Action forms (SF-50s) that were filed in the OPFs. SF-50s contain salary-setting and salary-increase information on each employee and are supposed to be a permanent part of an employee’s OPF. We examined the salaries that SBA set for those employees and the salary increases SBA provided to them for compliance with the regulations governing salary setting and salary increases contained in the C.F.R. and in SBA’s policies. Because OPFs are to be maintained by the agencies for which the employees work, we obtained OPFs from (1) SBA’s HR officials in cases in which the employee was still working for SBA at the time of our review, (2) HR officials of other agencies in cases where the employees had transferred from SBA to another agency, and (3) officials of the National Archives and Records Administration’s National Personnel Records Center in cases in which the employees were no longer working for the government and their OPFs had been archived.
In some cases, we were unable to locate and obtain OPFs of former SBA employees. In those cases we relied instead on personnel records contained in SBA’s personnel database.

Our fourth objective was to determine whether SBA adequately controlled the interagency detailing of its employees during fiscal years 1992 through 1998. We obtained information from SBA’s HR officials that identified the SBA employees who were detailed to other agencies and the agencies to which they were detailed during that time period. (SBA officials also provided us with information on other agency employees who were detailed to SBA.) We searched federal regulations, Comptroller General decisions, and the Federal Personnel Manual (no longer in use) to ascertain what governmentwide guidance was available to SBA on the matter of interagency details. We also obtained from SBA’s HR officials agency personnel guidance regarding the detailing of employees and the interagency agreements that were used to effect and set the terms of the details. We examined the OPFs of those SBA employees who were detailed and searched for relevant documentation supporting the details. Using all available documented information, as well as information obtained from interviews with officials from SBA’s Office of Human Resources, we compared the circumstances of each detail to the criteria for details included within SBA’s personnel guidance. Using available documented information, as well as information obtained from interviews with officials from SBA’s Office of the Chief Financial Officer, we examined (1) the status of recovering costs associated with the reimbursable interagency details and (2) the length of time each detail was to last and actually lasted.
Our fifth objective was to determine what positions were newly created or abolished between March 1996 and March 1998 in SBA’s Office of the Administrator, Office of the Deputy Administrator, Office of the Chief of Staff, Office of the Chief Operating Officer, and Office of the Associate Administrator for Field Operations, as well as the sources of appointees to the newly created positions and the status of the employees whose positions were abolished. We first determined the staffing changes that had occurred in each of those offices by comparing the organizational sections of SBA’s telephone books for 1996 and 1998. Those organizational sections identified specific employees of each of those offices by name and title. For those positions that were newly established since 1996, we obtained position descriptions from SBA’s HR officials. We then interviewed SBA’s HR and Chief of Staff officials to discuss the establishment of, and appointments to, those new positions. We also discussed with them the circumstances related to positions being abolished. We examined information contained in the official personnel folders, or examined relevant personnel information otherwise available and maintained by SBA’s Office of Human Resources, to determine the status of those employees whose positions were abolished.

Our sixth objective was to determine the status of SBA’s response to a December 1997 congressional mandate to establish a Senior Executive Service (SES) position within SBA’s Office of Women’s Business Ownership. We first researched the law mandating that the position be established. We then obtained information from SBA’s HR officials on all executive level positions that had been established and filled during the fiscal year the legislative mandate became effective. We interviewed cognizant SBA officials and obtained their explanation, as well as relevant documents, as to the status of the establishment and filling of the mandated position.
We compared this information to information we obtained from SBA on how it established and filled other Senior Executive Service positions during the period after the congressional mandate but prior to compliance with the mandate.

Finally, our seventh objective was to determine whether SBA Regional Advocates attended a White House-sponsored political appointee meeting during fiscal year 1997. Our first step in determining which Regional Advocates attended the White House-sponsored meeting was to obtain their travel vouchers and review them for evidence that they traveled to the White House. We then contacted White House officials and confirmed that the Regional Advocates had visited the White House on the date the meeting occurred. We prepared a summary schedule of each instance we found and then contacted six former or current Regional Advocates by telephone to discuss the circumstances of their visits to the White House. We also interviewed the Chief and Deputy Chief Counsels for Advocacy and obtained their perspectives on why the Regional Advocates may have attended the White House-sponsored meeting.

In addressing our seven objectives, we relied extensively upon personnel records provided to us by SBA’s HR officials. Some of the information contained in those records dated back to before October 1991. A significant portion of the records came from SBA’s personnel records database. This database is contained on computers maintained by the U.S. Department of Agriculture’s National Finance Center (NFC) in New Orleans, LA. NFC provides computer-based payroll and personnel services to many federal agencies, and the system is periodically examined for reliability by government and outside auditors. We did our work in Washington, D.C.; Denver, CO; and St. Louis, MO, from March 1997 through January 1999 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from SBA.
These comments are discussed on page 37 of this letter and are reprinted in appendix VI.

On the basis of personnel-related data as of September 2, 1993, and as of April 1997, provided to us by SBA Office of Human Resources (HR) officials, we determined that the total number of employees assigned to SBA’s 10 regional offices was reduced from 504 to 29. We examined additional individual records that were available and determined the status--following SBA’s reorganizing and downsizing efforts--of all but 25 of the 504 employees. In those 25 cases, individual employee records were not readily available, so the status of those employees could not be determined. Table II.1 below shows the status by the regional offices to which the 504 employees were assigned.

SBA had 69 district offices in 1998 located throughout the United States and in Puerto Rico. During the period January 1, 1993, through December 31, 1998, SBA appointed 46 District Directors to 39 district offices.1 In 6 of the 46 cases, the District Directors were newly hired and appointed to their positions from outside SBA after participating in a competitive examination process. In the other 40 cases, the individuals appointed were employees of SBA. This appendix provides case information on the six cases where the District Directors came from outside SBA. It also provides information about the appointments of SBA employees to District Director positions. During calendar years 1994, 1996, 1997, and 1998, SBA filled six vacant District Director positions with outside hires under competitive examination processes. In each of the six cases, to the extent that relevant documentation was available to review, we examined the steps SBA followed in the appointment process and determined that SBA appeared to have followed all relevant federal laws and regulations. In each case the individual selected was listed among the best-qualified candidates.
In one case, we received allegations that the appointee benefited from favoritism—through use of political connections—in obtaining her career appointment as a District Director. We identified information that indicated that the appointee had worked for the Office of the Governor of Colorado. The Governor of Colorado also served as the Chairman of the Democratic National Committee at the time the appointee obtained her District Director position in Denver, Colorado. Despite obvious political connections to the Democratic party, we did not identify factors that we believe would prove favoritism was involved in the appointment process in this case. Determining whether or not favoritism was used in such a case is extremely difficult and can be done only if the intent and motivation of the selecting officials were known. Each of the six cases we reviewed is presented separately below. Case 1: On May 26, 1994, a former SBA employee was appointed to the position of District Director of the Washington, D.C., District Office. Most District Director positions in SBA are at the GS-14 or GS-15 grade level. However, this position was advertised in the vacancy announcement as an SES career position and was open to all qualified federal employees. The number of appointments exceeded the number of offices because some individuals received multiple appointments during the time period; e.g., they were reassigned from one District Director position to another. During the years 1985 through 1988 the appointee served in a noncareer SES capacity at SBA as Associate Administrator for Minority Small Business. Since then, he had held other high-level positions at the Departments of Commerce and State as well as in the private sector. Information contained in his application materials showed that he had received a Bachelor of Arts degree in sociology from the University of Puerto Rico. 
Also, information contained in his application materials showed that his salary had progressed from about $23,000 per year in 1976 when employed by the former U.S. Department of Health, Education, and Welfare; to $68,000 per year in 1985 when initially employed by SBA; to $90,000 per year in 1994 when employed by the Department of Commerce; and finally, to about $112,000 when appointed by SBA to the SES District Director of Washington, D.C., position. (SES appointments are not subject to the rules and regulations of title 5 of the U.S. Code that pertain to competitive service appointments, and heads of agencies have greater discretion in setting the salaries of SES appointees.) Relevant documentation regarding the other applicants and the competitive examination process used to fill this District Director position was unavailable. According to SBA's HR officials, there was no requirement to retain such information beyond 2 years, and it is possible that the information was destroyed before we began our review. (According to 5 C.F.R. 335.103(b)(5), such records may be destroyed after 2 years or after the agency's merit promotion program has been formally evaluated by the U.S. Office of Personnel Management (OPM).) Case 2: On June 19, 1994, an individual experienced in banking and mortgage lending was appointed to the position of District Director of the Cleveland, OH, District Office. Information in his resume showed that he received a Bachelor of Science degree in International Affairs from Georgetown University and a Master of Business Administration degree from the University of Notre Dame. Also, information contained in his resume showed that his salary had progressed from about $38,000 per year in 1978 as a Vice President of a bank, to about $75,000 per year in 1991 as a Vice President of a different bank. This individual's salary decreased to about $68,000 when he was appointed by SBA to the GS-15, step 1, District Director of Cleveland, OH, position.
The vacancy announcement advertised the position as open to all qualified applicants in the United States. Information obtained from SBA showed that before the position was advertised, an SBA employee who had graduated from SBA's District Director Candidate Development Program had expressed interest in being appointed to the position. According to SBA's HR officials, successful graduates of SBA's District Director Candidate Development Program can be noncompetitively appointed to District Director positions. HR officials told us they conveyed to the SBA Administrator the employee's interest in being appointed to the position. However, according to the HR officials, without explanation the SBA Administrator directed that the position be advertised to the public. Information obtained from the Office of Human Resources showed that at least 37 individuals applied for the position. After evaluating the qualifications of the 37 applicants, SBA's Office of Human Resources established two certificates of numerically rank-ordered applicants from which a selection could be made. One certificate contained the names of five applicants eligible for appointment at the GS-15 grade level. The other certificate contained the names of five applicants eligible for appointment at the GS-14 grade level. According to federal personnel regulations, an individual selected from a rank-ordered certificate should be among the top three highest-ranked applicants on the certificate, and a preference-eligible veteran should not be bypassed. In this case, the appointee was selected from the GS-15 certificate. He was tied with another applicant in having the highest numerical ranking of those listed on that certificate, and both of them were awarded 5 extra points because they provided evidence of being preference-eligible veterans. The appointee was ranked second highest on the GS-14 certificate.
The highest-ranking applicant on that certificate claimed to be a preference-eligible veteran and was awarded 10 extra points because he claimed to be at least 10-percent disabled. Information regarding the SBA employee who had graduated from the District Director Candidate Development Program and who had indicated an interest in being appointed to the position showed that he was serving as a GS-13 Financial Analyst in SBA. The information also showed that he had received a Bachelor's degree in Business Administration from Howard University. Documentation on the other applicants who applied under the vacancy announcement was not available. According to officials from the Office of Human Resources, given that the case was over 3 years old at the time of our review, most of the documentation regarding those applicants was likely destroyed. Case 3: On May 29, 1996, an individual who had previously served as an SBA Regional Administrator, and who had since worked as an executive in banking and communications and in the Government of Puerto Rico, was appointed to the position of District Director of the Hato Rey, PR, District Office. According to information contained in his application materials, he received a Bachelor of Arts degree in government and economics from the University of Puerto Rico and had met the credit requirements for a Master of Public Administration degree at New York University.
Also, information contained in his application materials showed that his salary had progressed from about $16,000 per year in 1970 when employed as an Assistant Vice President of a bank; to about $50,000 in 1977 when employed as a GS-17 SBA Regional Administrator for New York; to about $86,000 in 1996 as the Deputy Administrator for Economic Development Administration, Government of Puerto Rico; and finally to about $90,000 when appointed by SBA to the GS-15, step 10, District Director of Hato Rey, PR, position. (The SES, established in 1978, incorporated the GS pay grades of GS-16, 17, and 18; therefore, this individual's pay grade during the period when he was Regional Administrator of SBA's New York Office was equivalent to today's SES pay grades.) This individual was selected for the District Director position through a competitive examination process that began with a vacancy announcement advertising the job as a GS-14/15 position open to all qualified applicants. Information obtained from SBA's Office of Human Resources showed that at least 27 people applied for the position. After evaluating the qualifications of the 27 applicants, SBA's Office of Human Resources established two certificates of numerically rank-ordered applicants from which a selection could be made. One certificate contained the names of six applicants eligible for appointment at the GS-15 grade level. The other certificate contained the names of 12 applicants eligible for appointment at the GS-14 grade level. The appointee was selected from the GS-15 certificate, on which he was the highest-ranking applicant. He provided evidence of being a preference-eligible veteran and was awarded 5 extra points. He was also the highest-ranking applicant on the GS-14 certificate. Application materials from other applicants for this position were available for our review. We reviewed the employment histories and education information contained in those application materials. The applicant who was ranked second highest on the GS-15 certificate had served as Secretary of the Treasury of the Government of Puerto Rico in the 1980s. Also, she had served in executive positions with a major bank, a State of New York finance agency, and a private sector securities corporation. This applicant received a Bachelor of Business Administration degree from the University of Puerto Rico and a Master of Business Administration degree from the Wharton School of Business. She would have been the highest-ranked applicant on the GS-15 certificate had the appointee not benefited from the 5 extra points awarded under provisions of veteran preference laws. The third highest-ranked applicant on the GS-15 certificate was employed by SBA as a GS-14 Supervisory Attorney in a District Office. He too had received a Bachelor's degree from the University of Puerto Rico. In addition, he claimed he had met all credit requirements for a Master of Public Administration degree from that same university and a Juris Doctorate degree from the Inter-American University. Case 4: On February 3, 1997, an individual who had previously worked in the State of Colorado Governor's Office was appointed to the position of District Director of the Denver, CO, District Office. According to her application materials, at the time she applied for the District Director position, she was serving as the Director for Citizen Advocacy and Outreach and as the Colorado State Diversity Coordinator for the State of Colorado. Before that, she was in business for herself as a human resources consultant and had worked for OPM as a GS-11 Personnel Management Specialist. Her application materials indicated that she had earned college credits in business administration courses from the University of Albuquerque and from the University of New Mexico. However, there was no indication that she had earned a college degree.
None of the vacancy announcements for District Director positions we reviewed stated that a college degree was required. Also, her application materials showed her salary had progressed from about $14,000 per year in 1974 when she was employed as a GS-9 Administrative Assistant working at the Caribou National Forest; to about $31,000 per year in 1984 when employed as a Personnel Management Specialist by OPM; to $100,000 per year in 1990 as a self-employed human resources consultant. This individual's salary decreased to about $77,000 when she was appointed by SBA to the GS-14, step 7, District Director of Denver, CO, position. She was selected for the District Director position through a competitive examination process that began with a vacancy announcement advertising the job as a GS-14/15 position open to all recruiting sources. At least 45 other applicants applied for the position. After reviewing the application materials of all of the applicants, SBA's HR officials prepared several certificates and rosters of eligibles from which a selection could be made. One certificate contained the names of eight applicants, in numerically ranked order, eligible for appointment at the GS-15 level. Another certificate contained the names of 18 applicants, in numerically ranked order, eligible for appointment at the GS-14 level. The appointee was selected from a separate roster of applicants eligible for appointment at the GS-14 level under SBA's Merit Promotion and Placement Program. This was an alphabetical listing of 11 qualified applicants who already had competitive service status in the government. The appointee had acquired competitive service status from her previous employment with the federal government and was therefore eligible to be reinstated into the competitive service. (According to 5 C.F.R. 335.103(b)(4), agencies may consider applicants eligible for reinstatement into the competitive service as part of the agency's merit promotion program.)
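As background, the two selection vehicles described in these cases (numerically ranked certificates with veteran preference points, versus alphabetical rosters of status applicants) can be sketched in a few lines of code. The model below is purely illustrative: the data structure, function names, and the hard-coded treatment of the bypass rule are simplifications for exposition, not SBA's or OPM's actual examination system.

```python
# Illustrative model of the certificate vs. roster distinction.
# All names and rules here are simplifications for exposition only.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    base_score: float          # numerical rating from the examination
    preference_eligible: bool  # claims veteran preference
    disabled_10pct: bool       # at least 10-percent disabled veteran

def adjusted_score(a: Applicant) -> float:
    """Veteran preference: 10 extra points for a veteran who is at least
    10-percent disabled, otherwise 5 for any preference-eligible."""
    if a.disabled_10pct:
        return a.base_score + 10
    if a.preference_eligible:
        return a.base_score + 5
    return a.base_score

def certificate_choices(applicants: list[Applicant]) -> list[Applicant]:
    """A certificate is numerically rank-ordered; the selecting official
    must pick from the top three and (modeled here as a hard rule) may not
    pass over a preference-eligible for a lower-scoring
    nonpreference-eligible."""
    ranked = sorted(applicants, key=adjusted_score, reverse=True)
    top_three = ranked[:3]

    def blocked(a: Applicant) -> bool:
        # A nonpreference-eligible cannot be chosen while a
        # higher-scoring preference-eligible sits on the certificate.
        return (not a.preference_eligible) and any(
            p.preference_eligible and adjusted_score(p) > adjusted_score(a)
            for p in top_three
        )

    return [a for a in top_three if not blocked(a)]

def roster_choices(applicants: list[Applicant]) -> list[Applicant]:
    """A roster lists qualified status applicants alphabetically; the
    selecting official may pick anyone on it."""
    return sorted(applicants, key=lambda a: a.name)
```

Under this simplified model, a nonpreference-eligible in the top three cannot be chosen when a preference-eligible outscores him, which mirrors the bypass restriction described in the cases above, while a roster imposes no ordering constraint at all.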
(A certificate of eligibles is used when the applicants do not have competitive service status and their qualifications must therefore be examined and rated numerically against other nonstatus applicants; the selecting official is required to select from the top three rated applicants on the certificate and may not pass over a preference-eligible for a nonpreference-eligible without sufficient justification. A roster of eligibles, in contrast, is prepared for applicants who do have competitive service status and whose qualifications meet the minimum requirements for the position; these applicants are listed in alphabetical order, and the selecting official may select anyone on the list.) We became aware of allegations that the appointee obtained this District Director position through her political connections. We noted that the appointee's application materials showed that at the time she applied for the position, she was working for the Office of the Governor of Colorado. The Governor was also serving at the time as the Chairman of the Democratic National Committee, and through this position he had dealings with other Democratic party leaders, including President Clinton. We also noted that a key official in the selection process was the Regional Administrator of SBA's Denver Regional Office, to whom the person selected as District Director of the Denver, CO, District Office would report. The Denver Regional Administrator, like the other nine SBA Regional Administrators, was a political appointee. However, despite these facts, and the appearance they give that favoritism could have been involved in the selection process, we did not identify other factors that would conclusively demonstrate that favoritism was in fact used in the appointment process in this case. According to a memorandum from the Chairman of SBA's Executive Resources Board (ERB) to the SBA Administrator, the ERB Chairman, the Regional Administrator of SBA's Denver Regional Office, the Assistant Administrator of SBA's Office of Human Resources, and three other SBA officials had interviewed the appointee and two other candidates. According to the memo, they determined that the appointee was clearly the top choice of each of the interviewers. They said that the appointee had demonstrated the requisite ability, energy, and enthusiasm needed for the position. Employment histories and education information contained in the application materials of the other two candidates who were interviewed showed that one was a GS-14 Supervisory Contract Specialist at another federal government agency. This applicant received a Bachelor of Science degree in Human Resource Management from the University of Wyoming and a Master of Business Administration degree from California State University. The other applicant who was interviewed was the Acting District Director of the Denver District Office. He had obtained a temporary promotion to the GS-15 level as the Acting District Director. His application materials indicated he had taken college courses in accounting, but there was no indication that he had earned a college degree. Information contained in the application materials of the two highest-ranking applicants on the GS-15 certificate showed that the highest-ranked applicant had been the Deputy Director for Economic Development for the State of Ohio. He received a Bachelor of Science degree from Tennessee State University in 1969 and a Master of Administration degree from Central Michigan University in 1979. The second highest-ranked applicant on this certificate had been a Credit Specialist with the Federal Deposit Insurance Corporation. Before that, he was an executive-level Subsidiary Program Manager at the Resolution Trust Corporation and had served as a Vice President of two different banks.
He received a Bachelor of Science degree from California State Polytechnic University in 1968 and a Master of Business Administration degree from that same university in 1972. Information contained in the application materials of the two highest-ranking applicants on the GS-14 certificate showed that the highest-ranked applicant had been a GS-13 (equivalent) Supervisory Management Analyst with the Department of the Army. He received a Bachelor of Arts degree in sociology from the University of the State of New York in 1976. He also received a Master of Business Administration degree from Pepperdine University in 1987. The second highest-ranked applicant on this certificate had been an executive-level Director of Asset Management at the Resolution Trust Corporation. Prior to that, he had served as General Counsel and Senior Vice President of a savings and loan association. He received a Bachelor of Science degree in public administration from Florida State University in 1957, and a Juris Doctorate degree from Florida State University in 1973. Case 5: On August 11, 1998, an individual who had previously been an entrepreneur in the home health services field was appointed to the position of District Director of the New Orleans, LA, District Office. According to information contained in his application materials, as an entrepreneur the individual had participated in many of SBA's programs. The individual's application materials cited the names of U.S. and local politicians who could be contacted as references and contained letters of recommendation from local politicians. According to information contained in his application materials, the individual obtained a Master of Business Administration degree from the University of New Orleans and a Bachelor of Science degree from Xavier University of New Orleans.
The application materials also showed that his salary had progressed from about $14,000 per year in 1974 when employed as a Vice President of an oil field drilling fluids and service company; to $65,000 per year in 1992 as the Chief Executive Officer (CEO) of a precision machine shop; to $75,000 in 1997 as President and CEO of his own home health care company; and finally to about $76,000 when appointed by SBA to the GS-15, step 1, District Director of New Orleans, LA, position. This individual was selected for the District Director position through a competitive examination process that began with a vacancy announcement advertising the job as a GS-14/15 position open to all recruiting sources. According to an SBA HR official who initially reviewed all application packages for this position, between 60 and 70 people applied for the position. Information obtained from SBA's Office of Human Resources showed that five selection certificates and rosters were prepared that contained the names of 21 applicants found to be best qualified for the position. Two certificates contained the names of best-qualified applicants in numerically ranked order, and three rosters contained the names of best-qualified applicants in alphabetical order. Of the two numerically ranked certificates, one contained the names of three individuals, including the appointee, who were eligible for appointment at the GS-14 level; the appointee ranked third on this certificate. The other numerically ranked certificate contained the names of three individuals, including the appointee, who were eligible for appointment at the GS-15 level; the appointee was ranked second on this certificate and was appointed from it. However, the original certificate did not contain the appointee's name, and the certificate was amended to include it.
According to the SBA HR official who handled the case, he was initially unaware that the appointee's application package contained both an abbreviated resume and a more substantive resume. The HR official told us that he examined the abbreviated resume and determined the appointee was eligible for appointment only at the GS-14 level. According to the HR official, when the appointee learned that he was selected for appointment at the GS-14 level, the appointee appealed for reconsideration for appointment at the GS-15 level. The official told us that during a discussion of the matter with the appointee, the appointee referred to the substantive resume that he had submitted as part of his application materials. The HR official told us that he reexamined the appointee's application materials and found that the more substantive resume had been overlooked. On the basis of the HR official's reexamination of the application materials—including the more substantive resume—and on the basis of an independent examination of the same application materials by a second HR official, a determination was made that the appointee did qualify for appointment at the GS-15 level. As a result, an amended certificate was prepared to include the appointee's name. The SBA HR official who conducted the initial examination and the reexamination told us that such mistakes sometimes occur because of the overwhelming number of applicants and the volume of materials each applicant submits for District Director positions. He also said that new procedures have recently been put into place that require a second HR official to also examine all application packages for District Director positions. He believes the use of two independent examiners should help prevent the recurrence of such mistakes.
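The safeguard the HR official described, a second independent examination of every package, amounts to a simple agreement check: the grade determination is accepted only when both examiners reach the same result. A minimal sketch, with a hypothetical function name and grade strings (not SBA's actual procedure):

```python
def reconcile_reviews(first_grade: str, second_grade: str) -> str:
    """Accept a qualification determination only when two independent
    examiners agree; otherwise force a re-examination rather than
    silently using either result. (Illustrative sketch only.)"""
    if first_grade == second_grade:
        return first_grade
    raise ValueError(
        f"examiners disagree ({first_grade} vs. {second_grade}); "
        "the package must be re-examined"
    )
```

Had such a check been in place, a GS-14 determination based only on the abbreviated resume and a GS-15 determination based on the full package would have conflicted, surfacing the overlooked resume before any certificate was issued.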
Of the three rosters from which a selection could have been made, one contained the names of two individuals—in alphabetical order—who had indicated interest in being transferred to the position at the GS-15 level. A second roster contained the names of two individuals—in alphabetical order—who were found eligible for promotion to the position at the GS-15 level. And the third roster contained the names of 11 individuals—in alphabetical order—who were found eligible for promotion to the position at the GS-14 level. We noted that the appointee’s application materials contained the names of two U.S. Senators and two U.S. Representatives, all from Louisiana, who could be contacted as references for the applicant. Also cited as references were the names of the mayors of the cities of New Orleans and Alexandria, LA. The application materials also included letters of recommendation from the mayors of the cities of Slidell and Opelousas, LA, as well as from several councilmen of the city of New Orleans. On the basis of these cited references and letters of recommendation, we questioned SBA’s HR officials about any political connections that may have been used in the competitive examination process. SBA HR officials from both headquarters and from the SBA Denver, CO, personnel processing center told us that other than the information that was contained in the application package, they were not aware of any political connections the appointee may have had within SBA or elsewhere within the administration. They also claimed that they did not contact the cited references and that there was absolutely no pressure of any kind placed upon them as they conducted the competitive examination process in this case. Other than the information contained in the application materials, we did not identify any other information that would indicate the use of political connections in this case. 
We did inquire about whether or not the appointee remained involved with any prior business that may be participating in SBA programs. SBA's HR officials told us that the appointee was completely out of all prior businesses at the time that he was appointed to the District Director position. Our review of the application packages of the other best-qualified applicants is summarized below. Both of the other two individuals who were listed on the numerically ranked GS-15 certificate from which the appointee was selected were SBA employees. One had been serving as the Acting District Director of the New Orleans District Office since December 17, 1997, and the other was serving as the District Director of SBA's Detroit, MI, District Office. The Acting District Director was formerly the Assistant District Director for Economic Development, Finance, and Investment. According to his application materials, he obtained a Bachelor of Science degree in Personnel Management in 1961 from Louisiana State University. According to the other individual's application materials, he had been serving as the District Director of the Detroit, MI, District Office since June 1995 and obtained a Bachelor of Science degree in Mathematics from Howard University in 1968. Of the other two individuals who were listed on the numerically ranked GS-14 certificate, one was a GS-13 Supervisory Business Marketing Executive with the Defense Reutilization and Marketing Service. According to his application materials, he obtained a Bachelor of Arts degree in Business Administration from the University of Maryland in 1975 and a Master of Arts degree in Management Supervision and Personnel Management from Central Michigan University in 1981. According to application materials from the other individual, he was president and owner of a steel cleaning, coating, and fabrication company.
His application materials showed he obtained a Bachelor of Arts degree in engineering sciences from Dartmouth College in 1975 and had taken master's-level courses in finance and engineering at the University of Pittsburgh and at the Illinois Institute of Technology. Application materials from applicants listed alphabetically on the rosters showed they had acquired levels of experience and education that ranged from a GS-15 director of a federal program at another agency who held a doctorate degree in engineering, to current SBA employees serving in positions of lesser grade and responsibility than that of District Director who had various levels of formal education. Case 6: On August 30, 1998, an individual who had previously been a GS-15 Industrial Production Officer in the Department of the Air Force was appointed to the position of District Director of the Sacramento, CA, District Office. Information contained in the appointee's application materials showed that this individual obtained a Bachelor of Science degree from Wayland College in 1979. The information also showed that the individual's salary progressed from about $46,000 per year in 1990 when he retired from the Air Force at the rank of Captain; to about $51,500 per year in 1995 as a GS-13 civilian employee of the Air Force; to $75,000 per year in 1997 as a GS-15 Division Chief in the Air Force; and finally to about $82,000 when appointed by SBA to the GS-14, step 8, District Director of the Sacramento, CA, position. SBA conducted a competitive examination in filling this position that began with a vacancy announcement advertising the job as a GS-14/15 position open to all sources. Information regarding the total number of applicants for this position was not included in the materials we reviewed. However, available information showed that SBA established six certificates and rosters containing the names of 17 applicants whom SBA had determined to be the best qualified for the position.
The appointee already had competitive service status from his previous employment with the Air Force, and SBA found him qualified for appointment to the District Director position at the GS-14 level. His name was listed on the GS-14 roster in alphabetical order with the names of other status applicants found to be qualified for appointment at that level. The appointee’s application materials included (1) a letter from him to SBA advising that he was to be adversely affected by a reduction in force (RIF) action due to the upcoming closure of his Air Force base and (2) a copy of a memorandum to him from the Air Force advising him of the impending RIF. Of the six certificates and rosters from which a selection could have been made, one was a certificate that contained the names of seven individuals in numerically ranked order who were eligible for appointment to the position at the GS-14 level. The top three candidates on that list included a GS-13 Project Manager employed by the Department of Defense (DOD); a Chief Administrative Officer of a California university school of medicine; and the owner of the steel cleaning, coating, and fabrication company who was also identified as a top candidate for the position discussed in case 5 above. Application materials from the GS-13 DOD Project Manager showed that he obtained a Bachelor of Science degree in journalism from Northwestern University and a Master of Arts degree in economic development from the University of Wisconsin. The dates these degrees were awarded were not shown. The application materials from the Administrative Officer of a California university school of medicine showed that he obtained a Bachelor of Science degree from the University of California in 1972, a Master of Business Administration degree from UCLA in 1976, and a Master in Public Health degree from UCLA in 1976. 
The materials also showed that he was working on a doctoral degree in public policy when he applied for the District Director position. Five rosters contained the names of 13 individuals, in alphabetical order, including the appointee, who were eligible for promotion to either the GS-14 or GS-15 level, were eligible for transfer at either the GS-14 or GS-15 level, or were eligible for noncompetitive appointment at the GS-14 level due to Peace Corps service. Application materials from the nonselected applicants on these rosters showed they had acquired levels of experience and education that ranged from a former GS-15 Deputy Regional Manager at SBA who held a Master of Public Administration degree, to a former Peace Corps Volunteer who had been a partner in a law firm and who, in addition to her Juris Doctorate degree, had obtained a Master of Business Administration degree from the University of California at Berkeley in 1983, a Master of Arts degree in Special Education from the University of Northern Colorado in 1975, and a Bachelor of Arts degree in Sociology from Colorado College in 1972. As noted earlier in this appendix, 40 appointments of SBA employees were made to District Director positions between January 1, 1993, and December 31, 1998. In 14 of these appointments, the individuals had recently graduated from SBA's District Director Candidate Development Program and were noncompetitively appointed to their District Director positions. In nine of these appointments, the individuals had been reassigned from District Director positions in other district offices. In the remaining 17 cases, the individuals had held various other positions at SBA and had either competed for, or were reassigned to, their District Director positions. In order to understand how some of these latter appointments to District Director were made, we examined the circumstances in 2 of the 17 cases.
In both cases, on the basis of the information available for our review, it appeared that SBA followed procedures consistent with federal laws and regulations in filling the District Director positions. Case 1: On July 23, 1995, the SBA Deputy Regional Administrator for region I in Boston, MA, was reassigned to the position of District Director of the Boston District Office. The previous District Director had retired. According to an SBA HR official familiar with this case, there were two principal reasons for reassigning the Deputy Regional Administrator to the position of District Director of the Boston office. First, as part of SBA's reorganization and downsizing effort, the number of employees at each of the 10 SBA Regional Offices was being significantly reduced. As a result, there was a need to reassign the Deputy Regional Administrator to another position within SBA. Second, the HR official told us that the Boston District Director vacancy had been advertised to the public, but SBA management was not happy with the quality of the applicants and determined that reassigning the Deputy Regional Administrator to that position was the best option. According to information contained in her application materials for the Deputy Regional Administrator position, this individual had worked for Senator George Mitchell's Office since 1989 and was losing her job due to the Senator's retirement. Using her Ramspeck Act eligibility, she obtained a career appointment at SBA as a GS-15, step 6 ($84,791), Deputy Regional Administrator. Her appointment was made on January 22, 1995, just 6 months before she was reassigned to the position of District Director of the Boston District Office. Her grade level and salary remained unchanged when she was reassigned to the District Director position. Since then, this individual has been reassigned again, this time to the position of District Director of the Augusta, ME, District Office.
Her application materials showed that she obtained a Bachelor of Arts degree from Merrimack College in 1963. Her replacement as District Director in Boston was an individual who had been holding the position of Boston Regional Advocate. This individual was selected for the position of District Director of the Boston office through a competitive examination process after applying for the position, which was advertised as open to all sources. Case 2: On August 26, 1996, the Regional Advocate for King of Prussia (Philadelphia) was selected for, and appointed to, the position of District Director of the Clarksburg, WV, District Office. In this case, the position was filled through a competitive examination process that began with a vacancy announcement advertising the job as a GS-14/15 position open to all recruiting sources. According to information that was available for our review, a certificate of numerically ranked eligibles was prepared that contained the names of two applicants. The appointee had the higher ranking of the two and was selected and appointed to the position at the GS-15, step 1 level ($72,162). This was about a $2,000 decrease from her salary as Regional Advocate. Limited information on two other applicants was available for our review, but the case file was incomplete and information on any other applicants was not included. According to the appointee's application materials, she had served as an Account Coordinator and as a Research Assistant in the private sector before becoming Development Administrator for a West Virginia software foundation in 1989. In the latter position she described the foundation as having been started by Senator Byrd of West Virginia, and she listed the Governor of West Virginia as her supervisor. She also claimed to have worked with Senator Byrd's staff in this position, facilitating subcontracting agreements.
Also, according to her application materials, she received a Bachelor of Science degree in journalism in 1987 from West Virginia University and a Master's degree in administration in 1991, also from West Virginia University. In April 1994 she received a noncompetitive appointment to the position of Regional Advocate in the King of Prussia office. Her appointment to that position was not to exceed May 3, 1995 (13 months), and her salary was set at about $50,300 (equivalent to the GS-13, step 1 level). In October 1994 her salary was increased to about $61,400 (equivalent to the GS-14, step 2 level); and on May 4, 1995, she was reappointed as Regional Advocate not to exceed May 3, 1996 (13 months). According to her application materials for the District Director position, beginning in July 1995 she served as Acting Regional Administrator for region III, which included King of Prussia as well as Clarksburg, WV, and her salary was temporarily increased to about $72,200 (equivalent to the GS-15, step 1 level). In December 1995, she was again reappointed as Regional Advocate, not to exceed November 19, 1996. On August 26, 1996, she obtained a career-conditional appointment as GS-15, step 1, District Director of the Clarksburg, WV, District Office. At that time, her salary as Regional Advocate had been about $74,000. Her salary was reduced to the GS-15, step 1 level of about $72,100 at the time of her District Director appointment. The other individual on the certificate of eligibles, who also could have been selected, had appeared on certificates of eligibles for other SBA District Director positions. He was a Director of Logistics at the GS-14 level in the Department of Defense. In addition to this applicant, according to the SBA ERB, a recent graduate of SBA's District Director Candidate Development Program also expressed interest in the Clarksburg District Director position.
ERB, in a memorandum to the SBA Administrator, identified this individual as a very strong candidate but said it believed that the individual would have excellent placement potential in other District Director positions, and it recommended that the appointee be selected.

Using SBA documents, we prepared the following tables to provide additional information on interagency details of SBA employees during fiscal years 1992 through 1998.

[Table IV.1: Reimbursable Details From SBA to Other Agencies During the Period FY 1992 Through FY 1998. The table's entries (receiving agencies, planned and actual durations in days, and dates of reimbursement) were not recoverable from this extraction. Table note: SBA could not provide sufficient, documented evidence to show the actual length of the detail.]

SBA officials told us that SBA's internal control processes regarding interagency details needed improvement. SBA was unable to provide sufficient, documented evidence in most cases that it had received the correct amounts of reimbursements for its employees detailed to other agencies on a reimbursable basis. At the time of our review, SBA officials indicated that the internal control problems were being corrected and that billings to the other agencies were going to be prepared.

[Second table of interagency details: the entries (grades, receiving organizations, and planned and actual durations in days) were not recoverable from this extraction. Receiving organizations included the Office of the Vice President, National Performance Review; the Export-Import Bank; the Office of Congressman Michael P. Forbes; the Executive Office of the President; and the Department of the Navy. Table notes: SBA could not provide sufficient, documented evidence to show the actual number of days of the detail. No interagency agreement was available for examination in one case, so the planned ending date and the planned number of days of the detail are unknown. Information on one individual's grade was not available.]
Executive Assistant: In 1996, the Executive Assistant position existed as an excepted service, Schedule C position and was filled by a GS-13 political appointee. In February 1997, the position was newly established as a competitive service position, which meant that it had to be filled by a career federal employee rather than a political appointee. The political appointee who formerly held the excepted service position resigned. The current incumbent of the new position, a GS-13 career-conditional employee, transferred to SBA from the Office of Federal Housing Enterprise Oversight (OFHEO), an independent agency within the Department of Housing and Urban Development. At OFHEO, she had been converted from an excepted service appointment to a career-conditional competitive service appointment about 1 month before transferring to SBA. White House Liaison: This position was filled on September 3, 1996, by a Schedule A, GS-14 White House Fellow. The appointment was not to exceed September 4, 1997, and was authorized by 5 C.F.R. 213.3102(z), which permits the appointments of not more than 30 individuals designated by the President to be White House Fellows to positions as assistants to top-level federal officials. On February 15, 1997, the incumbent resigned her position at SBA to accept a position in the Director's Office at the U.S. Peace Corps. Project Director for Lender Oversight: This position was newly established in September 1997 as an SES position and was filled by a political appointee under a limited-term SES appointment not to exceed 36 months. According to the job description, the incumbent would be responsible, in part, for modifying the existing interface between SBA, financial institutions, and trade groups in order to ensure that financial transactions were improved, operating efficiently, and in place prior to the 21st century. Prior to this appointment, the incumbent had held an excepted service appointment at OFHEO.
According to SBA’s HR officials, in June 1998 the incumbent returned to that agency to accept a permanent, part-time position. The Project Director for Lender Oversight position remained vacant as of December 1998. Special Assistant: This position was newly established in November 1997 as an excepted service position and was filled by a political appointee under a Schedule C, GS-13 appointment. Prior to this SBA appointment, the individual had been a Staff Assistant at the White House Office. Staff Assistant: This position was newly established in February 1997 as a competitive service position, and, as had been done with the Executive Assistant position, the incumbent was converted from an excepted service appointment at OFHEO to a career-conditional competitive service appointment about 1 month before transferring to this SBA position. Senior Advisor: This position was a Schedule C, GS-15 position filled by a political appointee. The political appointee resigned from SBA in November 1996, and the position was eliminated thereafter. Program Support: This position was established as a competitive service position in January 1997 and was filled by a GS-9 SBA employee who was reassigned from a Program Support Specialist position in SBA’s Office of Communications and Public Liaison. Special Assistant: This position was newly established in October 1997 as an excepted service position and was filled by a Schedule C, GS-14 political appointee who had been in a similar political appointee position at the Department of Housing and Urban Development. In 1996 a different Special Assistant position existed as a competitive service position and was filled by a GS-7 career employee. The employee was reassigned within the same Office to a secretary position (see Staff Assistant positions below in this section). Receptionist: This position was vacant in 1996 and was eliminated thereafter. 
Deputy Chief of Staff: This position was newly created in September 1997 as an SES position. In October 1997, an individual who had been serving as an Office of Advocacy Regional Advocate in San Francisco was converted to a limited-term SES appointment and placed into the Deputy Chief of Staff position. The following year, in October 1998, this individual became the Chief of Staff, and the Deputy Chief of Staff position became vacant. It remained vacant as of December 1998. Staff Assistant: Two competitive service staff assistant (secretary) positions were filled by SBA career employees who were reassigned from other SBA positions. One was reassigned in May 1996 from the Special Assistant position in this Office, and the other was reassigned in February 1997 from an Automation Assistant position within the Office of the Administrator. Chief Operating Officer: This position was newly established in August 1997 as an SES position and was filled by a career SES employee who transferred to this position from the Justice Department's Immigration and Naturalization Service in September 1997. About 1 year later, this individual was detailed to the Office of Management and Budget. According to an SBA HR official, a new Chief Operating Officer had been selected and was soon to be appointed. She is a career senior executive and will be transferring to SBA from the Immigration and Naturalization Service. Program Support Specialist: The incumbent of this GS-9 position resigned from SBA in August 1997 to accept a position in a local public school system. The position was eliminated thereafter. Program Analyst: Two Program Analyst positions were eliminated during the period 1996 through 1998. The career employee incumbents of those positions were reassigned to other SBA positions. One was reassigned to a Deputy District Director position in September 1996, and the other was reassigned to a position in SBA's Office of Congressional and Legislative Affairs.
Associate Director: This position was newly established in October 1997 as an excepted service position and was filled by a Schedule C, GS-12 political appointee who, just prior to this appointment, had been a GS-11 Special Assistant political appointee in SBA's Office of Communications and Public Liaison. Office Automation Assistant: This position was newly created as a GS-6 excepted service position in August 1997 and was filled on a temporary basis by a newly hired SBA employee. In 1998 the employee was promoted to a GS-7 Office Automation Assistant and converted to a career-conditional appointment. The following are GAO's comments on SBA's April 1, 1999, letter. 1. SBA said that we were incorrect in including Schedule A excepted service appointees, appointees whose pay rates were administratively determined, and limited-term SES appointees in our definition of political appointees. We recognize that Schedule A appointees, appointees whose pay rates are administratively determined, and limited-term SES appointees are not traditionally recognized as political appointees. However, such appointments share certain characteristics with traditionally recognized political appointees, such as Schedule C appointees and noncareer SES appointees. Among other things, such appointments can be made noncompetitively as are traditional political appointments, and the appointees serve at the pleasure of the agency head as do traditional political appointees. Because of this sharing of characteristics, we included Schedule A appointees, administratively determined pay rate appointees, and limited-term SES appointees in our definition of political appointee. Although SBA is correct that Schedule A appointees can include students and the disabled as well as attorneys, the Schedule A appointees included in this report were all attorneys. 2. SBA said that a statement we attributed to an SBA official that she was unaware of any agency that uses recruitment bonuses was incorrect.
The official believes she told us that she did not have any information on the effectiveness of recruitment bonuses in any other agencies. We understood from an interview two of our staff held with the official that she was unaware of the use of recruitment bonuses by other agencies. Nevertheless, we have modified the language in the report. Elsewhere in its comments SBA pointed out that it has developed a policy for the use of recruitment bonuses, and we commend SBA for that effort. 3. While commenting on a draft of this report, SBA found and provided missing documentation for the advanced salary setting for an employee hired as a district director. We reviewed the documentation now supplied by SBA and have changed that case in this report. Now, documented justification of the salary setting was provided for all six outside hired District Director appointments that we reviewed. 4. We believe it would be inappropriate to delete the words “appeared to” because our work was based only on the evidence that was available. Further, we do not believe the wording detracts from giving SBA credit for the subject personnel actions. 5. SBA’s comment pertains to our conclusions. It objected to our phrasing of a sentence that says that SBA usually followed appropriate laws and regulations in setting the salaries of political appointees. Elsewhere on the same page we made the statement that with the exception of one case, SBA followed applicable federal laws and regulations when setting the starting salaries. SBA believes the qualifying use of the term “usually” is inappropriate because only one case was cited as an exception. The two statements that SBA referred to were in the conclusion section of our draft report, and we believe “usually” is an appropriate characterization. The statements refer to two different groups of employees. For one group, we were referring to the pay setting of six district director appointees. 
For the second group, we were referring to the advanced pay setting of 141 political appointees. In our draft report, we said that SBA could not provide documentation justifying the advanced pay setting for 1 of 6 district directors and for 31 of the political appointees. While commenting on the draft, SBA found and provided missing documentation for the 1 district director and for several of the 31 political appointees. We have changed the report accordingly. However, supporting documentation is still missing for the advanced pay rates for 11 political appointees, and we believe our characterization that SBA usually followed appropriate federal laws and regulations is accurate.
Pursuant to a congressional request, GAO reviewed events related to personnel reassignments, appointments, and activities at the Small Business Administration (SBA). GAO noted that: (1) SBA made hundreds of appointments during the 1990s using competitive procedures for career appointments, special noncompetitive hiring authority for the Office of Advocacy, and political appointment procedures for other appointments; (2) for the appointments that GAO reviewed, GAO found that SBA adhered to the different procedural requirements; (3) for example, although 6 of the 42 District Director appointees were hired from outside SBA to the career positions, and 2 of them had political backgrounds, GAO found nothing procedurally amiss in the hiring process; (4) GAO determined, however, that in several cases SBA did not follow applicable federal regulations when setting the starting salary at a rate higher than the minimum rate for the grade; (5) SBA could not provide the documentation required by federal regulations to support the advanced salary settings; (6) SBA has developed draft procedures that, if properly implemented, should help prevent such situations from recurring; (7) GAO also determined that SBA poorly controlled its interagency detailing of employees, with the result that cost reimbursements were not always being collected; and (8) SBA officials advised GAO that they are developing new procedures to better control interagency details and collections of reimbursements.
In September 2003, we reported that the Army and Air Force did not comply with DOD's force health protection and surveillance requirements for many servicemembers deploying in support of OEF in Central Asia and OJG in Kosovo at the installations we visited. Specifically, our review disclosed problems with the Army and Air Force's implementation of DOD's force health protection and surveillance requirements in the following areas:

Deployment health assessments. Significant percentages of Army and Air Force servicemembers were missing one or both of their pre- and post-deployment health assessments and, when health assessments were conducted, as many as 45 percent of them were not done within the required time frames.

Immunizations and other pre-deployment requirements. Based on the documentation we reviewed, as many as 46 percent of servicemembers in our samples were missing one of the required pre-deployment immunizations, and as many as 40 percent were missing a current tuberculosis screening at the time of their deployment. Up to 29 percent of the servicemembers in our samples had blood samples in the repository older than the required limit of 1 year at the time of deployment.

Completeness of medical records and centralized data collection. Servicemembers' permanent medical records at the Army and Air Force installations we visited did not always include documentation of the completed health assessments that we found at AMSA and at the U.S. Special Operations Command. In one sample, none of the pre-deployment health assessments were documented in the servicemember medical records that we reviewed. Furthermore, our review disclosed that the AMSA database lacked documentation of many health assessments and immunizations that we found in the servicemembers' medical records at the installations visited.

We also wrote in our 2003 report that DOD did not have oversight of departmentwide efforts to comply with health surveillance requirements.
There was no effective quality assurance program at the Office of the Assistant Secretary of Defense for Health Affairs or at the Offices of the Surgeons’ General of the Army or Air Force that helped ensure compliance with force health protection and surveillance policies. We believed that the lack of such a system was a major cause of the high rate of noncompliance we found at the installations we visited, and thus recommended that the department establish an effective quality assurance program to ensure that the military services comply with the force health protection and surveillance requirements for all servicemembers. The department concurred with our recommendation. “(a) SYSTEM REQUIRED—The Secretary of Defense shall establish a system to assess the medical condition of members of the armed forces (including members of the reserve components) who are deployed outside the United States or its territories or possessions as part of a contingency operation (including a humanitarian operation, peacekeeping operation, or similar operation) or combat operation. “(b) ELEMENTS OF SYSTEM—The system described in subsection (a) shall include the use of predeployment medical examinations and postdeployment medical examinations (including an assessment of mental health and the drawing of blood samples) to accurately record the medical condition of members before their deployment and any changes in their medical condition during the course of their deployment. The postdeployment examination shall be conducted when the member is redeployed or otherwise leaves an area in which the system is in operation (or as soon as possible thereafter). 
“(c) RECORDKEEPING—The results of all medical examinations conducted under the system, records of all health care services (including immunizations) received by members described in subsection (a) in anticipation of their deployment or during the course of their deployment, and records of events occurring in the deployment area that may affect the health of such members shall be retained and maintained in a centralized location to improve future access to the records. “(d) QUALITY ASSURANCE—The Secretary of Defense shall establish a quality assurance program to evaluate the success of the system in ensuring that members described in subsection (a) receive predeployment medical examinations and postdeployment medical examinations and that the recordkeeping requirements with respect to the system are met.” As set forth above, these provisions require the use of pre-deployment and post-deployment medical examinations to accurately record the medical condition of servicemembers before deployment and any changes during their deployment. In a June 30, 2003, correspondence with GAO, the Assistant Secretary of Defense for Health Affairs stated that “it would be logistically impossible to conduct a complete physical examination on all personnel immediately prior to deployment and still deploy them in a timely manner.” Therefore, DOD required both pre- and post-deployment health assessments for servicemembers who deploy for 30 or more continuous days to a land-based location outside the United States without a permanent U.S. military treatment facility. Both assessments use a questionnaire designed to help military healthcare providers in identifying health problems and providing needed medical care. The pre-deployment health assessment is generally administered at the home station before deployment, and the post-deployment health assessment is completed either in theater before redeployment to the servicemember’s home unit or shortly upon redeployment. 
As a component of medical examinations, the statute quoted above also requires that blood samples be drawn before and after a servicemember’s deployment. DOD Instruction 6490.3, August 7, 1997, requires that a pre-deployment blood sample be obtained within 12 months of the servicemember’s deployment. However, it requires the blood samples be drawn upon return from deployment only when directed by the Assistant Secretary of Defense for Health Affairs. According to DOD, the implementation of this requirement was based on its judgment that the Human Immunodeficiency Virus serum sampling taken independent of deployment actions is sufficient to meet both pre- and post-deployment health needs, except that more timely post-deployment sampling may be directed when based on a recognized health threat or exposure. Prior to April 2003, DOD did not require a post-deployment blood sample for servicemembers supporting the OEF and OJG deployments. In April 2003, DOD revised its health surveillance policy for blood samples and post-deployment health assessments. Effective May 22, 2003, the services were required to draw a blood sample from each redeploying servicemember no later than 30 days after arrival at a demobilization site or home station. According to DOD, this requirement for post-deployment blood samples was established in response to an assessment of health threats and national interests associated with current deployments. The department also revised its policy guidance for enhanced post-deployment health assessments to gather more information from deployed servicemembers about events that occurred during a deployment. 
More specifically, the revised policy requires that a trained health care provider conduct a face-to-face health assessment with each returning servicemember to ascertain (1) the individual's responses to the health assessment questions on the post-deployment health assessment form; (2) the presence of any mental health or psychosocial issues commonly associated with deployments; (3) any special medications taken during the deployment; and (4) concerns about possible environmental or occupational exposures. The overall record of the military services in meeting force health protection and surveillance system requirements for OIF was mixed and varied by service, by installation visited, and by specific policy requirement; however, our data show much better compliance with these requirements at the Army and Air Force installations we reviewed compared to the installations in our earlier review of OEF/OJG. Of the installations reviewed for this report, the Marine Corps generally had lower levels of compliance than the other services. None of the services fully complied with all of the force health protection and surveillance system requirements, which include completing pre- and post-deployment health assessments, receiving required immunizations, and meeting pre-deployment requirements related to tuberculosis screening and pre- and post-deployment blood samples. Also, the services did not fully comply with requirements that servicemembers' permanent medical records include required health-related information and that DOD's centralized database include documentation of servicemember health-related information. Servicemembers in our review at the Army and Air Force installations were generally missing small percentages of pre-deployment health assessments, as shown in figure 1.
In contrast, pre-deployment health assessments were missing for an estimated 63 percent of the servicemembers at one Marine Corps installation and for 27 percent at the other Marine Corps installation visited. Similarly, the Navy installation we visited was missing pre-deployment health assessments for about 24 percent of the servicemembers; however, we note that the pre-deployment health assessments reviewed for Navy servicemembers were completed prior to June 1, 2003, and may not reflect improvements arising from increased emphasis following our prior review of the Army and Air Force’s compliance for OEF/OJG. At three Army installations we visited, we also analyzed the extent to which pre-deployment health assessments were completed for those servicemembers who re-deployed back to their home unit after June 1, 2003. Servicemembers associated with these re-deployment samples deployed in support of OIF prior to June 1, 2003. For two of these Army installations—Fort Eustis and Fort Campbell—we estimate that less than 1 percent of the servicemembers were missing pre-deployment health assessments. However, approximately 39 percent of the servicemembers that redeployed back to Fort Lewis on or after June 1, 2003, were missing their pre-deployment health assessments. Post-deployment health assessments were missing for small percentages of servicemembers, except at one of the Marine Corps installations we visited, as shown in figure 2. Although the Army provides for waivers for longer time frames, DOD policy requires that servicemembers complete a pre-deployment health assessment form within 30 days of their deployment and a post-deployment health assessment form within 5 days upon redeployment back to their home station. For consistency and comparability between services, our analysis uses the DOD policy for reporting results. 
These time frames were established to allow time to identify and resolve any health concerns or problems that may affect the ability of the servicemember to deploy, and to promptly identify and address any health concerns or problems that may have arisen during the servicemember's deployment. For servicemembers that had completed pre-deployment health assessments, we found that many assessments were not completed on time in accordance with requirements. More specifically, we estimate that pre-deployment health assessments were not completed on time for 47 percent of the active duty servicemembers at Fort Lewis; for 41 percent of the active duty servicemembers and 96 percent of the Army National Guard unit at Fort Campbell; and for 43 percent of the servicemembers at Camp Lejeune and 29 percent at Camp Pendleton. For the most part, small percentages, ranging from 0 to 5 percent, of the post-deployment health assessments were not completed on time at the installations visited. The exception was at Fort Lewis, where we found that about 21 percent of post-deployment health assessments for servicemembers were not completed on time. DOD policy also requires that pre-deployment and post-deployment health assessments be reviewed immediately by a health care provider to identify any medical care needed by the servicemember. Except for servicemembers at one of the two Marine Corps installations visited, only small percentages of the pre- and post-deployment health assessments, ranging from 0 to 6 percent, were not reviewed by a health care provider. At Camp Pendleton, we found that a health care provider did not review 33 percent of the pre-deployment health assessments and 21 percent of the post-deployment health assessments for its servicemembers.
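The timeliness windows described above (a pre-deployment assessment within 30 days before deployment, a post-deployment assessment within 5 days after return to home station) amount to simple date arithmetic. The sketch below is a hypothetical illustration of that kind of compliance check, not GAO's actual analysis code; the function names, window constants, and example dates are ours.

```python
from datetime import date

# Hypothetical compliance-window check (illustrative only).
PRE_WINDOW_DAYS = 30   # assessment no earlier than 30 days before deployment
POST_WINDOW_DAYS = 5   # assessment no later than 5 days after redeployment

def pre_assessment_on_time(assessment: date, deployment: date) -> bool:
    """True if the pre-deployment assessment falls within the 30-day window."""
    delta = (deployment - assessment).days
    return 0 <= delta <= PRE_WINDOW_DAYS

def post_assessment_on_time(assessment: date, redeployment: date) -> bool:
    """True if the post-deployment assessment falls within 5 days of return."""
    delta = (assessment - redeployment).days
    return 0 <= delta <= POST_WINDOW_DAYS

# An assessment completed 45 days before deployment is out of window.
print(pre_assessment_on_time(date(2003, 4, 1), date(2003, 5, 16)))   # False
# An assessment completed 2 days after redeployment is within the window.
print(post_assessment_on_time(date(2003, 6, 3), date(2003, 6, 1)))   # True
```

A check of this shape, applied to each sampled record, yields the "not completed on time" percentages reported for the installations visited.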
Noncompliance with the requirements for pre-deployment health assessments may result in servicemembers being deployed with unaddressed health problems or concerns. Also, failure to complete post-deployment health assessments may delay appropriate medical follow-up for a health problem or concern that arose during or following the deployment. Based on our samples, the services did not fully meet immunization and other health requirements for OIF deployments, although all servicemembers in our sample had received at least one anthrax immunization before they returned from the deployment, as required. Almost all of the servicemembers in our samples had a pre-deployment blood sample in the DOD Serum Repository, but frequently the blood sample was older than the 1-year requirement. The services' record with regard to post-deployment blood sample draws was mixed. The U.S. Central Command required the following pre-deployment immunizations for all servicemembers who deployed to Southwest Asia in support of OIF: hepatitis A (two-shot series); measles, mumps, and rubella; polio; tetanus/diphtheria within the last 10 years; typhoid within the last 5 years; and influenza within the last 12 months. Based on the documentation we reviewed, the estimated percentage of servicemembers receiving all of the required pre-deployment immunizations ranged from 52 percent to 98 percent at the installations we visited (see fig. 3). The percentage of servicemembers missing only one of the pre-deployment immunizations required for the OIF deployment ranged from 2 percent to 43 percent at the installations we visited. Furthermore, the percentage of servicemembers missing 2 or more of the required immunizations ranged from 0 percent to 11 percent. Figure 4 indicates that 3 to about 64 percent of the servicemembers at the installations visited were missing a current tuberculosis screening at the time of their deployment.
A tuberculosis screening is deemed “current” if it occurred within 1 year prior to deployment. Specifically, the Army, Navy, and Marine Corps required servicemembers deploying to Southwest Asia in support of OIF to be screened for tuberculosis within 12 months of deployment. The Air Force requirement for tuberculosis screening depends on the servicemember’s occupational specialty; therefore we did not examine tuberculosis screening for servicemembers in our sample at Moody Air Force Base due to the difficulty of determining occupational specialty for each servicemember. Although not required as pre-deployment immunizations, U.S. Central Command policies require that servicemembers deployed to Southwest Asia in support of OIF receive a smallpox immunization and at least one anthrax immunization either before deployment or while in theater. For the servicemembers in our samples at the installations visited, we found that all of the servicemembers received at least one anthrax immunization in accordance with the requirement. Only small percentages of servicemembers at two of the three Army installations, the Air Force installation, and the Navy installation visited did not receive the required smallpox immunization. However, an estimated 18 percent of the servicemembers at Fort Lewis, 8 percent at Camp Lejeune, and 27 percent at Camp Pendleton did not receive the required smallpox immunization. U.S. Central Command policies also require that deploying servicemembers have a blood sample in the DOD Serum Repository not older than 12 months prior to deployment. Almost all of the servicemembers in our review had a pre-deployment blood sample in the DOD Serum Repository, but frequently the blood samples were older than the 1-year requirement. As shown in table 1 below, 14 percent of servicemembers at Camp Pendleton had blood samples in the repository older than 1 year. 
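Several of the requirements above are currency windows of this kind: a tuberculosis screening must fall within 1 year of deployment, and a pre-deployment serum sample must be no older than 12 months. As an illustration only (the function and its name are our own, not part of any DOD system), such a check reduces to simple date arithmetic:

```python
from datetime import date

# Hypothetical helper: an event (e.g., a blood draw or tuberculosis
# screening) is "current" only if it occurred on or before the
# deployment date and within the allowed window before it.
def is_current(event: date, deployment: date, window_days: int = 365) -> bool:
    """Return True if the event is within window_days before deployment."""
    age = (deployment - event).days
    return 0 <= age <= window_days

# A sample drawn 14 months before deployment fails the 1-year requirement.
print(is_current(date(2002, 1, 15), date(2003, 3, 15)))  # False
print(is_current(date(2002, 9, 1), date(2003, 3, 15)))   # True
```

The same sketch, with a 30-day window and the inequality reversed in time, would express the post-deployment blood-draw deadline discussed below.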
Effective May 22, 2003, the services were required to draw a post-deployment blood sample from each redeploying servicemember no later than 30 days after arrival at a demobilization site or home station. Only small percentages of the servicemembers at the Army and Air Force installations visited did not have a post-deployment blood sample drawn. At the Navy and Marine Corps installations visited, the percentages of servicemembers missing post-deployment blood samples ranged from 7 to 19 percent, and the post-deployment blood samples that were available were frequently drawn later than required, as shown in table 2. DOD policy requires that the original completed pre-deployment and post-deployment health assessment forms be placed in the servicemember's permanent medical record and that a copy be forwarded to AMSA. Also, the military services require that all immunizations be documented in the servicemember's medical record. Figure 5 shows that small percentages of the completed health assessments we found at AMSA for servicemembers in our samples were not documented in the servicemember's permanent medical record, ranging from 0 to 14 percent for pre-deployment health assessments and from 0 to 20 percent for post-deployment health assessments. Almost all of the immunizations we found at AMSA for servicemembers in our samples were documented in the servicemember's medical record. Service policies also require documentation in the servicemember's permanent medical record of all visits to in-theater medical facilities. At six of the seven installations we visited, we sampled and examined whether selected in-theater visits to medical providers—such as battalion aid stations for the Army and Marine Corps and expeditionary medical support for the Air Force—were documented in the servicemember's permanent medical record. Both the Air Force and Navy installations used automated systems for recording servicemembers' in-theater visits to medical facilities.
While in-theater visits were documented in these automated systems, we found that 20 of the 40 Air Force in-theater visits we examined at Moody Air Force Base and 6 of the 60 Navy in-theater visits we examined at the Naval Construction Battalion Center were not also documented in the servicemembers' permanent medical records. In contrast, the Army and Marine Corps installations used manual patient sign-in logs for servicemembers' visits to in-theater medical providers and relied exclusively on paper documentation of the in-theater visits in the servicemember's permanent medical record. The results of our review are summarized in table 3. Army and Marine Corps representatives associated with the battalion aid stations we examined commented that the aid stations moved frequently around the theater, increasing the likelihood that paper documentation of the visits might be lost, and that such visits might not always be documented because of the hostile environment. The lack of complete and accurate medical records documenting all medical care for the individual servicemember complicates the servicemember's post-deployment medical care: accurate medical records are essential for the delivery of high-quality medical care and important for epidemiological analysis following deployments. According to DOD health officials, the lack of complete and accurate medical records complicated the diagnosis and treatment of servicemembers who experienced post-deployment health problems that they attributed to their military service in the Persian Gulf in 1990-91. DOD's Theater Medical Information Program (TMIP) has the capability to electronically record and store in-theater patient medical encounter data. However, the Iraq war has delayed implementation of the program. At the request of the services, the operational test and evaluation for TMIP has been delayed until the second quarter of fiscal year 2005.
In addition to the above requirements, Public Law 105-85, 10 U.S.C. 1074f, requires the Secretary of Defense to retain and maintain health-related records in a centralized location for servicemembers who are deployed. This includes records for all medical examinations conducted to ascertain the medical condition of servicemembers before deployment and any changes during their deployment, all health care services (including immunizations) received in anticipation of deployment or during the deployment, and events occurring in the deployment area that may affect the health of servicemembers. A February 2002 Joint Staff memorandum requires the services to forward a copy of the completed pre-deployment and post-deployment health assessments to AMSA for centralized retention. Figure 6 shows the estimated percentage of pre- and post-deployment health assessments in servicemembers’ medical records that were not available in a centralized database at AMSA. Our samples of servicemembers at the installations visited show wide variation by installation regarding pre-deployment health assessments missing from the centralized database, ranging from zero at Fort Lewis to all of the assessments at Camp Lejeune. Post-deployment health assessments were missing for small percentages of servicemembers at the installations visited, except at the Marine Corps installations visited. More specifically, about 26 percent of the post-deployment health assessments at Camp Lejeune and 24 percent at Camp Pendleton were missing from the centralized database. Immunizations missing from the centralized database that we found in the servicemembers’ medical records ranged from 3 to 44 percent for the servicemembers in our samples. DOD officials believe that automation of deployment health assessment forms and recording of servicemember immunizations will improve the completeness of deployment data in the AMSA centralized database, and DOD has ongoing initiatives to accomplish these goals. 
DOD is currently implementing worldwide a comprehensive electronic medical records system, known as the Composite Health Care System II, which includes pre- and post-deployment health assessment forms and the capability to electronically record immunizations given to servicemembers. Also, the Assistant Secretary of Defense for Health Affairs has established a Deployment Health Task Force whose focus includes improving the electronic capture of deployment health assessments. According to DOD, about 40 percent of the Army's pre-deployment health assessments and 50 percent of the post-deployment health assessments sent to AMSA since June 1, 2003, were submitted electronically. DOD officials believe that electronic automation of the deployment health-related information will lessen the burden on installations of forwarding paper copies and reduce the likelihood of information being lost in transit. Although the number of installations we visited was limited and, with the exception of Fort Campbell, the installations differed from those in our previous review, Army and Air Force compliance with force health protection and surveillance policies for active duty servicemembers in OIF appears to be better than at the installations we reviewed for OEF and OJG. To provide perspective, we aggregated the data from all Army and Air Force active duty servicemembers' medical records examined in these two reviews and determined that, for OIF: lower percentages of Army and Air Force servicemembers were missing pre- and post-deployment health assessments; higher percentages received the required pre-deployment immunizations; and lower percentages of deployment health-related documentation were missing from the servicemembers' permanent medical records and from DOD's centralized database.
Because our previous report on compliance with requirements for OEF and OJG focused only on the Army and Air Force, we were unable to make comparisons for the Navy and Marine Corps. Our data indicate that Army and Air Force compliance with requirements for completion of pre- and post-deployment health assessments for OIF appears to be much better than compliance for OEF and OJG at the installations examined in each review. In some cases, the services were in full compliance. As before, we aggregated data from all records examined in the two reviews and determined the following for the Army and Air Force active duty servicemembers we reviewed for OIF compared with those reviewed for OEF/OJG: The percentage of Army servicemembers missing pre-deployment health assessments averaged 14 percent for OIF, contrasted with 45 percent for OEF/OJG. The percentage of Air Force servicemembers missing pre-deployment health assessments was 8 percent for OIF, contrasted with an average of 50 percent for OEF/OJG. The percentage of Army servicemembers missing post-deployment health assessments was 0 percent for OIF, contrasted with an average of 29 percent for OEF/OJG. The percentage of Air Force servicemembers missing post-deployment health assessments was 4 percent for OIF, contrasted with an average of 62 percent for OEF/OJG. Based on our samples, the Army and the Air Force had better compliance with pre-deployment immunization requirements for OIF than for OEF and OJG. The aggregate data from each of our OIF samples indicate that an average of 68 percent of Army active duty servicemembers received all of the required immunizations before deploying for OIF, contrasted with an average of only 35 percent for OEF and OJG. Similarly, 98 percent of Air Force active duty servicemembers received all of the required immunizations before deploying for OIF, contrasted with an average of 71 percent for OEF and OJG.
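One simple way to aggregate per-installation sample results of this kind is to pool the records: total noncompliant records divided by total records examined across all samples. The sketch below is purely illustrative (the installation names and counts are invented, and this pooling is only one of several reasonable aggregation methods):

```python
# Hypothetical pooling of per-installation sample results into one
# aggregate noncompliance rate. Each entry is (noncompliant, examined);
# the numbers are illustrative, not drawn from the report's samples.
samples = {
    "Installation A": (14, 100),
    "Installation B": (45, 150),
}

noncompliant = sum(n for n, _ in samples.values())
examined = sum(t for _, t in samples.values())
print(f"aggregate: {noncompliant / examined:.1%}")  # 59/250 -> 23.6%
```

Pooling weights each installation by its sample size; averaging the installation-level percentages instead would weight each installation equally.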
The percentages of Army and Air Force active duty servicemembers missing two or more immunizations also appear to be markedly lower for OIF, as illustrated in table 4. Our data indicate that the Army's and Air Force's compliance with requirements for completeness of servicemember medical records and of DOD's centralized database at AMSA for OIF appears to be significantly better than compliance for OEF and OJG. Lower overall percentages of deployment health-related documentation were missing in servicemembers' permanent medical records and at AMSA. We aggregated the data from each of our samples and depicted the results in tables 5 and 6. The data appear to indicate that, for active duty servicemembers, the Army and the Air Force have made significant improvements in documenting servicemember medical records. These data also appear to indicate that, overall, both services have made encouraging improvements in retaining health-related records in DOD's centralized database at AMSA, although not quite to the extent exhibited in their efforts to document servicemember medical records. In response to congressional mandates and a GAO recommendation, DOD established a deployment health quality assurance program in January 2004 to ensure compliance with force health protection and surveillance requirements; implementation of the program is ongoing. DOD officials believe that their quality assurance program has improved the services' compliance with requirements. However, we did not evaluate the effectiveness of DOD's deployment health quality assurance program because of the relatively short time it has been in place. Section 765 of Public Law 105-85 (10 U.S.C. 1074f) requires the Secretary of Defense to establish a quality assurance program to evaluate the success of DOD's system for ensuring that members receive pre-deployment and post-deployment medical examinations and that recordkeeping requirements are met.
In May 2003, the House Committee on Armed Services directed the Secretary of Defense to take measures to improve oversight of and compliance with force health protection and surveillance requirements. Specifically, in its report accompanying the Fiscal Year 2004 National Defense Authorization Act, the Committee directed the Secretary of Defense to establish a quality control program to assess implementation of the force health protection and surveillance program. In January 2004, the Assistant Secretary of Defense for Health Affairs issued policy and program guidance for the DOD Deployment Health Quality Assurance Program. DOD's quality assurance program requires: Periodic reporting on pre- and post-deployment health assessments. AMSA is required to provide (at a minimum) monthly reports to the Deployment Health Support Directorate (Directorate) on deployment health data. AMSA is providing the Directorate and the services with weekly reports on post-deployment health assessments and publishes bimonthly updates on pre- and post-deployment health assessments. Periodic reporting on service-specific deployment health quality assurance programs. The services are required to submit (at a minimum) quarterly reports to the Directorate on the status and findings of their respective deployment health quality assurance programs. Each service has provided the required quarterly reports on its respective quality assurance program. Periodic visits to military installations to assess deployment health programs. The program requires joint visits by representatives from the Directorate and from service medical departments to military installations for the purpose of validating the services' deployment health quality assurance reporting. As of September 2004, Directorate officials had accompanied service medical personnel to an Army, an Air Force, and a Marine Corps installation for medical records review.
Directorate officials envision continuing quarterly installation visits in 2005, with possible expansion to include reserve and guard sites. The services are at different stages of developing their deployment health quality assurance programs. Following the issuance of our September 2003 report and subsequent testimony before the House Committee on Veterans' Affairs in October 2003, the Surgeon General of the Army directed that the U.S. Army Center for Health Promotion and Preventive Medicine (the Center) lead reviews of servicemember medical records at selected Army installations to assess compliance with force health protection and surveillance requirements. As of September 2004, the Center had conducted reviews at 10 Army installations. Meanwhile, the Center developed the Army's deployment health quality assurance program, which closely parallels DOD's quality assurance program. According to a Center official, this quality assurance program is currently under review by the Surgeon General. In the Air Force, public health officers at each installation report monthly compliance rates with force health protection and surveillance requirements to the office of the Surgeon General of the Air Force. These data are monitored by officials in the office of the Air Force Surgeon General for trends and for identification of potential problems. Air Force Surgeon General officials told us that, as of May 2004, the Air Force Inspector General's periodic health services inspections—conducted every 18 to 36 months at each Air Force installation—include an examination of compliance with deployment health surveillance requirements. Also, the Air Force Audit Agency is planning to examine in 2004 whether AMSA received all of the required deployment health assessments and blood samples for servicemembers who deployed from several Air Force installations.
According to an official in the office of the Surgeon General of the Navy, no decisions have been reached regarding whether periodic audits of servicemember medical records will be conducted to assess compliance with DOD requirements. DOD's April 2003 enhanced post-deployment health assessment program expanded the requirement for post-deployment health assessments and post-deployment blood samples to all sea-based personnel in theater supporting combat operations for Operations Iraqi Freedom and Enduring Freedom. Navy type commanders (e.g., for surface ships, submarines, and aircraft squadrons) are responsible for implementing the program. The Marine Corps has developed a deployment health assessment quality assurance program that is now under review by the Commandant of the Marine Corps. The program reemphasizes the requirements for deployment health assessments and blood samples and requires each unit to track and report the status of meeting these requirements for its servicemembers. At the installations we visited, we observed that the Army and Air Force had centralized quality assurance processes in place that extensively involved installation medical personnel in examining whether DOD's force health protection and surveillance requirements were met for deploying and redeploying servicemembers. In contrast, we observed that the Marine Corps installations did not have well-defined quality assurance processes for ensuring that the requirements were met for servicemembers. The Navy installation visited did not have a formal quality assurance program; compliance depended largely on the initiative of the assigned medical officer. We believe that the lack of effective quality assurance processes at the Marine Corps installations contributed to lower rates of compliance with force health protection and surveillance requirements.
In our September 2003 report, we recommended that DOD establish an effective quality assurance program, and we continue to believe that implementation of such a program could help the Marine Corps improve its compliance with force health protection and surveillance requirements. In commenting on a draft of this report, the Assistant Secretary of Defense for Health Affairs concurred with the findings of the report. He suggested that the word “Appears” be removed from the title of the report to more accurately reflect improvements in compliance with force health protection and surveillance requirements for OIF. We do not agree with this suggestion because the number of installations we visited for OIF was limited and, with the exception of Fort Campbell, different from those in our previous review for OEF/OJG. As pointed out in the report, the data for OIF were in some instances limited to only one sample at one installation. We believe that it is important for the reader to recognize the limitations of this comparison. The Assistant Secretary also commented that the department is aware of variations in progress among the services and is committed to demonstrating full compliance through the continued application of aggressive quality assurance measures. He further commented that the department is focusing on and supporting recent policy efforts by the Marine Corps to improve its deployment health quality assurance program, and that plans have been initiated to conduct a joint quality assurance visit to Camp Pendleton, Calif., in early 2005, following the implementation of an improved quality assurance program and the return of significant numbers of Marines currently deployed in support of OIF. The department's written comments are incorporated in their entirety in appendix II. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; and the Commandant of the Marine Corps.
We will also make copies available to others upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5559 or Clifton Spruill at (202) 512-4531. Key contributors to this report are listed in appendix III. To meet our objectives, we interviewed responsible officials and reviewed pertinent documents, reports, and information related to force health protection and deployment health surveillance requirements obtained from officials at the Office of the Assistant Secretary of Defense for Health Affairs; the Deployment Health Support Directorate; the National Guard Bureau; and the Offices of the Surgeons General for the Army, Air Force, and Navy Headquarters in the Washington, D.C., area. We also performed additional work at AMSA and the U.S. Central Command. To determine the extent to which the military services were meeting the Department of Defense's (DOD) force health protection and surveillance requirements for servicemembers deploying in support of Operation Iraqi Freedom (OIF), we identified DOD's and each service's overall deployment health surveillance policies. We also obtained the specific force health protection and surveillance requirements, imposed by the U.S. Central Command, applicable to all servicemembers deploying to Southwest Asia in support of OIF. We tested the implementation of these requirements at selected Army, Air Force, Marine Corps, and Navy installations. To identify military installations within each service where we would test implementation of the policies, we reviewed deployment data showing, by service and military installation, the units that deployed to or redeployed from Southwest Asia in support of OIF from June 1, 2003, through November 30, 2003.
After examining these data, we selected the following military installations for review of selected servicemembers' medical records, because these installations had among the largest numbers of servicemembers who deployed or redeployed to their home unit from June 1, 2003, through November 30, 2003: Fort Lewis, Wash.; Fort Campbell, Ky.; Fort Eustis, Va.; Camp Lejeune, N.C.; Camp Pendleton, Calif.; Moody Air Force Base, Ga.; and the Naval Construction Battalion Center, Gulfport, Miss. In comparing compliance rates for OIF with those for Operation Enduring Freedom (OEF) and Operation Joint Guardian (OJG), we reviewed medical records for active duty Army and Air Force servicemembers at selected installations. For OIF, we reviewed active duty Army servicemembers' medical records at Fort Campbell and Fort Lewis and active duty Air Force servicemembers' records at Moody Air Force Base. For OEF and OJG, we reviewed active duty Army servicemembers' medical records at Fort Drum and Fort Campbell and active duty Air Force servicemembers' records at Travis Air Force Base and Hurlburt Field. Due to the length of Army deployments in support of OIF, we sampled two groups at the military installations: (1) servicemembers who deployed within the selected time frame and (2) servicemembers who redeployed to their home unit within the selected time frame. For the selected military installations, we requested that officials in the Deployment Health Support Directorate, in the services' Surgeon General offices, or at the installations provide a listing of those active duty servicemembers who deployed to Southwest Asia in support of OIF for 30 or more continuous days to areas without permanent U.S. military treatment facilities, or who redeployed back to the military installation, from June 1, 2003, through November 30, 2003.
For Army Reserve and National Guard servicemembers, we requested listings of those servicemembers who deployed during the period June 1, 2003, through January 31, 2004, and those servicemembers who redeployed from Southwest Asia in support of OIF from June 1, 2003, through December 31, 2003. For Marine Corps servicemembers at Camp Lejeune and Camp Pendleton, we modified our selection criteria to draw one sample because a number of servicemembers met the definition for both deployment and redeployment within our given time frames. Specifically, servicemembers at these installations had both deployed to Southwest Asia in support of OIF and redeployed back to their home unit from June 1, 2003, through November 30, 2003, staying in theater for 30 or more continuous days. For our medical records review, we selected samples of servicemembers at each installation. Five of our servicemember samples were small enough that we reviewed the entire universe of medical records for the respective location. For the other locations, we drew probability samples from the larger universe. In all cases, records that were not available for review were researched in more detail by medical officials to determine the reason the medical record was not available, so that the record could be deemed either in-scope or out-of-scope. For installations in which a sample was drawn, all out-of-scope cases were then replaced with another randomly selected record until the required sample size was met. For installations in which the universe was reviewed, the total number in the universe was adjusted accordingly. There were four reasons for which a medical record was unavailable and subsequently deemed out-of-scope for purposes of this review: 1. Charged to patient. When a patient goes to be seen in a clinic (on-post or off-post), the medical record is physically given to the patient; the procedure is that the medical record will be returned following the clinic visit. 2. Expired term of service. The servicemember has separated from the military, and the medical record has been sent to St. Louis, Missouri, and is therefore not available for review. 3. Permanent change of station. The servicemember is still in the military but has transferred to another base; the medical record transfers with the servicemember. 4. Temporary duty off site. The servicemember has left the military installation but is expected to return; the temporary duty is long enough to warrant the medical record accompanying the servicemember. There were a few instances in which medical records could not be accounted for by the medical records department. These records were deemed in-scope, counted as nonresponses, and not replaced in the sample. The number of servicemembers in our samples and the applicable universe of servicemembers for the OIF deployments at the installations visited are shown in table 7. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn from the sampled installations. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report includes the true value in the study population. The 95 percent confidence intervals for percentage estimates are presented along with the estimates in figures and tables in this report.
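For a sample proportion, a 95 percent confidence interval of the kind described above is conventionally computed as p plus or minus 1.96 standard errors, with a finite population correction when the sample is a large share of the universe. The sketch below shows that standard calculation; it is our own illustration and not the estimation procedure actually used for this report:

```python
import math

def proportion_ci(successes, n, pop=None, z=1.96):
    """95% confidence interval for a sample proportion.

    If the universe size `pop` is given, applies the finite population
    correction, which tightens the interval when the sample is a large
    fraction of the universe.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    if pop is not None:
        se *= math.sqrt((pop - n) / (pop - 1))  # finite population correction
    half = z * se
    return max(0.0, p - half), min(1.0, p + half)

# Illustrative only: 47 noncompliant records in a sample of 100
low, high = proportion_ci(47, 100)
print(f"{low:.1%} to {high:.1%}")  # roughly 37% to 57%
```

With a universe of, say, 150 records, the corrected interval is noticeably narrower than the uncorrected one, which is why the correction matters for the small universes reviewed here.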
At each sampled location, we examined servicemember medical records for evidence of the following force health protection and deployment health-related documentation required by DOD's force health protection and deployment health surveillance policies: pre- and post-deployment health assessments, as applicable; tuberculosis screening test (within 1 year of deployment); pre-deployment immunizations, consisting of hepatitis A; influenza (within 1 year of deployment); measles, mumps, and rubella; polio; tetanus-diphtheria (within 10 years of deployment); and typhoid (within 5 years of deployment); and immunizations required prior to deployment or in theater, consisting of anthrax (at least one immunization) and smallpox. To provide assurance that our review of the selected medical records was accurate, we requested that the installations' medical personnel reexamine those medical records that were missing required health assessments or immunizations, and we adjusted our results where documentation was subsequently identified. We also requested that installation medical personnel check all possible sources for missing pre- and post-deployment health assessments and immunizations. These sources included automated immunization systems: the Army's Medical Protection System (MEDPROS), the Navy's Shipboard Non-tactical Automated Data Processing Automated Medical System (SAMS), and the Air Force's Comprehensive Immunization Tracking Application (CITA). In those instances where we did not find a deployment health assessment, we concluded that the assessment was not completed. Our analyses of the immunization records were based on our examination of servicemembers' permanent medical records and of immunizations recorded in the Army's MEDPROS, the Navy's SAMS, and the Air Force's CITA.
In analyzing our review results at each location, we considered documentation from all identified sources (e.g., the servicemember's medical record, AMSA, and immunization tracking systems) in presenting data on compliance with deployment health surveillance policies. To identify whether required blood samples were drawn for servicemembers prior to and after deployments, we requested that the AMSA staff query the DOD Serum Repository to identify whether the servicemembers in our samples had a blood sample in the repository not older than 1 year prior to their deployment, and to provide the dates that post-deployment blood samples were drawn. To determine whether the services were documenting in-theater medical interventions in servicemembers' medical records, we requested, at six of the seven installations visited for medical records review, the patient sign-in logs for in-theater medical care providers—such as the Army's and Marine Corps' battalion aid stations—when they were deployed to Southwest Asia in support of OIF. At the Army and Marine Corps locations, we randomly selected sick call visits from non-automated patient sign-in logs; at Moody Air Force Base we randomly selected visits from the automated Global Expeditionary Medical Support (GEMS) system, and at the Naval Construction Battalion Center from the automated SAMS. We did not attempt to judge the importance of the patient visit in making our selections. For the selected patient visits, we then reviewed the servicemember's medical record for any documentation—such as the Standard Form 600—of the servicemember's visit to the in-theater medical care providers. To determine whether the services' deployment health-related records were retained and maintained in a centralized location, we requested that officials at AMSA query the AMSA database for the servicemembers included in our samples at the selected installations.
For servicemembers in our samples, AMSA officials provided us with copies of deployment health assessments and immunization data found in the AMSA database. We analyzed the completeness of the AMSA database by comparing the deployment health assessments and the pre-deployment immunization data we found during our medical records review with those in the AMSA database. To identify the completeness of servicemember medical records, we then compared the data identified from the AMSA queries with the data we found during our medical records review. To determine whether DOD has established an effective quality assurance program for ensuring that the military services comply with force health protection and surveillance policies, we interviewed officials within the Deployment Health Support Directorate, the offices of the services’ Surgeons General, and at the installations we visited for medical records review about their internal management control processes. We also reviewed quality assurance policies and other documentation for ensuring compliance with force health protection and surveillance requirements. We took several steps to ensure the reliability of the data we used in our review. DOD electronic lists of servicemembers who either deployed or redeployed within certain time frames were used to generate random samples for which primary data were then collected. It was our premise that no systematic errors existed in the inclusion or exclusion of cases in the database and that the randomness of the generated sample controlled for which records were selected for review. The final universe on which sample size was based was adjusted to account for out-of-scope cases. In addition, we took mitigating measures to (1) avoid relying exclusively on the automated databases and (2) identify and resolve inconsistencies, as described below: Personnel Deployment Databases.
Because of concerns about the reliability of deployment data maintained by the Defense Manpower Data Center, we requested, in consultation with officials at the Deployment Health Support Directorate, personnel deployment data from the military installations selected for medical records review. DOD officials believed that the military installations were the most reliable sources for accurate personnel deployment data because servicemembers are deployed from, or redeployed to, these sites. However, we decided to be alert for indications of errors as we reviewed servicemember medical records and to investigate situations that appeared to be questionable. Automated Immunization Databases. Service policies require that immunizations be documented in the servicemember’s medical record. For the most part, immunizations are documented on Department of Defense Form 2766. The services also use automated immunization systems—the Army uses MEDPROS, the Air Force uses CITA, and the Navy/Marine Corps use SAMS. We did not rely exclusively on either of these sources (Department of Defense Form 2766 or automated immunization systems). For servicemembers in our samples, we reviewed both the servicemembers’ medical records and queries of the services’ automated immunization system for each servicemember. If we found documentation of the required immunizations in either source, we considered the immunization documented because it was evident that the immunization was given. AMSA Centralized Database. DOD policy requires that pre- and post-deployment health assessments be documented in the servicemember’s medical record and also that a copy be sent to AMSA for inclusion in the centralized database. We did not rely exclusively on the AMSA centralized database for determining compliance with force health protection and surveillance policies. 
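The decision rule just described, crediting an immunization as documented if evidence of it appears in any one source, is a logical OR across sources. A minimal sketch (the set-based record representation is hypothetical, used only to illustrate the rule):

```python
def immunization_documented(requirement: str, *sources: set) -> bool:
    """Credit a required immunization as documented if evidence of it
    appears in any source (e.g., the paper medical record or a query
    of MEDPROS, SAMS, or CITA)."""
    return any(requirement in source for source in sources)

# An anthrax dose found only in the automated system still counts.
paper_record = {"smallpox"}
automated_query = {"smallpox", "anthrax"}
print(immunization_documented("anthrax", paper_record, automated_query))
# True
```

The same OR rule applies to pre- and post-deployment health assessments, with the servicemember's medical record and the AMSA centralized database as the two sources.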
For servicemembers in our samples, we reviewed both the servicemember’s medical record and queries of the AMSA centralized database for health assessments and immunizations for the servicemember. If we found documentation of the required pre- or post-deployment health assessments or immunizations in either source, we considered the servicemember as having met the requirement for health assessments and immunizations. Our review was performed from November 2003 through August 2004 in accordance with generally accepted government auditing standards. In addition to the individual named above, Steve Fox, Rebecca Beale, Margaret Holihan, Lynn Johnson, Susan Mason, William Mathers, Clara Mejstrik, Christopher Rice, Terry Richardson, Kristine Braaten, Grant Mallie, Jean McSween, Julia Matta, John Van Schaik, and R.K. Wild made key contributions to this report.

A lack of servicemember health and deployment data hampered investigations into the nature and causes of illnesses reported by many servicemembers following the 1990-91 Persian Gulf War. Public Law 105-85, enacted in November 1997, required the Department of Defense (DOD) to establish a system to assess the medical condition of servicemembers before and after deployments. Following its September 2003 report examining Army and Air Force compliance with DOD's force health protection and surveillance policies for Operation Enduring Freedom (OEF) and Operation Joint Guardian (OJG), GAO was asked in November 2003 to also determine (1) the extent to which the services met DOD's policies for Operation Iraqi Freedom (OIF) and, where applicable, compare results with OEF/OJG; and (2) what steps DOD has taken to establish a quality assurance program to ensure that the military services comply with force health protection and surveillance policies.
Overall compliance with DOD's force health protection and surveillance policies for servicemembers who deployed in support of OIF varied by service, installation, and policy requirement. Such policies require that servicemembers be assessed before and after deploying overseas and receive certain immunizations, and that health-related documentation be maintained in a centralized location. GAO reviewed 1,862 active duty and selected reserve component servicemembers' medical records from a universe of 4,316 at selected military service installations participating in OIF. Overall, Army and Air Force compliance for sampled servicemembers for OIF appears much better than for OEF and OJG. For example, (1) lower percentages of Army and Air Force servicemembers were missing pre- and post-deployment health assessments for OIF; (2) higher percentages of Army and Air Force servicemembers received required pre-deployment immunizations for OIF; and (3) lower percentages of deployment health-related documentation were missing in servicemembers' permanent medical records and at DOD's centralized database for OIF. The Marine Corps installations examined generally had lower levels of compliance than the other services; however, GAO did not review medical records from the Marines or Navy for OEF and OJG. Noncompliance with the requirements for health assessments may result in deployment of servicemembers with existing health problems or concerns that are unaddressed. It may also delay appropriate medical follow-up for a health problem or concern that may have arisen during or after deployment. In January 2004, DOD established an overall deployment quality assurance program for ensuring that the services comply with force health protection and surveillance policies, and implementation of the program is ongoing.
DOD's quality assurance program requires (1) reporting from DOD's centralized database on each service's submission of required pre-deployment and post-deployment health assessments for deployed servicemembers, (2) reporting from each service regarding the results of the individual service's deployment quality assurance program, and (3) joint DOD and service representative reviews at selected military installations to validate the service's deployment health quality assurance reporting. DOD officials believe that their quality assurance program has improved the services' compliance with requirements. However, the services are at different stages of implementing their own quality assurance programs as mandated by DOD. At the installations visited, GAO analysts observed that the Army and Air Force had centralized quality assurance processes in place that extensively involved medical personnel examining whether DOD's force health protection and surveillance requirements were met for deploying/re-deploying servicemembers. In contrast, GAO analysts observed that the Marine Corps installations did not have well-defined quality assurance processes for ensuring that requirements were met for servicemembers.
Federal agencies and our nation’s critical infrastructures—such as power distribution, water supply, telecommunications, national defense, and emergency services—rely extensively on computerized information systems and electronic data to carry out their missions. The security of these systems and data is essential to prevent data tampering, disruptions in critical operations, fraud, and inappropriate disclosure of sensitive information. Protecting federal computer systems and the systems that support critical infrastructures has never been more important due to escalating threats of computer security incidents, the ease of obtaining and using hacking tools, the steady advances in the sophistication and effectiveness of attack technology, and the emergence of new and more destructive attacks. Information security is a critical consideration for any organization that depends on information systems and networks to carry out its mission or business. It is especially important for federal agencies where maintaining the public trust is essential. Without proper safeguards, there is enormous risk that individuals and groups with malicious intent may intrude into inadequately protected systems and use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Enacted into law on December 17, 2002, as title III of the E-Government Act of 2002, FISMA permanently authorized and strengthened information security program, evaluation, and reporting requirements. It assigns specific responsibilities to agency heads and chief information officers (CIO), IGs, NIST, and OMB. 
FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, this program is to include periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency, through plans of action and milestones; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.
FISMA also requires each agency to annually report to OMB, selected congressional committees, and the Comptroller General on the adequacy of information security policies, procedures, and practices and compliance with requirements. In addition, agency heads are required to annually report the results of their independent evaluations to OMB, except to the extent that an evaluation pertains to a national security system; in that case, only a summary and assessment of that portion of the evaluation is reported to OMB. Furthermore, FISMA established a requirement that each agency develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. This inventory is to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency.

Responsibilities of the Inspectors General

Under FISMA, the IG for each agency must perform an independent annual evaluation of the agency’s information security program and practices. The evaluation should include testing of the effectiveness of information security policies, procedures, and practices of a representative subset of agency systems. In addition, the evaluation must include an assessment of compliance with the act and any related information security policies, procedures, standards, and guidelines. For agencies without an IG, evaluations of nonnational security systems must be performed by an independent external auditor. Evaluations related to national security systems are to be performed by an entity designated by the agency head.
Responsibilities of the National Institute of Standards and Technology

Under FISMA, NIST is tasked with developing, for systems other than national security systems, (1) standards to be used by all agencies to categorize all their information and information systems, based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST must also develop a definition of and guidelines concerning detection and handling of information security incidents, as well as guidelines, developed in conjunction with the Department of Defense (DOD) and the National Security Agency, for identifying an information system as a national security system. The law also assigns other information security functions to NIST, including providing technical assistance to agencies on such elements as compliance with the standards and guidelines and the detection and handling of information security incidents; evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies; evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security. NIST is also required to prepare an annual public report on activities undertaken in the previous year and planned for the coming year.
Responsibilities of the Office of Management and Budget

FISMA states that the Director of OMB shall oversee agency information security policies and practices, including developing and overseeing the implementation of policies, principles, standards, and guidelines on information security; requiring agencies to identify and provide information security protections commensurate with the risk and magnitude of the harm resulting from the unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of an agency, or information systems used or operated by an agency, by a contractor of an agency, or by another organization on behalf of an agency; coordinating information security policies and procedures with related information resource management policies and procedures; overseeing agency compliance with FISMA to enforce accountability; and reviewing at least annually, and approving or disapproving, agency information security programs. In addition, the act requires that OMB report to Congress no later than March 1 of each year on agency compliance with FISMA. The 24 major federal agencies continue to have significant control weaknesses in their computer systems that threaten the integrity, confidentiality, and availability of federal information and systems. In addition, these weaknesses place financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. The weaknesses appear in the five major categories of information system controls (see fig. 1) defined in our audit methodology for performing information security evaluations and audits.
These areas are (1) access controls, which ensure that only authorized individuals can read, alter, or delete data; (2) software change controls, which provide assurance that only authorized software programs are implemented; (3) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (4) continuity of operations planning, which provides for the prevention of significant disruptions of computer-dependent operations; and (5) an agencywide security program, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented. Most agencies had weaknesses in access controls, software change controls, segregation of duties, continuity of operations, and agencywide security programs, as shown in table 1. As a result, federal information, systems, and operations were at risk of fraud, misuse, and disruption. The significance of these weaknesses has led us to continue to report information security as a material weakness in our audit of the fiscal year 2004 financial statements of the U.S. government and to continue to include it on our high-risk list. In the 24 major agencies’ fiscal year 2004 reporting regarding their financial systems, 10 reported information security as a material weakness and 12 reported it as a reportable condition. Our audits also identified similar weaknesses in nonfinancial systems. In our prior reports, listed in the Related GAO Products section, we have made specific recommendations to the agencies to mitigate identified information security weaknesses. The IGs have also made specific recommendations as part of their information security review work. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion of the data.
As detailed in our methodology for performing information security audits, organizations accomplish this by designing and implementing controls that are intended to prevent, limit, and detect access to computing resources (computers, networks, programs, and data), thereby protecting these resources from unauthorized use, modification, loss, and disclosure. Access controls can be both electronic and physical. Electronic access controls include control of user accounts, use of passwords, and assignment of user rights. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed. Physical control measures may include guards, badges, and locks, used alone or in combination. Our analysis of IG, agency, and GAO reports has shown that agencies have not always effectively implemented controls to allow only authorized individuals to read, alter, or delete data. Twenty-three of 24 major agencies had access control weaknesses. We identified weaknesses in controls such as user accounts, passwords, and access rights. For example, users created passwords that were common words. Using such words as passwords increases the possibility that an attacker could guess the password and gain access to the account. Also, agencies did not always deactivate unused accounts to prevent them from being exploited by malicious users. In addition, agencies have weaknesses in the controls that prevent unauthorized access to their networks. For example, at one agency, we found an excessive number of connections to the Internet. Each such connection could provide a path for an attacker into the agency’s network. Agencies often lacked effective physical barriers to access, including locked doors, visitor screening, and effective use of access cards. 
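The password weakness described above, users choosing common dictionary words, is straightforward to screen for. The sketch below is illustrative only; the tiny word list stands in for a real dictionary, and none of the values come from the audit:

```python
# Tiny stand-in for a real dictionary of common words.
COMMON_WORDS = {"password", "welcome", "dragon", "sunshine", "letmein"}

def weak_passwords(passwords):
    """Return passwords that match common dictionary words
    (case-insensitive) and are therefore easily guessed."""
    return [p for p in passwords if p.lower() in COMMON_WORDS]

print(weak_passwords(["Sunshine", "x9$Lq2!v", "dragon"]))
# ['Sunshine', 'dragon']
```

In practice such screening is done against large wordlists at password-set time, alongside complexity rules and account-deactivation policies like those discussed above.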
Inadequate access controls diminish the reliability of computerized data and increase the risk of unauthorized disclosure, modification, and use. As a result, critical information held by the federal government is at heightened risk of access by unauthorized persons—individuals who could obtain personal data (such as taxpayer information) to perpetrate identity theft and commit financial crimes. Software change controls ensure that only authorized and fully tested software is placed in operation. These controls, which also limit and monitor access to powerful programs and sensitive files associated with computer operations, are important in providing reasonable assurance that access controls are not compromised and that the system will not be impaired. These policies, procedures, and techniques help ensure that all programs and program modifications are properly authorized, tested, and approved. Failure to implement these controls increases the risk that unauthorized programs or changes could be, inadvertently or deliberately, placed into operation. Our analysis revealed that 22 of the 24 major agencies had weaknesses in software change controls. Weaknesses in this area included the failure to ensure that software was updated correctly and that changes to computer systems were properly approved. In addition, approval, testing, and implementation documentation for changes were not always properly maintained. Consequently, there is an increased risk that programming errors or deliberate execution of unauthorized programs could compromise security controls, corrupt data, or disrupt computer operations. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records.
Proper segregation of duties is achieved by dividing responsibilities among two or more individuals or organizational groups. Dividing duties among individuals or groups diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. Without adequate segregation of duties, there is an increased risk that erroneous or fraudulent transactions can be processed, improper program changes implemented, and computer resources damaged or destroyed. Fourteen agencies had weaknesses regarding segregation of information technology duties. Agencies did not always segregate duties for system administration from duties relating to security administration. For example, individuals at certain agencies could add fictitious users to a system with elevated access privileges and perform unauthorized activities without detection. As a result, these agencies may be exposed to an increased risk of fraud and loss. An organization must take steps to ensure that it is adequately prepared to cope with the loss of operational capabilities due to earthquake, fire, accident, sabotage, or any other disruption. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested continuity of operations plan. Such a plan should cover all key computer operations and should include planning for business continuity. This plan is essential for helping to ensure that critical information systems, operations, and data, such as financial processing and related records, can be properly restored should a disaster occur. To ensure that the plan is complete and fully understood by all key staff, it should be tested, including surprise tests, and test plans and results should be documented to provide a basis for improvement.
If continuity of operations controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete mission-critical information. Most agencies did not have adequate continuity of operations planning. Twenty of the 24 major agencies had weaknesses in this area. In our April 2005 report on federal continuity of operations plans, we determined that agencies had not developed plans that addressed all the necessary elements. For example, fewer than half the plans reviewed contained adequate contact information for emergency communications. Few plans documented the location of all vital records for the agencies, or methods of updating those records in an emergency. Further, most of the agencies had not conducted tests, training, or exercises frequently enough to have assurance that the plan would work in an emergency. Losing the capability to process, retrieve, and protect information maintained electronically can significantly affect an agency’s ability to accomplish its mission. The underlying cause for the information security weaknesses identified at federal agencies is that they have not yet fully implemented agencywide information security programs. An agencywide security program provides a framework and continuing cycle of activity for managing risk, developing security policies, assigning responsibilities, and monitoring the adequacy of the entity’s computer-related controls. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources and disproportionately high expenditures for controls over low-risk resources. 
Our analysis has shown that none of the 24 major agencies had fully implemented agencywide information security programs. Agencies often did not adequately assess risks, develop sufficient risk-based policies or procedures for information security, ensure that existing policies and procedures were implemented effectively, or monitor operations to ensure compliance and determine the effectiveness of existing controls. For example, our report on wireless networking at federal agencies revealed that the majority of agencies had not yet identified and responded to the security implications of this emerging technology at their facilities. Agencies had not developed policies and procedures for wireless technology, including configuration requirements, monitoring and compliance controls, or training requirements. Agencies are also not applying information security program requirements to emerging threats, such as spam, phishing, and spyware, which pose security risks to federal information systems. Spam consumes significant resources and is used as a delivery mechanism for other types of cyber attacks; phishing can lead to identity theft, loss of sensitive information, and reduced use of electronic government services; and spyware can capture and release sensitive data, make unauthorized changes to software, and decrease system performance. The blending of these threats creates additional risks that cannot be easily mitigated with currently available tools. Until agencies effectively and fully implement agencywide information security programs, federal data and systems will not be adequately safeguarded against unauthorized use, disclosure, and modification. Many of the weaknesses discussed have been pervasive for years; our reports attribute them to ineffective security program management—a void that FISMA was enacted to address. FISMA provides a comprehensive framework for developing effective agencywide information security programs.
Its provisions create a cycle of risk management activities necessary for effective security program management and include requirements for agencies, IGs, NIST, and OMB. The government is progressing in its implementation of the information security management requirements of FISMA, but challenges remain. For example, although the agencies report progress in implementing the provisions of the act, many agencies do not have complete, accurate inventories as required. While the IGs have conducted annual evaluations of the agencies’ information security programs as required, the lack of a commonly accepted framework for their evaluations has created issues with consistency and comparability. NIST, however, has developed a schedule for its required activities and has begun to issue required guidance, and OMB has issued guidance on the roles and responsibilities of both the agencies and NIST and has also issued annual reporting guidance and reported annually, as required, to the Congress. Our analysis of the annual reporting guidance identified opportunities to increase the usefulness of the reports for oversight. FISMA details requirements for the agencies to fulfill in order to develop a strong agencywide information security program. These key requirements are shown in figure 2. A detailed discussion of each of the requirements follows. As part of the agencywide information security program required for each agency, FISMA mandates that agencies assess the risk and magnitude of the harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of their information and information systems. Risk assessment is the first step in the risk management process; organizations use it to determine the extent of the potential threat to information and information systems and the risk associated with an information technology system throughout its systems development life cycle.
Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. Federal Information Processing Standard (FIPS) 199, Standards for Security Categorization of Federal Information and Information Systems, and related NIST guidance provide a common framework for categorizing systems according to risk. The framework establishes three levels of potential impact on organizational operations, assets, or individuals should a breach of security occur—high (severe or catastrophic), moderate (serious), and low (limited). These levels are used to determine the impact for each of the FISMA-specified security objectives of confidentiality, integrity, and availability. Once determined, security categories are to be used in conjunction with vulnerability and threat information in assessing the risk to an organization. For fiscal year 2003 FISMA reporting, OMB required agencies to provide the number and percentage of systems assessed for risk. In fiscal year 2003, half of the 24 major agencies reported assessing the level of risk for 90 to 100 percent of their systems. In addition, our review of 4 agencies’ processes for authorizing their systems found that only 72 percent of the 32 systems we reviewed had current risk assessments. Furthermore, we identified one large federal agency that did not have risk assessments for many of its systems. In fiscal year 2004, agencies were not required by OMB to report on the percentage of systems with risk assessments in their FISMA reports; therefore, information on agencies’ performance in this area since 2003 is not readily available. FISMA requires agencies to include in their information security programs risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system.
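The FIPS 199 impact levels discussed above can be represented directly. The sketch below is illustrative: it expresses a system's security category as one impact level per objective and also computes the "high-water mark" (the maximum impact across the three objectives), which is the overall system-level rule that FIPS 200 layers on top of FIPS 199:

```python
# FIPS 199 impact levels, ordered from lowest to highest.
LEVELS = ["low", "moderate", "high"]

def security_category(confidentiality: str, integrity: str,
                      availability: str) -> dict:
    """Express a FIPS 199 security category for a system, plus the
    high-water mark (maximum impact) used as its overall level."""
    impacts = {"confidentiality": confidentiality,
               "integrity": integrity,
               "availability": availability}
    return {"impacts": impacts,
            "overall": max(impacts.values(), key=LEVELS.index)}

# Moderate confidentiality, high integrity, and low availability
# impacts mean the system is handled overall as high impact.
print(security_category("moderate", "high", "low")["overall"])  # high
```

The overall level then drives which baseline of controls applies and, as described above, is combined with vulnerability and threat information when assessing risk.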
These policies include determining security control costs and developing minimally acceptable system configuration requirements. To indicate implementation of the security cost-benefit provisions in FISMA, OMB requires that agencies’ budget submissions specifically identify and integrate security costs as part of life-cycle costs for their information technology investments. It has also provided criteria to be considered in determining such costs and requires that the agencies report the number of their systems that have security control costs integrated into their system life cycles. Fiscal year 2004 data for this measure showed that agencies are reporting increases in integrating the cost of security controls into the life cycle of their systems. Specifically, 19 agencies reported integrating security control costs for 90 percent or more of their systems, an increase from 9 agencies in 2003. Governmentwide, OMB reported that 85 percent of agencies’ systems had security costs built into the life cycle of the system, an increase of 8 percentage points over fiscal year 2003. If agencies do not plan for security costs in the life cycle of their systems, they may not allocate adequate resources to ensure ongoing security for federal information and information systems. FISMA requires each agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements, as determined by the agency. In fiscal year 2004, for the first time, agencies reported on the degree to which they had implemented security configurations for specific operating systems and software applications. Our analysis of the 2004 agency FISMA reports found that 20 agencies reported that they had implemented agencywide policies containing detailed, specific system configurations.
However, these agencies did not necessarily have minimally acceptable system configuration requirements for operating systems and software applications that they were running. Specifically, some agencies reported having system configurations, but they did not always implement them on their systems. Of the remaining 4 agencies, 1 reported that it did not have system configurations, and 3 agencies provided insufficient data to determine their status for this measure. FISMA requires that agencywide information security programs include subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. These plans are commonly referred to as system security plans. According to NIST guidance, the purpose of these plans is to (1) provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements and (2) delineate the responsibilities and expected behavior of all individuals who access the system. In fiscal year 2003, federal agencies reported that they had developed system security plans for 73 percent of agency systems. Although OMB did not require agencies to report on this measure for fiscal year 2004, analysis of the IG FISMA reports for that year revealed that agencies had weaknesses in their system security plans. For example, IGs noted instances where security plans were not developed for all systems or applications. Other weaknesses included plans that were not updated after the systems were significantly modified. Without current, complete system security plans, agencies cannot be assured that vulnerabilities have been mitigated to acceptable levels. 
FISMA requires agencies to provide security awareness training to inform personnel, including contractors and other users of information systems that support the operations and assets of the agency, of information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. In addition, agencies are required to provide appropriate training on information security to personnel with significant security responsibilities. Agencies reported the number and percentage of employees and contractors who received information security awareness training and the number and percentage of employees with significant security responsibilities who received specialized training. Our analysis found that agencies were reporting increases in the number and percentages of employees and contractors who have received security awareness training, but many of the agencies reported a decline in the percentage of employees with significant security responsibilities who have received specialized training. For example, 18 of the 24 major agencies reported increasing percentages of employees and contractors who received security awareness training in fiscal year 2004. Furthermore, all 24 agencies reported that they provided security awareness training to 60 percent or more of their employees and contractors for fiscal year 2004, up from 19 agencies in fiscal year 2003. Similarly, 17 agencies reported that they provided security awareness training for 90 percent or more of their employees, an increase from 13 agencies in 2003 (see fig. 3). However, the governmentwide percentage of employees with significant security responsibilities receiving specialized training decreased from 85 to 81 percent in fiscal year 2004. More specifically, 10 agencies reported decreases in this performance measure. Figure 4 shows the fiscal year 2004 results for this area. 
Failure to provide up-to-date information security awareness training could contribute to the information security problems at agencies. For example, in our report on wireless networks, we determined that the majority of agencies did not address wireless security issues in security awareness training. As a result, their employees may not have been aware of the security risks when they set up unauthorized wireless networks. FISMA requires that agency information security programs include periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices to be performed with a frequency that depends on risk, but no less than annually. This is to include testing of management, operational, and technical controls of every information system identified in the FISMA-required inventory of major systems. Periodically evaluating the effectiveness of security policies and controls and acting to address any identified weaknesses are fundamental activities that allow an organization to manage its information security risks proactively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. Further, management control testing and evaluation as part of program reviews is an additional source of information that can be considered along with control testing and evaluation in IG and other independent audits to help provide a more complete picture of the agencies’ security postures. OMB requires that agencies report the number of systems annually for which security controls have been reviewed. In 2004, 23 agencies reported that they had reviewed 90 percent or more of their systems, as compared to only 11 agencies in 2003 that were able to report those numbers (see fig. 5). However, agencies have not reported the same progress in addressing reviews of contractor operations. 
Even though the overall average of contractor operations reviewed for the 24 major agencies increased slightly to 83 percent in fiscal year 2004, 8 agencies reported reviewing less than 60 percent of their contractor operations (see fig. 6). As a result, agencies cannot be assured that federal information and information systems managed by contractors are protected in accordance with agency policies. Our recent report on the oversight of contractor operations indicated that the methods that agencies are using to ensure information security oversight have limitations and need strengthening. For example, most agencies have not incorporated FISMA requirements, such as annual testing of controls, into their contract language. Additionally, most of the 24 major agencies reported having policies for contractors and users with privileged access to federal data and systems; however, our analysis of submitted agency policies found that only 5 agencies had established specific information security oversight policies. Finally, while the majority of agencies reported using a NIST self-assessment tool to review contractor security capabilities, only 10 agencies reported using the tool to assess users with privileged access to federal data and systems, which may expose federal data to increased risk. Another requirement of FISMA is that agencies’ information security programs include a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in information security policies, procedures, and practices. Developing effective corrective action plans is key to ensuring that remedial action is taken to address significant deficiencies. These remediation plans, called plans of action and milestones by OMB, are to list the weaknesses and show estimated resource needs or other challenges to resolving them, key milestones and completion dates, and the status of corrective actions. 
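The elements OMB expects in a plan of action and milestones, as listed above, can be captured in a simple record type. The sketch below is hypothetical; the field names are illustrative assumptions, not OMB's actual data format.

```python
# Hypothetical sketch of a plan of action and milestones (POA&M) entry,
# carrying the elements OMB lists: the weakness, estimated resource
# needs or other challenges, key milestones with completion dates, and
# the status of corrective actions. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Milestone:
    description: str
    completion_date: date

@dataclass
class PoamEntry:
    weakness: str
    estimated_resources: str              # resource needs or other challenges
    milestones: list = field(default_factory=list)
    status: str = "ongoing"               # status of corrective actions

entry = PoamEntry(
    weakness="Audit logging disabled on a financial system",
    estimated_resources="0.5 FTE; funding for log-management tooling",
    milestones=[
        Milestone("Enable audit logging", date(2005, 6, 30)),
        Milestone("Establish periodic log review", date(2005, 9, 30)),
    ],
)
print(entry.status)  # ongoing
```

Structuring each weakness this way is one means by which a remediation process could track milestones and completion dates for oversight.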
OMB requires agencies to report whether they have a remediation plan for all programs and systems where a security weakness has been identified. OMB also requested that IGs assess whether the agency has developed, implemented, and managed an agencywide process for these plans. According to the IGs’ assessments of their agencies’ remediation processes, 14 of the 24 major agencies did not consistently (“almost always”) incorporate information security weaknesses for all systems into their remediation plans. The IGs also reported that 13 agencies did not use the remediation process to prioritize information security weaknesses more than 95 percent of the time, a step that helps ensure that significant weaknesses are addressed in an efficient and timely manner. Without a sound remediation process, agencies cannot efficiently and effectively correct weaknesses in their information security programs. Although even strong controls may not block all intrusions and misuse, organizations can reduce the risks associated with such events if they take steps to detect and respond to them before significant damage occurs. Accounting for and analyzing security problems and incidents are also effective ways for an organization to gain a better understanding of threats to its information and of the cost of its security-related problems. Such analyses can also pinpoint vulnerabilities that need to be addressed to help ensure that they will not be exploited again. Problem and incident reports can, therefore, provide valuable input for risk assessments, help in prioritizing security improvement efforts, and be used to illustrate risks and related trends in reports to senior management.
FISMA requires that agencies’ information security programs include procedures for detecting, reporting, and responding to security incidents; mitigating risks associated with such incidents before substantial damage is done; and notifying and consulting with the information security incident center and other entities, as appropriate, including law enforcement agencies and relevant IGs. NIST has provided guidance to assist organizations in establishing computer security incident-response capabilities and in handling incidents efficiently and effectively. OMB requires agencies to report information related to security incident reporting. This information includes whether the agency follows documented policies and procedures for reporting incidents internally, externally to law enforcement, and to the United States Computer Emergency Readiness Team (US-CERT). Information reported for this requirement varied widely across the agencies. Some agencies reported relatively few incidents internally (fewer than 10), while others reported as many as 600,000 incidents. Half (12 of 24) of the major agencies’ CIOs stated that they reported between 90 and 100 percent of incidents to US-CERT. One agency reported between 75 and 89 percent of incidents to US-CERT. The other agencies said that they reported 49 percent or fewer of their incidents to US-CERT or provided information that was not comparable. OMB stated in its March 1, 2005, FISMA report that it was concerned that very low numbers of incidents were being reported to US-CERT. Our work in this area also indicated that agencies were not consistently reporting security incidents. Without adequate reporting, the federal government cannot be fully aware of possible threats. FISMA requires that agencywide information security programs include plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. 
Contingency plans provide specific instructions for restoring critical systems, including such elements as arrangements for alternative processing facilities in case the usual facilities are significantly damaged or cannot be accessed due to unexpected events such as temporary power failure, accidental loss of files, or a major disaster. It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. The testing of contingency plans is essential to determining whether the plans will function as intended in an emergency situation. The most useful tests involve simulating a disaster situation to test overall service continuity. Such a test would include testing whether the alternative data processing site will function as intended and whether critical computer data and programs recovered from off-site storage are accessible and current. In executing the plan, managers will be able to identify weaknesses and make changes accordingly. Moreover, tests will assess how well employees have been trained to carry out their roles and responsibilities in a disaster situation. To show the status of implementing this requirement, OMB required that agencies report the percentage of systems that have a contingency plan and the percentage that have contingency plans that have been tested. Overall, federal agencies reported that 57 percent of their systems had contingency plans that had been tested. Although 19 agencies reported increases in the testing of contingency plans, 6 agencies reported that less than 50 percent of their systems had tested contingency plans (see fig. 7). Also, 3 agencies reported having contingency plans for all their systems, but only 1 reported testing the plans for all of them. Without testing, agencies have limited assurance that they will be able to recover mission-critical applications, business processes, and information in the event of an unexpected interruption.
FISMA also requires that each agency develop, maintain, and annually update an inventory of major information systems operated by the agency or under its control. A complete and accurate inventory of major information systems is a key element of managing the agency’s information technology resources, including the security of those resources. The inventory is used to track the agency systems for annual testing and evaluation and contingency planning. In addition, the total number of agency systems is a key element in OMB’s performance measures, in that agency progress is indicated by the percentage of total systems that meet specific information security requirements. Thus, inaccurate or incomplete data on the total number of agency systems affect the percentage of systems shown as meeting the requirements. In fiscal year 2004 FISMA reports, 20 of the 24 major agencies reported having complete, accurate inventories that were updated at least annually. However, there was disagreement between the agencies and IGs regarding the accuracy of the number of programs, systems, and contractor operations or facilities. For instance, although 20 agencies reported having inventories that were updated at least annually, only 8 IGs agreed with the accuracy of those inventories. Without complete, accurate inventories, agencies cannot efficiently maintain and secure their systems. Moreover, the performance measures that are stated as a percentage of systems, including systems and contractor operations reviewed annually, contingency plans tested, and certification and accreditation, may not accurately reflect the extent to which these security practices have been implemented. In addition to the FISMA requirements, OMB requires agencies to report on their certification and accreditation process.
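The sensitivity of percentage-based performance measures to an incomplete inventory, noted above, can be illustrated with a small computation. The figures below are hypothetical and are not drawn from any agency's reports.

```python
# Hypothetical illustration of how an incomplete system inventory
# inflates a percentage-based FISMA performance measure.
def percent_meeting(systems_meeting: int, total_systems: int) -> float:
    """Percentage of inventoried systems meeting a requirement."""
    return 100.0 * systems_meeting / total_systems

systems_certified = 90  # systems certified and accredited (hypothetical)

# Reported against an incomplete inventory of 100 systems:
print(percent_meeting(systems_certified, 100))   # 90.0

# Against the true inventory of 120 systems, the same 90 certified
# systems represent a noticeably lower rate:
print(percent_meeting(systems_certified, 120))   # 75.0
```

In this hypothetical case, 20 uninventoried systems are enough to turn a 75 percent compliance rate into an apparent 90 percent, which is why inventory accuracy underpins every percentage-based measure OMB reports.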
Certification and accreditation is the requirement that agency management officials formally authorize their information systems to process information, thereby accepting the risk associated with their operation. This management authorization (accreditation) is to be supported by a formal technical evaluation (certification) of the management, operational, and technical controls established in an information system’s security plan. The certification and accreditation process is not itself mandated by FISMA, but it incorporates statutory requirements such as risk assessments and security plans; OMB therefore eliminated separate reporting requirements for risk assessments and security plans. For annual reporting, OMB requires agencies to report the number of systems authorized for processing after completing certification and accreditation. For fiscal year 2004, OMB’s guidance also requested that IGs assess their agencies’ certification and accreditation process. Data reported for this measure showed overall increases for most agencies. According to OMB, 77 percent of government systems had undergone certification and accreditation for fiscal year 2004. For example, 19 of the 24 major agencies reported increasing percentages from fiscal year 2003 to fiscal year 2004. In addition, 17 agencies reported percentages of systems certified and accredited at or above 90 percent (see fig. 8). Although agencies have reported progress in certifying and accrediting their systems, weaknesses in the process remain. In a previously issued report, we determined that agencies were unclear on the number of systems that should undergo the process, were inconsistent in their reporting of certification and accreditation performance data, and lacked quality assurance policies and procedures relating to the certification and accreditation process. The IGs also reported weaknesses in the certification and accreditation process in their fiscal year 2004 FISMA reports.
For example, IGs reported systems that did not have formal authorization to operate or were missing critical elements such as security plans, risk assessments, and contingency plans. Furthermore, OMB’s March 2005 report to Congress noted that seven IGs rated their agencies’ certification and accreditation process as poor. Therefore, agencies’ reported data may not accurately reflect the status of an agency’s implementation of this requirement. FISMA requires the IGs to perform an independent evaluation of the information security program and practices of the agency to determine the effectiveness of such programs and practices. Each evaluation should include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency’s information systems and (2) assessing compliance (based on the results of the testing) with FISMA requirements and related information security policies, procedures, standards, and guidelines. The IGs have conducted annual evaluations as required and have reported on the results. However, they do not have a common approach to the annual evaluations. As a result, IGs may not be performing their evaluations with maximum effectiveness and efficiency or with adequate quality control. A commonly accepted framework or methodology for the FISMA independent evaluations could provide improved effectiveness, increased efficiency, quality control, and consistency of application. Such a framework may provide improved effectiveness of the annual evaluations by ensuring that compliance with FISMA and all related guidance, laws, and regulations is considered in the performance of the evaluation. IGs may be able to use the framework to be more efficient by focusing evaluative procedures on areas of higher risk and by following an integrated approach designed to gather evidence efficiently.
A commonly accepted framework may offer quality control by providing a standardized methodology that can be followed by all personnel. Finally, IGs may obtain consistency of application through a documented methodology. A commonly accepted framework for performing the annual FISMA evaluation could offer additional benefits as well. For example, it might allow the IGs to coordinate on information security issues, weaknesses, and initiatives that cross agency lines. It could also facilitate appropriate coverage of major federal contractors who serve multiple federal agencies. Such a framework could provide assistance to the smaller IG offices by allowing them to leverage lessons learned by larger IG offices, for example, through the development and use of model statements of work for FISMA contracts. Finally, the usefulness and comparability of the IGs’ annual evaluations for oversight bodies may be improved by the adoption of a framework for the FISMA independent evaluations. The current inconsistencies in methodology affect the consistency and comparability of reported results. As a result, the usefulness of the IG reviews for assessing the governmentwide information security posture is potentially reduced. The President’s Council on Integrity and Efficiency has recognized the importance of having a framework and is working to develop one for FISMA reviews. The Council is including both OMB and us in its deliberations. The Council, which currently maintains The Financial Audit Manual, a commonly accepted framework for the performance of government financial audits, brings expertise and experience to the development of a FISMA evaluation framework. NIST has developed a plan for releasing important guidance for the agencies and fulfilling its other responsibilities under FISMA. 
NIST is required, among other things, to issue guidance on information security policies and practices for the agencies, provide technical assistance, conduct research as needed in information security, and assist in the development of standards for national security systems. After FISMA was enacted, NIST developed the FISMA Implementation Project to enable it to fulfill its statutory requirements in a timely manner. The project is divided into three phases. Phase I focuses on the development of a suite of security standards and guidelines required by FISMA as well as other FISMA-related publications necessary to create a robust information security program and effectively manage risk to agency operations and agency assets. NIST has already issued one FIPS, which covers the categorization of systems according to risk. A second FIPS concerning the minimum security requirements for each risk category is due out soon. NIST has also issued guidance to assist the agencies in determining the correct risk level for systems and mapping the systems to the correct categories. This stage is due to be completed in 2006. The status of the guidance is shown in figure 9. Phase II will focus on the development of a program for accrediting public and private sector organizations to conduct security certification services for federal agencies, as part of agencies’ certification and accreditation requirements. Organizations that participate in the organizational accreditation program can demonstrate competency in the application of NIST security standards and guidelines. NIST states that developing a network of accredited organizations with demonstrated competence in the provision of security certification services will give federal agencies greater confidence in the acquisition and use of such services. Phase II is planned for fiscal year 2006. Phase III is the development of a program for validating security tools. 
The program will rely on private sector, accredited testing laboratories to conduct evaluations of the security tools. NIST will provide validation services and laboratory oversight. Implementation of this phase is also planned for fiscal year 2006. The agency has also made progress in implementing other requirements. For example, it is continuing to provide consultative services to agencies on FISMA-related information security issues and has established a Web site for federal agencies to identify, evaluate, and disseminate best practices for critical infrastructure protection and security. In addition, it has established a Web site for the private sector to share nonfederal information security practices. NIST has continued an ongoing dialogue with the National Security Agency and the Committee on National Security Systems to coordinate and take advantage of the security work these entities have under way within the federal government. In addition to the specific responsibilities to develop standards and guidance, other information security activities undertaken by NIST include operating a computer security expert assist team to assist federal agencies in identifying and resolving security problems; conducting security research in areas such as access control, wireless, mobile agents, smart cards, and quantum computing; improving the security of control systems that manage key elements of the country’s critical infrastructure; and performing cyber security product certifications required for government procurements. Finally, NIST issued its annual status reports as required by FISMA in April of 2003 and 2004. According to FISMA, the Director of OMB is responsible for developing and overseeing the implementation of information security at the agencies. OMB reported that it has used the information gathered under this act to assist it in focusing its attention and resources on poorly performing agencies. 
To oversee the implementation of policies and practices relating to information security, OMB has issued guidance to the agencies on their requirements under FISMA. In its annual memorandum on reporting, it instructed agencies that the use of NIST standards and guidance was required. OMB has updated its budget guidance to gather data on information security at the agencies. For example, it asks the agencies to estimate a percentage of the total investment in information technology that is associated with security. Agencies are asked to consider the products, procedures, and personnel that are dedicated primarily to provision of security. These procedures include FISMA requirements, such as risk assessments, security plans, education and training, system reviews, remedial plans, contingency planning and testing, and reviews or inspections of contractor operations. To oversee agency compliance with FISMA, OMB relies on annual reporting by the agencies and the IGs. It reported the results of this annual reporting to Congress by March 1 in 2004 and 2005, as required by FISMA. In these reports, it evaluated the agencies’ reported data against performance measures it had developed. On August 23, 2004, OMB issued its fiscal year 2004 reporting instructions. The reporting instructions, similar to the 2003 instructions, emphasized a strong focus on performance measures and formatted these instructions to emphasize a quantitative, rather than a narrative, response. OMB stated that it is using a combination of sources to fulfill its requirement under FISMA to annually approve or disapprove of agencies’ information security programs; some information is taken from security and privacy information submitted by the agencies during the budget process, and other information comes from the annual reporting. 
Periodic reporting of performance measures for FISMA requirements and related analysis provides valuable information on the status and progress of agency efforts to implement effective security management programs. However, as we have recently testified, our analysis of OMB’s annual reporting guidance identified areas where additional reporting requirements would increase the usefulness of annual reports for oversight. These areas include reporting on the quality of agency processes, reporting of data according to system risk, reporting on all aspects of key FISMA requirements, and ensuring clarity of reporting instructions.

Limited Assurance of the Quality of Agency Processes

Current performance measures offer limited assurance of the quality of agency processes that implement key security policies, controls, and practices. For example, for the annual review process, agencies report the number of agency systems and contractor operations they reviewed. They also report on, and the IGs confirm, whether they used appropriate guidance. However, reporting on the quality of the reviews, such as whether guidance was applied correctly or whether results were tracked for remediation, is not required. Moreover, as mentioned previously, our work in this area revealed that the methods agencies were using for the reviews had limitations and needed strengthening. Providing information on the quality of the review process would further enhance the usefulness of the annually reported data in this area for management and oversight purposes. OMB has recognized the need for assurance of quality for agency processes. For example, it specifically requested that the IGs evaluate the plan of action and milestones process and the certification and accreditation process at their agencies. The results of these evaluations call into question the reliability and quality of the data reported by several agencies.
Therefore, increased risk exists that the performance data reported by the agencies may not accurately reflect the status of agencies’ implementation of these information security activities.

Data Not Reported According to System Risk

Performance measurement data are reported on the total number of agency systems but do not indicate the assessed level of risk of those systems. Reporting by system risk could provide information about whether agencies are prioritizing their information security efforts according to risk. For example, the performance measures for fiscal year 2004 show that 57 percent of the total number of systems have tested contingency plans, but do not indicate to what extent this 57 percent includes the agencies’ high- or moderate-risk systems. Therefore, agencies, the administration, and Congress cannot be sure that critical federal operations can be restored if an unexpected event disrupts service.

Reporting Does Not Include Aspects of Key Requirements

Currently, OMB reporting guidance and performance measures do not include separate and complete reporting on FISMA requirements. For example, FISMA requires agencies to have procedures for detecting, reporting, and responding to security incidents. Currently, the annual reporting developed by OMB focuses on incident reporting: how the agencies are reporting their incidents internally, to law enforcement, and to US-CERT. Although incident reporting is an important aspect of incident handling, it is only one part of the process. Additional questions that cover incident detection and response activities would be useful to oversight bodies in determining the extent to which agencies have implemented capabilities for managing security incidents. Reporting on the remediation process also omits a key aspect of that process. Current reporting guidance asks about the inclusiveness of the plans, i.e.,
whether all known information security weaknesses are included; however, whether and how weaknesses are mitigated is not reported. For example, the agencies do not report what percentage of existing weaknesses they have remedied during the year. In addition, agencies do not report the risk level of the systems on which the weaknesses are found. Valuable information could be provided to oversight bodies by posing additional questions on the remediation process. The annual reporting process also does not include separate reporting on certain FISMA requirements. For example, in the 2004 guidance, OMB eliminated separate reporting on risk assessments and security plans. Because the guidance on the certification and accreditation process required both risk assessments and security plans, OMB did not require agencies to answer separate questions in these areas. Although OMB did ask for the IGs’ assessments of the certification and accreditation process, it did not require them to comment separately on these specific requirements. As a result, agency management, Congress, and OMB do not have complete information on the status of agencies’ implementation efforts for these requirements. Several questions in OMB’s 2004 reporting guidance could be subject to differing interpretations by IGs and the agencies. For example, one of the questions asked the IGs whether they and their agency used the plan of action and milestones as a definitive management tool; however, IGs are not required to use these plans. Therefore, a negative answer to this question could mean either that the agency and the IG were not using the plan, or that only one of them was not using it. As a result, it may erroneously appear that agencies were not using the plans as the major management tool for remediation of identified weaknesses as required by OMB. Another example of differing interpretations was one of the inventory questions.
It asked if the IG and agency agreed on the number of programs, systems, and contractor operations in the inventory. Since the question could be interpreted two ways, the meaning of the response was unclear. For example, if an IG replied in the negative, it could mean that while the IG agreed with the total numbers in the inventory, it disagreed with how the agency identified whether the inventory entry was a program, system, or contractor operations. Alternatively, a negative response could mean that the IG disagreed with the overall accuracy of the inventory. Additional questions in the areas of configuration management and certification and accreditation also generated confusion. As a result, unclear reporting instructions may have decreased the reliability and consistency of reported performance data.

Federal agencies have not consistently implemented effective information security policies and practices. As a result, pervasive weaknesses exist in almost all areas of information security controls. These weaknesses place federal operations and assets at risk of fraud, misuse, and abuse, and may put financial data at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. In our prior reports, as well as in reports by the IGs, specific recommendations were made to the agencies to mitigate identified information security weaknesses. The government is progressing in implementing FISMA requirements; the agencies, IGs, NIST, and OMB have all made advances in fulfilling their requirements. However, current reporting under FISMA by the agencies produces performance data that may not accurately reflect the status of agencies’ implementation of required information security policies and procedures. Oversight entities are not able to determine from the reports a true or complete picture of the adequacy and effectiveness of agencies’ information security programs.
However, opportunities exist to improve reporting guidance that might lead to more useful and complete information on the implementation of agencies’ information security programs. Until such information is available, there is little assurance that the pervasive weaknesses in agencywide information security programs are being addressed. We recommend that the Director of OMB take the following four actions in revising future FISMA reporting guidance: request the inspectors general to report on the quality of additional agency processes, such as the annual system reviews; require agencies to report FISMA data by risk category; ensure that all aspects of key FISMA requirements are reported on; and review guidance to ensure clarity of instructions.

In written comments on a draft of this report (reprinted in app. II), the Administrator, Office of E-Government and Information Technology, OMB, agreed with our overall assessment of information security at the agencies, but disagreed with one of our recommendations to enhance FISMA reporting guidance and provided comments on the others. In addition, the Administrator made several general comments. In commenting on our recommendation that OMB guidance request that the IGs report on the quality of additional agency processes, OMB stated that its current guidance has provided the IGs with the opportunity to include supporting narrative responses for all questions and that the guidance encourages the IGs to provide any additional meaningful information they may have. We acknowledge that OMB has given the agency IGs the opportunity to include such additional information as they believe may be helpful. However, since specific information was not requested, the resulting information that was reported, if any, was not consistent or comparable across the agencies and over time. In our report, we noted that OMB has recognized the need for assurance of quality for agency processes.
For example, OMB specifically requested that the IGs evaluate the plans of actions and milestones and the certification and accreditation processes at their agencies. We believe that additional processes, such as the annual system review process, should also be assessed for quality. This would further enhance the usefulness of the annually reported data for management and oversight purposes. Regarding our recommendation to include FISMA data by risk category, OMB noted in its comments that this recommendation is now addressed by its fiscal year 2005 FISMA reporting guidance. This guidance was issued in June 2005. In responding to our recommendation to ensure that all key FISMA requirements are reported on in the annual reports, OMB disagreed with our assessment that additional sub-elements are necessary in its reporting guidance and stated that its reporting guidance satisfies all FISMA requirements through a combination of data collection and specialized questions. OMB cited as examples its performance data on agencies’ certification and accreditation processes and its questions to IGs regarding the quality of agency corrective plans of actions and milestones. In addition, it commented that its guidance complied with the remainder of FISMA’s reporting requirements by having agencies respond to specialized questions. As noted in our report, some FISMA requirements are not specifically being addressed through these means, such as reporting on risk assessments, subordinate security plans, security incident detection and response activities, and whether weaknesses are mitigated. We agree with OMB that the process of certification and accreditation requires agencies to document risk assessments and security plans. However, as stated in our report, the IGs reported that agencies’ certification and accreditation processes were missing security plans, risk assessments, and contingency plans. Furthermore, seven IGs rated their agencies’ certification and accreditation processes as poor.
Since the quality of the certification and accreditation processes at some agencies has been called into question by the IGs, we believe reporting separately on the risk assessments and security plans at this time may provide better information on the status of agencies’ information security implementation efforts. OMB commented on our recommendation that it review guidance to ensure clarity of instructions by stating that its staff worked with agencies and the IGs throughout the year when developing the guidance and, in particular, during the reporting period to ensure that agencies adequately understood the reporting instructions. We acknowledge OMB’s efforts to help ensure better clarity, but believe more needs to be done. As noted in this report, several questions in the guidance could be subject to differing interpretations. For example, questions in the areas of plans of actions and milestones, inventory, configuration management, and certification and accreditation generated confusion. As a result, the reported data may contain erroneous information, and its reliability and consistency could be decreased. OMB also strongly disagreed with any inference in the draft report that its reporting guidance fails to meet the requirements of FISMA. We did not make such a statement. Rather, our report states that OMB needs to enhance its reporting guidance to the agencies so that the annual FISMA reports provide more information essential for effective oversight. Similarly, OMB commented that our report included the suggestion that, unless it asked a specific question in a particular way and agencies answered those questions once each year, agencies would not implement FISMA or provide adequate, cost-effective security for their information and systems. This characterization of our report is incorrect. We noted that specific recommendations were previously made to the agencies to remedy identified information security weaknesses.
Our recommendations in this report address the need for OMB to enhance its FISMA reporting guidance to increase the effectiveness and reliability of annual reporting. Our report also emphasized the need to improve FISMA data for oversight purposes. We believe that OMB can achieve this by implementing our recommendations.

We are sending copies of this report to the Director of OMB and to interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available on GAO’s Web site at http://www.gao.gov. If you have any questions or wish to discuss this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

In accordance with the FISMA requirement that the Comptroller General report periodically to the Congress, our objectives were to evaluate (1) the adequacy and effectiveness of agencies’ information security policies and practices and (2) implementation of FISMA requirements. To assess the adequacy and effectiveness of agencies’ information security policies and practices, we analyzed our related reports issued from the beginning of fiscal year 2003 through May of 2005. We also reviewed and analyzed the information security work and products of the IGs. Both our reports and the IGs’ products used the methodology contained in The Federal Information System Controls Audit Manual. Further, we reviewed and analyzed data on information security in federal agencies’ performance and accountability reports.
To assess implementation of FISMA requirements, we reviewed and analyzed the Federal Information Security Management Act (Public Law 107-347); the 24 major federal agencies’ and their Offices of Inspector General’s FISMA reports for fiscal years 2003 and 2004, as well as the performance and accountability reports for those agencies; the Office of Management and Budget’s FISMA guidance and mandated annual reports to Congress; and the National Institute of Standards and Technology’s standards, guidance, and annual reports. We also held discussions with agency officials and the agency inspectors general to further assess the implementation of FISMA requirements. We did not include systems categorized as national security systems in our review, nor did we review the adequacy or effectiveness of the security policies and practices for those systems. Our work was conducted in Washington, D.C., from September 2004 through May 2005 in accordance with generally accepted government auditing standards.

The following are GAO’s comments on OMB’s letter dated June 29, 2005.

1. As noted in our report, some FISMA requirements are not specifically being addressed by OMB’s reporting instructions, such as reporting on risk assessments, subordinate security plans, security incident detection and response activities, and whether weaknesses are mitigated. We agree with OMB that the process of certification and accreditation requires agencies to document components of security planning such as risk assessment. However, as stated in our report, the IGs reported the certification and accreditation process included missing security plans, risk assessments, and contingency plans. Furthermore, seven IGs rated their agencies’ certification and accreditation processes as poor.
Since the quality of the certification and accreditation process has been called into question by some IGs, we believe that reporting separately on the components at this time may provide better information on the status of agencies’ information security implementation efforts. Also, we disagree that our report indicates that OMB’s reporting guidance fails to meet the requirements of FISMA. We did not make such a statement. Rather, our report states that OMB needs to enhance its reporting guidance to the agencies so that the annual FISMA reports provide more information essential for effective oversight.

2. We disagree with OMB’s comment that our report included the suggestion that, unless OMB asked a specific question in a particular way and agencies answered those questions once each year, agencies would not implement FISMA or provide adequate, cost-effective security for their information and systems. We make no such statement or suggestion. OMB also stated that responsibility and accountability for implementation and compliance with FISMA rests with the agencies, including monitoring their own performance throughout the year. As noted in our report, FISMA clearly defines separate roles and responsibilities for federal agencies and their IGs, NIST, and OMB, to provide a comprehensive framework for ensuring the effectiveness of information security controls. Therefore, we cannot fully agree with OMB’s statement that responsibility and accountability for implementation and compliance with FISMA rests with the agencies. All parties included in the act share in the responsibility. We do agree, however, that FISMA includes the requirement that agencies monitor their own performance throughout the year.

3. OMB’s reporting guidance does not specifically address the issue of the quality of agency processes used to gather information for FISMA reporting.
We acknowledge that OMB has given the agency IGs the opportunity to include such additional information as they believe may be helpful. However, since specific information has not been requested, the resulting reported information has not been consistent or comparable across the agencies and over time. In our report, we noted that OMB has recognized the need for assurance of quality for certain agency processes. For example, it specifically requested that the IGs evaluate the plan of actions and milestones process and the certification and accreditation process at their agencies. We believe that additional processes, such as the annual system reviews, should be assessed for quality. Providing information on the quality of the review process would further enhance the usefulness of the annually reported data for management and oversight purposes.

4. We acknowledge OMB’s efforts to help ensure better clarity but believe more needs to be done. As we noted in our report, several questions could be subject to differing interpretations. Questions in the areas of plans of actions and milestones, inventory, configuration management, and certification and accreditation generated confusion. As a result, the reported data may contain erroneous information, and its reliability and consistency may be decreased.

5. The guidance to report FISMA data by risk category was issued on June 13, 2005—after our draft report was provided to OMB for comment. Reporting by system risk could provide information about whether agencies are appropriately prioritizing their information security efforts.

6. In this report, we do not propose solutions to agency information security weaknesses. Rather, we reported that pervasive weaknesses in federal agencies’ information security policies and practices place data at risk. This statement is supported by our prior reports and reports by the IGs.
We noted that, in those prior reports, specific recommendations were made to the agencies to remedy identified information security weaknesses. In this report, we recommended that OMB enhance FISMA reporting guidance to increase the effectiveness and reliability of annual reporting.

Larry Crosland, Season Dietrich, Nancy Glover, Carol Langelier, Suzanne Lightman, and Stephanie Lee made key contributions to this report.

Information Security: Federal Deposit Insurance Corporation Needs to Sustain Progress. GAO-05-486. Washington, D.C.: May 19, 2005.
Information Security: Federal Agencies Need to Improve Controls Over Wireless Networks. GAO-05-383. Washington, D.C.: May 17, 2005.
Information Security: Emerging Cybersecurity Issues Threaten Federal Information Systems. GAO-05-231. Washington, D.C.: May 13, 2005.
Continuity of Operations: Agency Plans Have Improved, but Better Oversight Could Assist Agencies in Preparing for Emergencies. GAO-05-577. Washington, D.C.: April 28, 2005.
Continuity of Operations: Agency Plans Have Improved, but Better Oversight Could Assist Agencies in Preparing for Emergencies. GAO-05-619T. Washington, D.C.: April 28, 2005.
Information Security: Improving Oversight of Access to Federal Systems and Data by Contractors Can Reduce Risk. GAO-05-362. Washington, D.C.: April 22, 2005.
Information Security: Internal Revenue Service Needs to Remedy Serious Weaknesses over Taxpayer and Bank Secrecy Act Data. GAO-05-482. Washington, D.C.: April 15, 2005.
Information Security: Department of Homeland Security Faces Challenges in Fulfilling Statutory Requirements. GAO-05-567T. Washington, D.C.: April 14, 2005.
Information Security: Continued Efforts Needed to Sustain Progress in Implementing Statutory Requirements. GAO-05-483T. Washington, D.C.: April 7, 2005.
Information Security: Securities and Exchange Commission Needs to Address Weak Controls over Financial and Sensitive Data. GAO-05-262. Washington, D.C.: March 23, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Financial Management: Department of Homeland Security Faces Significant Financial Management Challenges. GAO-04-774. Washington, D.C.: July 19, 2004.
Information Security: Agencies Need to Implement Consistent Processes in Authorizing Systems for Operation. GAO-04-376. Washington, D.C.: June 28, 2004.
Information Technology: Training Can Be Enhanced by Greater Use of Leading Practices. GAO-04-791. Washington, D.C.: June 24, 2004.
Information Security: Agencies Face Challenges in Implementing Effective Software Patch Management Processes. GAO-04-816T. Washington, D.C.: June 2, 2004.
Information Security: Continued Action Needed to Improve Software Patch Management. GAO-04-706. Washington, D.C.: June 2, 2004.
Information Security: Information System Controls at the Federal Deposit Insurance Corporation. GAO-04-630. Washington, D.C.: May 28, 2004.
Technology Assessment: Cybersecurity for Critical Infrastructure Protection. GAO-04-321. Washington, D.C.: May 18, 2004.
Continuity of Operations: Improved Planning Needed to Ensure Delivery of Essential Services. GAO-04-638T. Washington, D.C.: April 22, 2004.
Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-628T. Washington, D.C.: March 30, 2004.
Information Security: Continued Efforts Needed to Sustain Progress in Implementing Statutory Requirements. GAO-04-483T. Washington, D.C.: March 16, 2004.
Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-354. Washington, D.C.: March 15, 2004.
Information Security: Technologies to Secure Federal Systems. GAO-04-467. Washington, D.C.: March 9, 2004.
Continuity of Operations: Improved Planning Needed to Ensure Delivery of Essential Government Services. GAO-04-160. Washington, D.C.: February 27, 2004.
Information Security: Further Efforts Needed to Address Serious Weaknesses at USDA. GAO-04-154. Washington, D.C.: January 30, 2004.
Information Security: Improvements Needed in Treasury’s Security Management Program. GAO-04-77. Washington, D.C.: November 14, 2003.
Information Security: Computer Controls over Key Treasury Internet Payment System. GAO-03-837. Washington, D.C.: July 30, 2003.
Information Security: Further Efforts Needed to Fully Implement Statutory Requirements in DOD. GAO-03-1037T. Washington, D.C.: July 24, 2003.
Information Security: Continued Efforts Needed to Fully Implement Statutory Requirements. GAO-03-852T. Washington, D.C.: June 24, 2003.
Information Security: Progress Made, but Weaknesses at the Internal Revenue Service Continue to Pose Risks. GAO-03-44. Washington, D.C.: May 30, 2003.
High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003.
Computer Security: Progress Made, But Critical Federal Operations and Assets Remain at Risk. GAO-03-303T. Washington, D.C.: November 19, 2002.

Federal agencies rely extensively on computerized information systems and electronic data to carry out their missions. The security of these systems and data is essential to prevent data tampering, disruptions in critical operations, fraud, and inappropriate disclosure of sensitive information. Concerned with accounts of attacks on systems via the Internet and reports of significant weaknesses in federal computer systems that make them vulnerable to attack, Congress passed the Federal Information Security Management Act (FISMA) in 2002. In accordance with FISMA requirements that the Comptroller General report periodically to the Congress, GAO's objectives in this report are to evaluate (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) the federal government's implementation of FISMA requirements. Pervasive weaknesses in the 24 major agencies' information security policies and practices threaten the integrity, confidentiality, and availability of federal information and information systems.
Access controls were not effectively implemented; software change controls were not always in place; segregation of duties was not consistently implemented; continuity of operations planning was often inadequate; and security programs were not fully implemented at the agencies. These weaknesses exist primarily because agencies have not yet fully implemented strong information security management programs. These weaknesses put federal operations and assets at risk of fraud, misuse, and destruction. In addition, they place financial data at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. Overall, the government is making progress in its implementation of FISMA. To provide a comprehensive framework for ensuring the effectiveness of information security controls, FISMA details requirements for federal agencies and their inspectors general (IG), the National Institute of Standards and Technology (NIST), and OMB. Federal agencies reported that they have been increasingly implementing required information security practices and procedures, although they continue to face major challenges. Further, IGs have conducted required annual evaluations, and NIST has issued required guidance in the areas of risk assessments and recommended information security controls, and has maintained its schedule for issuing remaining guidance required under FISMA. Finally, OMB has given direction to the agencies and reported to Congress as required; however, GAO's analysis of its annual reporting guidance identified opportunities to increase the usefulness of the reports for oversight. While progress has been made in implementing statutory requirements, agencies continue to have difficulty effectively protecting federal information and information systems.
Ms. Chairwoman and Members of the Committee: We are pleased to be here today to discuss bank and thrift supervision and examination. Supervisory and examination procedures today show evidence of lessons learned from the bank and thrift crises of the 1980s and early 1990s. These procedures are the primary basis used by the federal regulatory agencies to assess the risks that banks and thrifts assume and to take actions that are needed to maintain a safe and sound banking system and protect the deposit insurance funds. A combination of regulatory and legislative changes, along with market forces, has expanded the number and scope of activities undertaken by insured depository institutions, particularly the largest ones, and thus the risks that they assume. These expanded activities include offering and/or dealing in a range of nontraditional bank products, such as mutual funds, securities, derivatives, and other off-balance sheet products. The resulting complex institutions represent a major supervisory and regulatory challenge. In keeping with the changes in the banking environment, federal bank and thrift regulators have recently announced that bank examinations will explicitly include an assessment of how effectively banks manage risk and a rating on their sensitivity to risks posed by a variety of market factors. Although we have not yet fully assessed the implementation of most of the recent changes to supervisory and examination policy, they appear to address some of our concerns about examinations in the aftermath of bank failures in the 1980s and early 1990s. Perhaps the most important—yet unanswered—question to ask in assessing changes in bank and thrift supervision is to what extent improvements in the detection of problems can help ensure that regulators take timely and forceful corrective action to prevent or minimize losses to the deposit insurance funds. 
My testimony today will (1) describe the history of the bank and thrift crises of the late 1980s and early 1990s and the legislative response to these crises, (2) highlight supervisory and examination weaknesses we have noted in the past and improvement efforts that have been made or are under way, and (3) identify continuing issues.

From 1980 to 1994, record losses were absorbed by the federal deposit insurance funds. In this period, nearly 1,300 thrifts failed, and 1,617 federally insured banks were closed or received FDIC financial assistance. Losses to deposit insurance funds have been estimated at about $125 billion. Banks and thrifts failed during the 1980s for several reasons. A mismatch between the income from fixed-rate mortgages and the costs of borrowing funds at market rates in competition with nondepository institutions was among the reasons for large losses that led to the failure of thrifts. Banks suffered losses from defaults on loans concentrated in several industries that suffered economic downturns over the decade, including agriculture, real estate, and lending to developing nations. One factor we and others cited as contributing to the problems of both thrifts and banks during this period was excessive forbearance by federal regulators. Regulators had wide discretion in choosing the severity and timing of enforcement actions to correct unsafe and unsound practices. They also had a common philosophy of trying to work informally and cooperatively with troubled institutions. In a 1991 report, we concluded that these conditions had resulted in enforcement actions that were neither timely nor forceful enough to (1) correct unsafe and unsound banking practices or (2) prevent or minimize losses to the insurance funds. The regulators themselves have recognized that their supervisory practices failed to adequately control risky practices that led to numerous thrift and bank failures.
We made specific recommendations for changes to the supervisory process that would help ensure that institutions failing to operate in a safe and sound manner would be subject to timely and forceful supervisory response, including, if necessary, prompt closure. Congress responded to the crises with two major laws. The first, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), was enacted in response to the insolvency of the Federal Savings and Loan Insurance Corporation (FSLIC) and troubles in the thrift industry. In addition to creating the Savings Association Insurance Fund to replace FSLIC, FIRREA created a new thrift industry regulator with increased enforcement authority—the Office of Thrift Supervision. It also authorized FDIC to terminate a bank’s or thrift’s deposit insurance for unsafe and unsound conditions. The second law, the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA), was enacted, in part, because of concerns that the exercise of regulatory discretion during the 1980s did not adequately protect the safety and soundness of the banking system or minimize insurance fund losses. FDICIA contains several safety and soundness provisions based on a simple principle: if a depository institution fails to operate in a safe and sound manner, it should be subject to timely and forceful supervisory response, including, if necessary, prompt closure. Also, FDICIA requires a number of corporate governance and accounting reforms to (1) strengthen corporate governance, (2) improve financial reporting, and (3) aid early identification of safety and soundness problems. Among the corporate governance and accounting reforms, FDICIA establishes generally accepted accounting principles as the standard for all reports to regulators; requires that management and auditors annually report on the financial condition and management of the largest depository institutions, including effectiveness of and compliance with internal controls; and requires that institutions have independent audit committees composed of outside directors. In addition, FDICIA contains provisions for improving regulatory supervision.
FDICIA mandates annual on-site examinations of insured banks and thrifts. Also, consistent with specific recommendations we made, it requires implementation of a “trip wire” approach to limit regulatory discretion in key areas, including capital, by mandating specific regulatory responses to safety and soundness problems. These changes, incorporated in sections 38 and 39 of the Federal Deposit Insurance Act, were intended to increase the likelihood of prompt regulatory action to prevent or minimize loss to the insurance funds. Section 38 mandates increasingly severe regulatory action as an institution’s capital drops to lower levels. Although this requirement should strengthen oversight in several ways, it is inherently limited as a tool for early intervention to correct problems and thus safeguard the insurance funds. This is because impaired capital levels often do not appear until after a bank has experienced problems in other areas, such as asset quality and management. Section 39 allows regulatory action before capital is impaired. However, section 39, as implemented, appears to do little to reduce regulatory discretion. The implementing guidelines and regulations did not (1) establish clear and specific definitions of unsound conditions and practices or (2) link such conditions or practices to specific mandatory regulatory actions. As we noted in our 1996 report, the subjective nature of the implementation continued the wide discretion that regulators had in the 1980s over the timing and forcefulness of enforcement actions. Insufficient evaluation of internal controls: In our past work, we found that examinations did not systematically test critical internal controls, such as compliance with loan underwriting policies. We recommended that a comprehensive review of internal controls be a part of bank and thrift examinations and that the condition of a bank’s or thrift’s system of internal controls receive explicit consideration in a determination of an institution’s examination rating.
Insufficient review of loan quality and loan loss reserves: Effective loan quality assessment is important, because loans generally make up the majority of bank and thrift assets and involve the greatest risk. Determining the adequacy of loan loss reserves is critical because without such a determination, in combination with a proper assessment of loan quality, examiners have no reliable basis to understand an institution’s true financial condition. We recommended that examination policies require a representative sampling of loans and better documentation of loan quality and the development of a methodology to determine the adequacy of loan loss reserves. Weaknesses in detecting and ensuring corrective actions related to insider lending: Loans to insiders—such as bank directors, officers, or principals—should pose no greater risk than transactions with other bank customers. Abusive insider activities can be among the most insidious causes of deterioration in a bank’s health. In 1994, we reported that examiners faced numerous impediments to determining the full extent of insider problems at banks and that such problems were not always corrected as a result of examinations. We recommended that bank regulators review insider activities in their next examination of each bank, partly by comparing data provided during the examination with information from other sources. We also recommended that federal bank regulators ensure that all directors understand their responsibility for seeing that effective corrective action is taken. Insufficient assessment of actual and potential risks of bank holding company activities to insured bank subsidiaries: In our reports, we have found that transactions between a bank holding company and its insured bank subsidiary were not always thoroughly reviewed.
Such transactions include loans from the bank to other, nonbank subsidiaries; fees charged by the bank holding company to the bank subsidiary; and asset transfers from nonbank subsidiaries to the bank subsidiary. We recommended that the supervisors develop and require mandatory procedures for assessing the actual and potential risks to insured bank subsidiaries of bank holding company activities. We also regard thorough documentation and supervisory review of examination work as essential, because examiners have broad discretion and must exercise considerable judgment in planning and conducting examinations and in drawing conclusions about bank safety and soundness. Our findings led us to recommend that regulators establish policies to ensure sufficient documentation of the analysis that underlies the examination report and thorough supervisory review of all examination and inspection procedures. Finally, we have noted in past reports that improved coordination among federal and state banking supervisors could result in more efficient and effective use of examination resources. Coordination is also critical for the supervision of large banking institutions in that it could foster consistency in examinations and reduce regulatory burden. Regulators have made a number of changes in an effort to improve their examinations since the bank and thrift crises of the late 1980s, and I would like to highlight some that seem most significant. In general, these changes appear appropriate and consistent with recommendations we have made. However, we have not fully assessed the effectiveness of their implementation. When evaluating these changes, it is also important to note that they have occurred during favorable economic conditions that have contributed to strong bank and thrift profits. The most important test for the changes will be whether the information they provide in examination reports would lessen the severity of problems for banks and thrifts during any future economic downturn.
One of the most significant efforts at improvement involves changes in examinations to account for a dynamic banking environment in which institutions can rapidly reposition their portfolio risk exposures. Regulators have recognized that in such an environment, periodic assessments of the condition of financial institutions based on transaction testing alone are not sufficient for ensuring the continued safe and sound operation of financial institutions. To ensure that institutions have the internal controls and processes in place necessary to identify, measure, monitor, and control risk exposures that can change rapidly, the approach regulators are taking to the examination process is evolving to emphasize evaluations of the appropriateness of such internal controls and processes instead of relying heavily on transaction testing. Regulators have changed the system they use to rate the safety and soundness of banks and thrifts to reflect an increasing emphasis on risk management and internal controls. Until January 1997, examiners used the rating system known as CAMEL (capital adequacy, asset quality, management and administration, earnings, and liquidity). Examiners were instructed in 1996 to give greater emphasis to the adequacy of an institution’s risk management processes, including its internal controls, when evaluating management under the CAMEL system. On December 9, 1996, the Federal Financial Institutions Examination Council added an “S” to create a new CAMELS rating, with the S representing an institution’s sensitivity to market risk. The S rating component is to represent the result of a combined assessment of both the institution’s level of market risk and its ability to manage market risk. Regulators expect the sophistication of an institution’s risk management system to be commensurate with the complexity of its holdings and activities and appropriate to its specific needs and circumstances. I should mention that in regulators’ examinations of U.S.
branches of foreign banks, an emphasis on risk management and internal controls began in 1994 with implementation of a rating system known as ROCA (risk management, operational controls, compliance, and asset quality). As I noted earlier, we have recommended that the condition of a bank’s or thrift’s system of internal controls receive explicit consideration in a determination of the institution’s examination rating. We have also recommended that the regulators develop and require minimum mandatory procedures to assess the actual and potential risks of bank holding company activities to insured bank subsidiaries. Increased attention to internal controls and risk management, if effectively implemented, should help enhance the regulators’ ability to keep pace with a changing banking environment. The supervisors’ effective implementation of these initiatives is essential to the success of their examination programs. Regulators also told us that they believe that these initiatives complement the prompt corrective action policies mandated by FDICIA. Other improvement efforts I’d like to highlight that we regard as consistent with our earlier recommendations include the following: Improvements in examination guidelines to detect insider lending problems: The recommendations we made in this area have been adopted by the Federal Reserve Board, FDIC, and the OCC. Specifically, examination guidance now explicitly calls for reviewing insider activities and ensuring that directors understand their responsibility for effective corrective action. Improvements under way in examination documentation and supervisory review of examination findings: Federal banking regulators have described relevant improvement efforts. For example, according to the Federal Reserve’s Framework for Risk-Focused Supervision of Large Complex Institutions, the Federal Reserve has been working to refine its standards for workpapers, especially for examinations of state member banks.
Also, the Federal Reserve and FDIC have recently implemented an automated examination process to standardize documentation. Federal Reserve officials said that about 25 U.S. states, to date, have also indicated they will begin using this standardized work process. In addition, OCC issued a new policy in February 1997 describing workpaper requirements for all of its supervisory activities. Agreements to coordinate examinations by federal and state banking regulators: The Federal Reserve Board, FDIC, and state banking departments completed a single Nationwide State/Federal Supervisory Agreement in November 1996 covering state-chartered banks that open branches in other states. This agreement modifies an April 1996 State/Federal Protocol and Model Agreement by including the Federal Reserve Board as a signatory. Together, these agreements set out, among other things, the goals of supervision, division of responsibilities among the various regulators, and common examination and application processes. Federal Reserve and FDIC officials told us that implementation to date has been successful. These officials also said the examination process has been improved by assigning each institution a single case manager who is responsible for coordinating all examinations of that institution. Changes in examination procedures and in the banking industry will lead to new challenges for the supervisory agencies. A key task will be ensuring consistency in the supervisory and examination policies and practices of the agencies. Further, the agencies face the tasks of ensuring staff expertise and examining increasingly complex banking organizations. Nontraditional lines of business and interstate branching will bring increasing numbers of depository institutions under the jurisdiction of several regulatory agencies. 
One result of this will be a more complex task of ensuring that the regulations and enforcement actions of multiple agencies are consistent and that their examinations provide a complete picture of the banks’ and thrifts’ operations. The division of responsibilities among the bank and thrift regulatory agencies is not generally based on specific areas of expertise, functions, or activities of either the regulatory agency or the banks for which they are responsible. Rather, responsibilities are divided according to the type of charter—thrift or bank, national or state—and whether banks are members of the Federal Reserve System. Some analysts, bank industry representatives, and agency officials credit the current structure with encouraging financial innovations and providing checks and balances to guard against arbitrary oversight decisions or actions. We and others, however, have identified concerns that arise from having four agencies with similar responsibilities. These concerns include possible inconsistent treatment of institutions in examination policies and practices, enforcement actions, and regulatory standards and decisionmaking. In the case of bank holding companies, with the Federal Reserve responsible for the bank holding company and other federal regulators responsible for the banking subsidiaries, divided supervisory responsibility may hinder regulators from obtaining a complete picture of an entire banking organization. Although we recognize that only Congress can make the policy judgments in deciding whether and how to restructure the bank oversight system, we have recommended that Congress reduce the number of agencies with primary responsibilities for bank oversight. If the current structure, with multiple agencies, continues, coordinating their activities and ensuring consistency in their regulations and enforcement actions will remain difficult issues. 
The regulatory agencies have several initiatives under way that are intended to better coordinate their activities and ensure consistency, such as the automated examination process developed by the Federal Reserve and FDIC. Ultimately, these initiatives should be judged by their results, particularly the quality of the examinations. Another challenge that the regulators will continue to face is the need to build and maintain the expertise required for supervising these more complex organizations. Federal regulators have hired specialists, such as economists with technical expertise in the quantitative methods and economic models underlying banks’ risk management systems and specialists in electronic banking, bank information systems, and risk management. Further, the agencies have a number of initiatives to improve the scope and quality of information that is provided to field examiners to help them understand banking activities and the risks that banks undertake. In addition, the supervisory agencies have recently completed training on the risk-focused examination process and the new CAMELS rating system. Previously, the Federal Reserve took steps to enhance examiner training on internal controls by developing an Internal Controls School in 1995 that was designed initially for examiners of U.S. branches and agencies of foreign banks and was later expanded to meet the needs of examiners of U.S. domestic banks. Federal Reserve officials told us that they also developed a training seminar in 1996 for examiners and in-house international supervisory staff that emphasizes ensuring the appropriate supervisory strategy for the U.S. operations of foreign banks. With the passage of interstate banking legislation and the increased reliance of banks on lines of business other than traditional lending, we anticipate that the task of bank management will become more difficult.
The bank regulatory agencies will face a similar challenge—ensuring that their examinations and enforcement strategies lead to sound management practices as banks increasingly rely on nontraditional lines of business. Since large, complex bank organizations are likely to come under the regulatory jurisdiction of several agencies, the problem of coordination that I mentioned earlier will be relevant for these organizations. Several of our recent reports point to other types of issues that are likely to become increasingly common as banks move into more complex lines of business. For example, in our work comparing bank regulators’ oversight of securities activities with that of securities regulators, we recommended that the bank regulators work with the Securities and Exchange Commission and the National Association of Securities Dealers to develop consistent standards for investor protection and to ensure the safety and soundness of banks that are engaged in some form of securities business. In our report on the operations of foreign bank organizations in the United States, we noted deficiencies in the internal controls of these organizations. Although federal bank regulators are aware of these deficiencies and have initiatives under way that they believe will address these problems, we noted that the regulators do not have plans to evaluate the results of these initiatives. We recommended that the Federal Reserve Board develop a strategy for evaluating the outcomes of the efforts to improve the internal controls of foreign bank organizations. It will be important for the regulatory officials to develop a strategy, including objective measures, for assessing the progress they are making through their efforts to improve the examination process and to ensure that the procedures and systems necessary to collect the data relevant to those measures are in place and operating. Such objective evaluations should be useful in determining whether the examinations are achieving their intended results or whether additional initiatives may be needed.
At the same time, we are encouraged by some of the changes that the bank regulatory agencies have made in their examination procedures, since they appear to address a number of the shortcomings that we had identified in our earlier reports. As one official noted, the small number of banks in difficulty has provided the regulatory agencies with an opportunity to improve their operations. However, the business of banking has been changing at the same time, and banks are taking on new risks. Also, because of the differences in the responsibilities and the examination and enforcement approaches among regulators, such as those for the securities activities of depository institutions, a key question is whether improvements will be uniformly adopted by all regulators and consistently implemented. Whether current examination strategies provide an adequate basis for the regulatory agencies to anticipate problems and take appropriate and prompt corrective actions to address those problems, especially during any future economic downturn, remains unknown. Ms. Chairwoman, this concludes my statement. My colleagues and I would be pleased to respond to any questions you may have. Foreign Banks: Internal Control and Audit Weaknesses in U.S. Branches (GAO/GGD-97-181, Sept. 29, 1997). Financial Regulation: Bank Modernization Legislation (GAO/T-OCE/GGD-97-103, May 7, 1997). Bank and Thrift Regulation: Implementation of FDICIA’s Prompt Regulatory Action Provisions (GAO/GGD-97-18, Nov. 21, 1996). Bank Oversight: Fundamental Principles for Modernizing the U.S. Structure (GAO/T-GGD-96-117, May 2, 1996). Financial Regulation: Modernization of the Financial Services Regulatory System (GAO/T-GGD-95-121, Mar. 15, 1995). Bank Insider Activities: Insider Problems and Violations Indicate Broader Management Deficiencies (GAO/GGD-94-88, Mar. 30, 1994). Bank Regulation: Consolidation of the Regulatory Agencies (GAO/T-GGD-94-106, Mar. 4, 1994).
Bank and Thrift Regulation: FDICIA Safety and Soundness Reforms Need to Be Maintained (GAO/T-AIMD-93-5, Sept. 23, 1993). Bank and Thrift Regulation: Improvements Needed in Examination Quality and Structure (GAO/T-AFMD-93-2, Feb. 16, 1993). Bank Examination Quality: OCC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-14, Feb. 16, 1993). Bank and Thrift Regulation: Improvements Needed in Examination Quality and Regulatory Structure (GAO/AFMD-93-15, Feb. 16, 1993). Bank Examination Quality: FDIC Examinations Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-12, Feb. 16, 1993). Bank Examination Quality: FRB Examinations and Inspections Do Not Fully Assess Bank Safety and Soundness (GAO/AFMD-93-13, Feb. 16, 1993). Banks and Thrifts: Safety and Soundness Reforms Need To Be Maintained (GAO/T-GGD-93-3, Jan. 27, 1993). Bank Supervision: Prompt and Forceful Regulatory Actions Needed (GAO/GGD-91-69, Apr. 15, 1991). GAO discussed bank and thrift supervision and examination.
GAO noted that: (1) bank supervision and examination today show evidence of lessons learned from the bank and thrift crises of the 1980s and early 1990s; (2) these procedures are the primary basis for federal regulatory agencies to assess the risks that banks and thrifts assume and to take actions to maintain a safe and sound banking system and protect deposit insurance funds; (3) one critical lesson of the earlier crises was that excessive regulatory forbearance contributed to the extent of the crises; (4) the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) based regulatory practices on a simple principle: if a depository institution fails to operate in a safe and sound manner, it should be subject to timely and forceful supervisory response, including, if necessary, prompt closure; (5) FDICIA also required that banks reform their corporate governance and accounting practices and that the regulatory agencies improve their supervision of insured banks and thrifts; (6) in a November 1996 report, however, GAO noted that questions remain about the effectiveness of FDICIA's trip-wire provisions which are intended to limit regulatory discretion; (7) as implemented, the trip-wire that enables regulatory action at the early stage of problems in a bank does little to limit regulatory discretion; (8) in several reports in the early 1990s, GAO also noted limitations in the safety and soundness examinations conducted by the regulatory agencies; (9) the limitations included a lack of comprehensive internal control assessments, insufficient review of loan quality and loan loss reserves, weaknesses related to insider lending, and insufficient assessment of bank subsidiaries; (10) regulators have made a number of changes in an effort to improve their examinations; (11) the changes respond, in part, to the dynamic banking environment in which institutions can rapidly reposition risk exposures; (12) to ensure that banks and thrifts have the managerial ability 
and internal control structure to effectively manage risk, the examination process is evolving to put greater emphasis on risk management and internal controls; and (13) in its recent report on foreign banking organizations operating in the United States, GAO noted that regulators have begun to put greater emphasis on risk management processes and operational controls in examinations of these organizations.
The sale or transfer of U.S. defense items to friendly nations and allies is an integral component of both U.S. national security and foreign policy. The U.S. government authorizes the sale or transfer of military equipment, including spare parts, to foreign countries either through government-to-government agreements or through direct sales from U.S. manufacturers. The Arms Export Control Act and the Foreign Assistance Act of 1961 authorize the DOD foreign military sales program. From 1993 through 2002, DOD delivered over $150 billion worth of services and defense articles to foreign countries through foreign military sales programs administered by the military services. The articles sold include classified and controlled cryptographic spare parts. The Department of State sets overall policy concerning which countries are eligible to participate in the DOD foreign military sales program. DOD identifies military technology that requires control when its transfer to potential adversaries could significantly enhance a foreign country’s military or war-making capability. Various agencies such as the Department of State and DOD are responsible for controlling, in part, the transfer or release of military technology to foreign countries. The Defense Security Cooperation Agency, under the direction of the Under Secretary of Defense for Policy, has overall responsibility for administering the foreign military sales program, and the military services generally execute the sales agreements with the individual countries. A foreign country representative initiates a request by sending a letter to DOD asking for such information as the price and availability of goods and services, training, technical assistance, and follow-on support. Once the foreign customer decides to proceed with the purchase, DOD prepares a Letter of Offer and Acceptance stating the terms of the sale for the items and services to be provided. 
After this letter has been accepted, the foreign customer is generally required to pay, in advance, the amounts necessary to cover the costs associated with the services or items to be purchased from DOD and then is allowed to request spare parts through DOD’s supply system. Generally, the military services use separate automated systems to process requisitions from foreign countries for spare parts. While the Air Force has retained responsibility for its system, responsibility for the Army’s and the Navy’s systems was transferred to the Defense Security Assistance Development Center in October 1997. The Center, which is part of the Defense Security Cooperation Agency, has overall responsibility for providing system information technology maintenance support, such as testing the system. For blanket orders, the systems use various codes and item identifiers to restrict the spare parts available to the countries. In cases where the systems validate a requisition, the requisition is sent to a supply center to be filled and shipped. In cases where the systems reject a requisition, the requisition is sent to a country manager for review. The country manager will either validate the requisition and forward it to the supply center to be filled and shipped or reject the requisition, in which case the requisition is canceled. Our reviews showed that the Army, the Navy, and the Air Force were not testing their automated systems to ensure that the systems were accurately reviewing and approving blanket order requisitions for compliance with restrictions and operating in accordance with foreign military sales policies. Our tests of the services’ automated systems used to manage foreign countries’ requisitions for spare parts made through blanket orders showed that classified and controlled spare parts that the services did not want released were being released to countries. 
For example, we identified 5 of 38 blanket order requisitions through which the Navy’s system approved and released 32 circuit card assemblies, which are controlled cryptographic spare parts, to foreign countries. According to Defense Security Assistance Development Center officials, who are responsible for this portion of the Navy’s system, the system was not programmed to review the controlled cryptographic item codes, and as a result, the system automatically approved and released the parts requested by the foreign countries. Navy and DOD officials were unaware that the system was not reviewing controlled cryptographic parts prior to their release to foreign countries. In another example, the Air Force’s system used controls that were based on supply class restrictions; these controls were ineffective and resulted in requisitions being erroneously approved for shipment. Included in an item’s national stock number is a four-digit federal supply class, which may be shared by thousands of items. The national stock number also contains a nine-digit national item identification number that is unique for each item in the supply system. We found that countries requisitioning parts under the Air Force’s system could obtain a classified or controlled spare part by using an incorrect, but unrestricted, supply class with an item’s correct national item identification number. The Air Force was unaware of this situation until our test of the system identified the problem. In response to our work, the Air Force corrected the problem. GAO’s internal control standards require periodic testing of new and revised software to ensure that it is working correctly, while the Office of Management and Budget’s internal control standards require periodic reviews to determine how mission requirements might have changed and whether the information systems continue to fulfill ongoing and anticipated mission requirements.
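The supply-class weakness described above can be illustrated with a minimal sketch. The codes, data, and function names here are hypothetical; the point is that a screen keyed to the four-digit federal supply class as submitted on a requisition can be bypassed, while a screen keyed to the unique nine-digit national item identification number cannot.

```python
# Hypothetical sketch of the supply-class bypass. A national stock
# number (NSN) is 13 digits: a 4-digit federal supply class (FSC)
# followed by a 9-digit national item identification number (NIIN).
# The restricted codes below are made up for illustration.

RESTRICTED_FSC = {"5810"}          # a supply class treated as controlled
RESTRICTED_NIIN = {"012345678"}    # the controlled item itself

def flawed_screen(requisition_nsn):
    """Screens on the FSC as submitted by the requisitioner; returns
    True if the requisition would be approved for release."""
    fsc = requisition_nsn[:4]
    return fsc not in RESTRICTED_FSC

def corrected_screen(requisition_nsn):
    """Screens on the unique NIIN, which identifies the actual item
    regardless of the FSC supplied on the requisition."""
    niin = requisition_nsn[4:]
    return niin not in RESTRICTED_NIIN

controlled_item = "5810" + "012345678"   # correct NSN for the item
bypass_attempt = "9999" + "012345678"    # wrong but unrestricted FSC,
                                         # correct NIIN
```

Under the flawed screen, the bypass attempt is approved even though it requests the same controlled item; the corrected screen rejects both forms, because the NIIN alone determines which item ships.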
The importance of testing and reviewing systems to ensure that they are operating properly is highlighted in the Federal Information Security Management Act of 2002. The act requires periodic testing and evaluation of the effectiveness of information security controls and techniques. Moreover, the act requires agencies, as part of their information security programs, to include a process for planning, implementing, evaluating, and documenting remedial actions to address deficiencies. Under guidance from the Office of Management and Budget, agencies are to develop a Plan of Actions & Milestones to report on the status of remediation efforts. This plan is to list information security weaknesses and to show estimated resource needs or other challenges to resolving them, key milestones and completion dates, and the status of corrective actions. In commenting on our prior reports, DOD either concurred or partially concurred with our recommendations for testing the services’ requisition-processing systems. The department, however, does not have a plan specifying the remedial actions to be taken to implement these recommendations. If actions are not taken to implement testing and reviews, the potential benefits of these controls in preventing the inadvertent release of classified or controlled spare parts may not be realized. Regarding our recommendation to periodically test the Army’s system, DOD concurred and stated that testing of the Army system and its logic would occur, given the funding and guidance required to do so. In response to our recommendation to periodically test the Navy’s system, DOD concurred. Concerning our recommendation to periodically test the Air Force’s system, DOD partially concurred and stated that a program was being developed to test new modifications to the Air Force’s system and that testing of old modifications would be an ongoing effort.
Testing only the modifications to the Air Force’s system, as DOD commented, will not necessarily ensure that the system’s logic is working correctly. Subsequently, the Air Force concurred with our recommendation and reported that it had modified its system and would be conducting annual tests of the system’s logic. The Defense Security Cooperation Agency and the military services are developing a new automated system, the Case Execution Management Information System, to process foreign military sales requisitions. The new system will replace the services’ existing legacy systems with a standard DOD system. DOD expects to deploy the new system in fiscal year 2007. Internal control standards requiring testing will be applicable to the new system. Our reviews showed that the Navy’s and the Air Force’s systems allowed country managers, who are responsible for managing the sale of items to foreign countries, to override system decisions not to release to foreign countries classified or controlled parts that are requisitioned under blanket orders. We identified instances where Navy and Air Force country managers overrode the systems’ decisions without documenting their reasons for doing so. For example, of the 123 Air Force requisitions we reviewed, the Air Force’s system identified 36 requisitions for country manager review. For 19 of the requisitions, the managers overrode the system’s decisions and shipped classified and controlled spare parts without documenting their reasons for overriding the system. The managers we queried could not provide an explanation for overriding the system. In 1999, the Army modified its system to reject requisitions that are made under blanket orders for classified or controlled parts. As a result, Army country managers were precluded from manually overriding the Army system’s decisions. 
Compared with the Navy’s and the Air Force’s systems, the Army’s system provides more stringent protection against releasing classified or controlled parts that are not authorized for release under blanket orders to foreign countries. Preventing the inadvertent release of classified and controlled spare parts that are not authorized for release under blanket orders to foreign countries deserves the highest level of management attention. The preemptive nature of testing and reviewing systems allows this internal control to identify system weaknesses prior to the inadvertent release of classified or controlled spare parts. Had the services conducted periodic tests of their systems, the instances of releasing spare parts that DOD did not want released that we identified in our reports could have been significantly reduced, if not eliminated. Developing effective corrective action plans is key to ensuring that remedial action is taken to address significant information-system internal control deficiencies. We believe the department could demonstrate its commitment to addressing this systemic weakness by providing specific information on its planned remedial actions. Documenting country managers’ decisions to override system decisions is a control that could help prevent the release of classified or controlled parts that are not authorized for release under blanket orders to a foreign country. However, modifying systems, as the Army did, to reject requisitions that are made under blanket orders for classified or controlled parts and to preclude country managers from manually overriding system decisions would provide more stringent protection against releasing classified or controlled parts that are not authorized for release under blanket orders to a foreign country. 
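The three designs discussed above differ in how a blanket-order requisition for a restricted part is dispositioned. The sketch below compares them; the policy labels, return strings, and the documented-waiver variant (reflecting DOD's suggested option of requiring a documented waiver for any release) are hypothetical.

```python
# Hypothetical comparison of the override policies described above:
# "army" rejects restricted requisitions outright with no override;
# "navy_af" permits a manager override with no documentation (the
# weakness we identified); "documented" permits release only when the
# manager records a justification for the waiver.

def disposition(policy, is_restricted, manager_overrides=False,
                justification=None):
    """Return the disposition of a blanket-order requisition."""
    if not is_restricted:
        return "released"
    if policy == "army":
        return "rejected"          # no manual override possible
    if policy == "navy_af":
        return "released" if manager_overrides else "rejected"
    if policy == "documented":
        if manager_overrides and justification:
            return "released with documented waiver"
        return "rejected"
    raise ValueError("unknown policy: %s" % policy)
```

The comparison makes the control trade-off explicit: the Army design prevents release entirely, while the documented-waiver variant preserves flexibility but forces every override to leave an audit trail.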
To reduce the likelihood of releasing classified and controlled spare parts that DOD does not want to be released to foreign countries, we recommend that you take the following three actions: Direct the Under Secretary of Defense for Policy, in conjunction with the Secretaries of the Army and the Navy, and direct the Secretary of the Air Force to develop an implementation plan, such as a Plan of Actions & Milestones, specifying the remedial actions to be taken to ensure that applicable testing and review of the existing requisition-processing systems are conducted on a periodic basis. Direct the Under Secretary of Defense for Policy, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, to determine whether current plans for developing the Case Execution Management Information System call for periodic testing and, if not, provide for such testing. Direct the Under Secretary of Defense for Policy, in conjunction with the Secretary of the Navy, and direct the Secretary of the Air Force to determine if it would be beneficial to modify the Navy’s and the Air Force’s requisition-processing systems so that the systems reject requisitions for classified or controlled parts that foreign countries make under blanket orders and preclude country managers from manually overriding system decisions, and to modify their systems as appropriate. The Director of the Defense Security Cooperation Agency provided written comments on a draft of this report for DOD and partially concurred with one recommendation and concurred with two recommendations. DOD partially concurred with our recommendation to develop a Plan of Actions & Milestones specifying the remedial actions to be taken to ensure that applicable testing and review of the existing requisition-processing systems are conducted on a periodic basis.
DOD stated that the services have made appropriate changes to their systems in response to our prior reports and routine maintenance and have tested the applications accordingly. DOD also stated that, in lieu of a formal Plan of Actions & Milestones, the military services, in concert with DOD, can institute a practice of testing the applications on an annual basis, if those applications are not otherwise changed and tested as a matter of routine maintenance, to satisfy the requirement for periodic testing. We agree that alternatives to a formal Plan of Actions & Milestones may address the needed remedial actions. However, we believe any alternative should specify the remedial actions to be taken to ensure that applicable testing and review of the existing requisition-processing systems are conducted on a periodic basis, and we have modified our recommendation accordingly. DOD concurred with our recommendation regarding the Case Execution Management Information System. DOD stated that the system’s software program testing will adhere to software-testing standards in place at the time of implementation, including testing to ensure that the functionality of existing software code is not changed when the coding is modified or enhanced. DOD also concurred with our recommendation to determine if it would be beneficial to modify the Navy’s and the Air Force’s requisition-processing systems so that the systems reject requisitions for classified or controlled parts that foreign countries make under blanket orders and preclude managers from manually overriding system decisions, and to modify their systems as appropriate. DOD stated that it will review the Navy’s and the Air Force’s business processes, as well as the requisition-processing systems. 
DOD noted that a better option may be to mandate that country managers seek the appropriate waivers in accordance with DOD policy to allow the release of a classified or controlled spare part under a blanket order; provide sufficient documentation for doing so; and make sure it is done as the exception, not the rule. We agree that this option would enhance the Navy’s and the Air Force’s controls and could help prevent the release of classified or controlled parts that are not authorized for release under a blanket order to a foreign country. DOD also provided technical and editorial comments, which we have incorporated as appropriate. DOD’s comments are reprinted in appendix I of this letter. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and to the House Committee on Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. Because agency personnel serve as the primary source of information on the status of recommendations, we request that DOD also provide us with a copy of DOD’s statement of action to serve as preliminary information on the status of open recommendations. Please provide me with a copy of your responses. My e-mail address is solisw@gao.gov. We are sending copies of this report to Senator Tom Harkin; the Senate and House Committees on Armed Services; the Secretaries of the Army, the Navy, and the Air Force; the Director, Office of Management and Budget; and other interested congressional committees. The report is also available on GAO’s home page at http://www.gao.gov. If you have any questions, please call me at (202) 512-8365. Key contributors to this report were Thomas Gosling, Louis Modliszewski, and R. K. Wild. 
Key contributors to our prior reports are listed in those reports.

The following are GAO’s comments on the Department of Defense’s letter dated October 8, 2004.

1. In our report, we modified the text to clarify that the parts were not authorized for release under blanket orders. The title of this report is consistent with the titles of our prior reports on this subject, as listed on page 1, and we did not modify it as DOD suggests.

2. In response to DOD’s comments, we modified the text to state that DOD does not have a plan specifying the remedial actions to be taken to implement our recommendations.

Under Department of Defense (DOD) policy, the export of classified and controlled spare parts must be managed to prevent their release to foreign countries that may use them against U.S. interests. GAO has issued a series of reports on the foreign military sales program in which weaknesses in the military services' internal controls were identified.
This report highlights (1) a systemic problem that GAO identified in the internal controls of the military services' requisition-processing systems and (2) a potential best practice that GAO identified in one service that provides an additional safeguard over foreign military sales of classified and controlled parts. At the time GAO conducted its reviews, the Army, the Navy, and the Air Force were not testing their automated requisition-processing systems to ensure that the systems were accurately reviewing and approving blanket order requisitions for compliance with restrictions on the sale of classified and controlled spare parts and operating in accordance with foreign military sales policies. Blanket order requisitions are based on agreements between the U.S. government and a foreign country for a specific category of items for which foreign military sales customers will have a recurring need. GAO's tests of the services' requisition-processing systems showed that classified and controlled spare parts that the services did not want to be released to foreign countries under blanket orders were being released. GAO's internal control standards require periodic testing of new and revised software to ensure that it is working correctly, while the Office of Management and Budget's internal control standards require periodic reviews to determine how mission requirements might have changed and whether the information systems continue to fulfill ongoing and anticipated mission requirements. DOD either concurred or partially concurred with GAO's recommendations for testing the requisition-processing systems. The department, however, does not have a plan specifying the remedial actions to be taken to implement these recommendations. 
Internal control standards requiring testing also will be applicable to the Case Execution Management Information System, an automated requisition-processing system that DOD and the military services are developing to replace the existing individual military service systems. The Army's automated requisition-processing system incorporates a potential best practice that helps to prevent the release of classified or controlled parts that are not authorized under blanket orders to foreign countries. The automated systems used by the Navy and the Air Force allow country managers to override system decisions not to release to foreign countries classified or controlled parts that are requisitioned under blanket orders. GAO found instances where Navy and Air Force country managers overrode the systems' decisions without documenting their reasons for doing so. In contrast, the Army's system automatically cancels requisitions that are made under blanket orders for classified or controlled parts. Because the requisitions are automatically canceled, country managers do not have an opportunity to override the system's decisions. Compared with the Navy's and the Air Force's systems, the Army's system provides more stringent protection against releasing classified or controlled parts that are not authorized under blanket orders to foreign countries.
For more than a decade, we have identified weak contract management and the lack of reliable financial and performance information as posing significant challenges to NASA’s ability to effectively run its largest and most costly programs. While NASA has made some progress in addressing its contract management weaknesses through improved management controls and evaluation of its procurement activities, NASA has struggled to implement a modern, integrated financial management system. NASA made two efforts in the past to improve its financial management processes and develop a supporting system intended to produce the kind of accurate and reliable information needed to manage its projects and programs and produce timely, reliable financial information for external reporting purposes. However, both of these efforts were eventually abandoned after a total of 12 years and a reported $180 million in spending. In April 2000, NASA began its third attempt at modernizing its financial management processes and systems. With its current financial management system effort, known as the Integrated Enterprise Management Program (IEMP), NASA has invested in an enterprise resource planning (ERP) solution that is intended to meet the information needs of both internal and external customers and to promote standardization and integration of business processes and systems across the agency. NASA plans to complete IEMP by 2009 for a total cost of over $800 million. As of March 2007, NASA had deployed the following nine IEMP functional components: core financial, Travel Manager, ERASMUS, resume management, position description management, budget formulation, Agency Labor Distribution System, Project Management Information Improvement, and contract management. Early in fiscal year 2007, NASA also implemented an updated version of the core financial software, which includes several critical enhancements to the previous core financial software.
According to NASA, the core financial upgrade provided the opportunity for it to leverage the best practices inherent in the new version and allowed it to redesign or enhance business processes. NASA updated its core financial system in order to improve compliance with Federal Financial Management System Requirements, Federal Accounting Standards, and the Federal Financial Management Improvement Act, and to respond to GAO recommendations. According to NASA, the software upgrade has enabled it to implement critical process changes related to financial tracking and reporting, support the goal of achieving financial management integrity, and provide better project management information. NASA claims that the updated software has also provided other enhancements, which should contribute to NASA’s goals of achieving a clean audit opinion and achieving a “Green” rating on the President’s Management Agenda scorecard for “improved financial performance.” Other IEMP modules that NASA plans to implement in the future include aircraft management and asset management. As discussed previously, we issued a series of four reports in April and November 2003 that detailed weaknesses in NASA’s acquisition and implementation strategy for IEMP in general and the core financial module in particular. The core financial module, which utilizes SAP software and is considered the backbone of IEMP, was implemented in June 2003. Because NASA did not follow key best practices or disciplined processes for acquiring and implementing IEMP, we reported that NASA had made a substantial investment in a financial management system that fell far short of its stated goal of providing meaningful, reliable, and timely information to support effective day-to-day program management and external financial reporting. We noted problems in the areas of requirements development, requirements management, testing, performance metrics, risk management, and business process reengineering. 
Neither program managers nor cost estimators were involved in the process of defining requirements for the core financial module. As a result, the module was not designed to maintain the level of detailed cost information needed by program managers to perform contract oversight and by cost estimators to develop reliable cost estimates. The requirements management methodology and tools used to implement the core financial module did not result in requirements that were consistent, verifiable, and traceable or that contained enough specificity to minimize requirement-related defects. Because NASA had not effectively implemented disciplined requirements management processes, we reported that it had increased the risk that it would not be able to effectively identify and manage the detailed system requirements necessary to properly acquire, implement, and test the core financial module. NASA’s ability to effectively test the core financial module was limited because of the lack of complete and specific requirements. Industry best practices, as well as NASA’s own system planning documents, indicated that detailed system requirements should be documented to serve as the basis for effective system testing. Because the link between these two key processes was not maintained, NASA had little assurance that all requirements were properly tested. NASA also did not effectively capture the type of metrics that could have helped the agency understand the effectiveness of its IEMP management processes. For example, NASA did not employ metrics to help it identify and quantify weaknesses in its requirements management processes. Because of its lack of performance metrics, NASA was unable to understand (1) its capabilities to manage IEMP projects; (2) how its process problems affected cost, schedule, and performance objectives; and (3) the corrective actions needed to reduce the risks associated with the problems identified. 
NASA did not consistently identify known and potential risks for the core financial module. Risk management processes are needed to ensure that a project’s risk is kept at an acceptable level by taking actions to mitigate risk before it endangers the project’s success. NASA did not use the implementation of IEMP to fundamentally change the way it did business. Instead of reengineering its business processes, NASA automated many of its existing ineffective business processes. First, NASA did not design the system to accommodate the information needed to adequately oversee its contracts and programs and to prepare credible cost estimates. Second, NASA did not reengineer its contractor cost reporting processes and therefore did not always obtain sufficient contract cost information needed by program managers to oversee contracts and needed by financial managers for external financial reporting. When we last reported on NASA’s IEMP effort, in September 2005, NASA had begun to implement a number of recommendations from our earlier reports—including steps toward implementing the disciplined processes necessary to manage IEMP. For example, we reported that NASA had engaged program managers to identify program management needs, implemented new requirements management and testing processes, and developed metrics to evaluate the effectiveness of its system implementation processes. However, at that time, the agency had not implemented several of our other recommendations, including the following:

Properly define and document system requirements for already-deployed IFMP modules, including the core financial module. This is important not only because it would affect the way the core financial module functions but also because it would affect NASA’s ability to implement future upgrades and other modules expected to interface with the core financial module.

Enhance regression testing processes and performance metrics.

Develop a risk mitigation plan.
Reengineer its business processes so that the commercial off-the-shelf software products selected for IEMP could support these processes. At the time of our last report, NASA was making plans to reengineer some of its business processes. However, because the agency was in the very early planning stage of implementing this recommendation, the details for how NASA would accomplish its objectives were still vague. Overall, our September 2005 report concluded that it was not possible to assess whether NASA’s plans would accomplish its stated goal of enhancing the core financial module to provide better project management information for decision-making purposes. Since September 2005, when we last reported on NASA IEMP implementation efforts, NASA has implemented some of the disciplined processes needed to manage IEMP. Specifically, NASA has, as we previously recommended, implemented more effective requirements management and testing processes, improved its performance metrics program related to tracking system defects, and developed an IEMP risk management strategy. In addition, NASA has developed quantitative entry and exit criteria for moving from one phase of an IEMP project to another—a recognized industry best practice. However, weaknesses in the areas of requirements development and project scheduling have undermined some of the progress made in other key areas. As a result, NASA struggled to complete required systems testing and deliver the agency’s core financial upgrade. Ultimately, through the heroic efforts of the core financial upgrade team, NASA delivered the upgrade within about 2 weeks of the October 30, 2006, planned completion date. According to NASA officials, the system is functioning as expected for most transactions. However, until the end of March 2007, the upgrade was in a “stabilization” phase as NASA worked on correcting a number of system errors, including posting errors for certain types of transactions. 
Because the upgrade was still quite new and NASA was continuing to stabilize the system, we were unable to determine the significance of these weaknesses. Since our September 2005 report, NASA has used its new requirements management process—which documents sufficiently detailed requirements that are traceable from the highest (most general) level to the lowest (most detailed) level in NASA’s requirements management system—for both the core financial upgrade and the contract management module. For example, we selected several requirements for both the core financial module and the contract management module and validated that the requirements management process (1) clearly linked related requirements consistent with industry standards and (2) contained the information necessary to understand how each requirement should be implemented and tested in a quantitative manner. Because NASA developed and is now using a disciplined requirements management process, it has the quantitative information necessary to support disciplined testing processes. NASA’s disciplined testing processes include (1) documentation of the scenarios that need to be tested to obtain adequate test coverage, (2) requirements that are traced to the test cases to ensure that all requirements are tested, (3) instructions and other guidance for the testers, and (4) an effective regression testing program. Although NASA had disciplined requirements management and testing processes in place for the implementation of both the contract management module and the core financial upgrade, difficulties related to requirements development and project scheduling, discussed later, forced NASA to compress the testing phase of its core financial upgrade implementation. As a result, according to NASA officials, completion of testing for the core financial upgrade required an extraordinary effort on the part of NASA’s implementation team. 
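The traceability practice described above—every requirement linked to at least one test case so that nothing ships untested—can be checked mechanically. This is a minimal sketch with hypothetical requirement and test-case identifiers, not NASA's actual requirements management tooling:

```python
# Hypothetical requirement IDs and a test-case-to-requirements trace matrix.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
test_cases = {
    "TC-01": {"REQ-001"},
    "TC-02": {"REQ-002", "REQ-003"},
}

def untested_requirements(reqs, cases):
    """Return requirement IDs not traced to any test case."""
    covered = set().union(*cases.values()) if cases else set()
    return sorted(reqs - covered)

print(untested_requirements(requirements, test_cases))  # -> []
```

Running such a check before a test phase begins gives quantitative evidence that all requirements are covered; a nonempty result (e.g., `["REQ-004"]` for an untraced requirement) flags a gap in the trace matrix before testing starts.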
Since we last reported, in September 2005, NASA has also enhanced its metrics measurement program, which is used to evaluate the effectiveness of its project management processes by identifying the causes of process defects. Understanding the cause of a defect is critical to evaluating the effectiveness of an organization’s project management processes, such as requirements management and testing. For example, if a significant number of defects are caused by inadequate requirements definition, then the organization knows that corrective actions are needed to improve the requirements definition process. When we last reported, NASA had made progress in this important area by collecting information on the causes of system defects it identified in its regression testing efforts but was not collecting similar information on defects identified by users and lacked a formal process for fully analyzing the data related to system defects by identifying the trends associated with them. Since that time, NASA has developed additional metrics to track and analyze such things as the number of changes made to requirements while a system is under development. In addition, NASA has developed processes for tracking and analyzing defects identified by IEMP users. For example, since implementation of the core financial upgrade, NASA has maintained spreadsheets showing specific information on each service request submitted by users, including the type of defect involved and the status of the request. Finally, NASA has also developed a comprehensive risk management strategy. Specifically, NASA now has an IEMP Risk Management Plan that outlines the standard processes and techniques for identifying, analyzing, planning, tracking, and controlling risks as well as defining the roles and responsibilities for each level of project risk management. 
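The defect-cause metrics described above—tracking each service request's type and status and looking for trends—amount to tallying a log by cause. A minimal sketch, with entirely hypothetical service-request data and field names:

```python
from collections import Counter

# Hypothetical service-request log; IDs, causes, and statuses are illustrative.
service_requests = [
    {"id": "SR-101", "cause": "requirements", "status": "open"},
    {"id": "SR-102", "cause": "coding",       "status": "closed"},
    {"id": "SR-103", "cause": "requirements", "status": "closed"},
]

by_cause = Counter(sr["cause"] for sr in service_requests)
# A high count for "requirements" points at the requirements definition
# process as the step needing corrective action.
open_count = sum(1 for sr in service_requests if sr["status"] == "open")
```

In this toy log, "requirements" accounts for two of three defects, which is exactly the kind of signal the report says an organization needs in order to know which project management process to fix.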
In applying these techniques to the core financial upgrade, NASA officials documented the risks that they identified for the project, as well as their mitigation strategies, likelihood, consequence, and criticality. According to NASA officials, their risk management process worked well and was one of the key reasons for the success of the core financial upgrade. For example, using the metrics information discussed previously, NASA officials said they were able to assess the risks of changing requirements late in the project and then mitigate those risks by performing additional testing. In addition to the disciplined processes discussed above, NASA has also taken action to establish the use of quantitative entry and exit criteria to move from one phase of an IEMP project to another. The use of such criteria is considered an industry best practice. Entry criteria are the minimum essential items considered necessary to enter into a given project phase, while exit criteria are the minimum essential items necessary to consider a given project phase successfully completed. For example, the NASA entry criterion for moving into the regression testing phase requires that all remaining significant defects from the integration testing phase be resolved and successfully retested before regression testing can begin. NASA demonstrated application of this criterion when it implemented the contract management module. About 3 weeks before the scheduled start date of regression testing, the project had not yet successfully completed all test scenarios, and several significant defects had not been fully resolved. In addition, a series of critical corrections from the software vendor had not yet been delivered, and the project team agreed that there would not be adequate time to test the corrections prior to beginning the scheduled regression testing. Consequently, the team decided to push back the scheduled date for the contract management module to begin operating. 
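The regression-testing entry criterion just described—no unresolved significant defects from integration testing—is inherently quantitative and easy to express as a gate check. A sketch under assumed data shapes (the severity and status labels are hypothetical):

```python
def may_enter_regression(integration_defects):
    """Entry criterion: no unresolved significant integration-test defects."""
    blocking = [d for d in integration_defects
                if d["severity"] == "significant" and d["status"] != "resolved"]
    return not blocking

# One unresolved significant defect is enough to hold the phase gate closed.
ready = may_enter_regression([
    {"id": "D-1", "severity": "significant", "status": "resolved"},
    {"id": "D-2", "severity": "minor",       "status": "open"},
])
```

Note that, per the report, NASA did not always treat a failed criterion as a hard stop: for the core financial upgrade, exceptions were escalated to higher levels of management for a documented risk-based decision rather than enforced automatically.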
For the core financial upgrade, NASA officials said that they used entry and exit criteria as one of the management tools to determine whether the project should move forward. However, rather than adopt a “hard stop” approach when criteria were not met, they used the criteria to make sure that all appropriate factors were considered before moving forward, including the risks of not meeting certain criteria. Any instances in which the project team thought exceptions to the criteria were warranted were ultimately reviewed and decided on by higher levels of NASA management, which helped ensure that such decisions were adequately considered. Weaknesses in the areas of requirements development and project scheduling offset some of the benefits associated with NASA’s improved requirements management and testing processes—causing NASA to assume a greater risk that it would not identify significant system defects prior to implementation. Weaknesses in requirements development and project scheduling processes resulted in NASA having to compress the testing phase of its core financial upgrade implementation. As a result, according to NASA officials, NASA’s ability to complete testing for the core financial upgrade within the planned implementation time frames ultimately depended on the extraordinary effort put forth by NASA’s project implementation team. Because of weaknesses in NASA’s requirements development process, it did not have reasonable assurance that it identified all appropriate requirements for the core financial upgrade when the project began. Consequently, NASA continued making changes to the requirements very late in the project’s development, resulting in increased risks, delays, and a compressed testing schedule. Improperly defined or incomplete requirements have commonly been identified as a root cause of system failure. 
Although NASA made a concerted effort, as part of its core financial system upgrade, to involve program managers and other key stakeholders in the requirements development process, it did not follow standard industry practices for identifying and documenting user requirements. According to the Software Engineering Institute (SEI), to help ensure that critical requirements are identified, an organization should have a well-documented, disciplined requirements development process that, among other things, (1) defines how customer needs will be elicited, developed, and validated; (2) specifies how to identify and ensure involvement of relevant stakeholders; and (3) ensures that people involved in the requirements development process are adequately trained in such topics as requirements definition and analysis. In addition, it is critical that requirements flow from an organization’s business requirements or its concept of operations. However, as discussed later, NASA has not yet completed a concept of operations. In developing its core financial upgrade requirements, NASA established a task force, consisting of both financial and program managers, whose primary objective was to “review, assess, and document Program/Project Management requirements as they relate to financial management.” In addition, other groups of program managers were asked to review the requirements and provide input to the task force. However, according to NASA officials, they have not yet documented and institutionalized requirements development procedures as recommended by SEI. Lacking documentation, NASA cannot ensure that appropriate procedures are followed and that all appropriate stakeholders are included in the process so that all requirements are identified. Moreover, the requirements that were addressed by the task force and user groups were at a very high or general level and therefore lacked the level of specificity that is needed to ensure that users’ needs are met.
Because it did not have a well-documented, disciplined requirements development process in place to provide reasonable assurance that all requirements had been identified, NASA delayed finalizing the system’s expected functionality until April 2006—about 6 months before the upgrade was expected to be implemented—and continued to change some requirements for several months after that. Delays in finalizing the requirements contributed to delayed testing and a compressed testing schedule. To meet the planned October 30, 2006, implementation date, the three rounds of system testing for the core financial upgrade were scheduled to occur from mid-June through September 22, with less than a week between each round. A less compressed schedule could have allowed more time between the testing cycles to perform necessary actions, such as additional development work and testing to adequately address the defects that had been identified. This, in turn, could have reduced the risk that significant system defects would not be detected prior to implementation. One key to developing a realistic project schedule is determining the sequence of activities, which requires identifying and documenting the dependencies among the various project activities. For example, testing activities cannot be completed before the software being tested is developed, and software should not be developed until requirements have been defined. However, NASA did not document the dependencies among the detailed project tasks for the core financial upgrade and therefore did not have reasonable assurance that the project schedule established at the start of the project was realistic. According to NASA officials, they recognized this risk and adopted several processes to identify and mitigate the weakness, such as having knowledgeable project officials review the schedule and holding weekly status meetings to determine whether the tasks were on schedule.
While the techniques used by NASA to constantly evaluate and adjust the schedule are considered best practices and allowed NASA to gain confidence in the schedule as the core financial upgrade project progressed, they were not sufficient to ensure that the original schedule was reasonable because they relied on ad hoc processes rather than a formal task dependency analysis. If NASA had also identified the task dependencies for the core financial upgrade, it would likely not have had to rely on extraordinary efforts to complete the project. Rather, project management would have been in a better position to assess the difficulty in meeting the planned schedule and to take further steps to reduce this risk, such as scaling back some aspects of the project or adding more resources to the project. According to NASA officials, through the heroic efforts of IEMP staff— their knowledge and experience with past projects and a considerable amount of overtime invested—the core financial project team was able to complete testing and other work within about 2 weeks of the planned implementation date. Although NASA has made significant improvements in its project management processes, NASA management recognizes that weaknesses in its requirements development and project scheduling processes have undermined some of the progress made. Despite the implementation difficulties, NASA financial managers have indicated that the core financial upgrade is now functioning as expected for most transactions. Through the end of March 2007, the upgrade was in a “stabilization” phase as NASA continued to work on correcting a number of system errors, including posting errors for certain types of transactions. Because NASA was continuing to stabilize the system during most of our audit period, we were unable to determine the significance of these weaknesses. 
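The formal task-dependency analysis the report says was missing is, in essence, building an explicit dependency graph and deriving a valid ordering from it, rather than reviewing the schedule ad hoc. A minimal sketch using Python's standard-library `graphlib` (the task names are hypothetical, echoing the report's example that software cannot be tested before it is developed or developed before requirements are defined):

```python
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (illustrative names only)
dependencies = {
    "develop_software": {"define_requirements"},
    "integration_test": {"develop_software"},
    "regression_test":  {"integration_test"},
}

# A topological sort yields an order in which every task follows
# everything it depends on; a cycle would raise CycleError, flagging
# an impossible schedule before any dates are committed.
order = list(TopologicalSorter(dependencies).static_order())
```

With dependencies documented this way, planners can see up front which tasks constrain the schedule and whether a planned sequence is even feasible, instead of discovering compression pressure late in the project.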
Although NASA has significantly improved its processes for implementing IEMP projects, these improvements are directed at implementing the desired functionality for an individual project. NASA has not yet fully considered the higher-level strategic issues that affect how useful IEMP will be in addressing long-standing management challenges—including problems associated with stovepiped systems and parochial interests of individual NASA components as well as problems in overseeing contractor performance and properly accounting for its property, plant, and equipment. NASA envisions IEMP to be a leading-edge business system that will provide management information needed for mission success, meet the needs of internal and external customers, and promote standardization and integration of business processes and systems across NASA. To achieve this vision, it is critical that NASA develop an agencywide concept of operations and adopt standard business processes that are supported by its software. NASA officials stated that they have undertaken a critical first step to achieving their vision for IEMP—they have begun developing a concept of operations to describe how all of its business processes should be carried out. NASA created a framework for developing a concept of operations in fiscal year 2006 and plans to complete it by the summer of 2008, according to NASA officials. Ideally, a concept of operations should be completed before system development begins so that it can serve as a foundation for system planning and requirements development. Nonetheless, the completion of such a document even at this late stage in NASA’s IEMP effort would be beneficial for the development of the remaining IEMP modules as well as any future upgrades to the core financial module. 
In addition, once a concept of operations is complete, NASA could reassess the modules that are already implemented and determine whether and how they might need to be modified to best meet its agencywide needs. A concept of operations defines how an organization’s day-to-day operations are (or will be) carried out to meet mission needs. The concept of operations includes high-level descriptions of information systems, their interrelationships, and information flows. It also describes the operations that must be performed, who must perform them, and where and how the operations will be carried out. Further, it provides the foundation on which requirements definitions and the rest of the systems planning process are built. Normally, a concept of operations document is one of the first documents to be produced during a disciplined development effort and flows from both the vision statement and the enterprise architecture. According to Institute of Electrical and Electronics Engineers (IEEE) standards, a concept of operations is a user-oriented document that describes the characteristics of a proposed system from the users’ viewpoint. The key elements that should be included in a concept of operations are major system components, interfaces to external systems, and performance characteristics such as speed and volume. For NASA, an effective concept of operations would describe, at a high level, (1) how all of the various elements of NASA’s business systems relate to each other and (2) how information flows among these systems. Further, a concept of operations would provide a useful tool to explain how business systems at the agency can operate cohesively. It would be geared to a NASA-wide solution rather than individual stovepiped efforts. Further, it would provide a road map that can be used to (1) measure progress and (2) focus future efforts. 
While NASA’s enterprise architecture efforts, when fully completed, can be used to help understand the relationships between the various systems, a concept of operations document presents these items from the users’ viewpoint in nontechnical terms. Such a document would be invaluable in getting various stakeholders, including those in the programs and administrative activities, to understand how the business systems are expected to operate cohesively and how they fit into “the big picture.” As part of an agencywide concept of operations, to best leverage its investment in IEMP, NASA should also analyze the agency’s current business processes and determine how these processes can be made more efficient and effective. Specifically, NASA needs to ensure that the business processes supported by this system are developed and implemented to support the enterprise’s needs rather than primarily focusing on the needs of a specific organizational entity. For example, system efforts targeted only at addressing accounting or external financial reporting needs—as was done during the initial implementation of the core financial module—do not provide reasonable assurance that the needs of the mission managers or other support organizations are addressed as well. Our review identified an important opportunity for NASA to leverage its investment in IEMP by using the system’s inherent business processes to meet the enterprise’s needs. Agencies such as NASA that invest in ERP solutions to meet their enterprise needs often face difficulty in shifting from the stovepiped processes of the past to the enterprise processes that underlie the ERP concept. 
According to technical experts, a key benefit of an effective ERP system is that it provides consistent data across the entire entity, regardless of which component generates a request or for what purpose; the system maintains data based on the concept of “one truth.” In other words, in non-ERP environments, one system may have one amount for an agency’s obligations while another system has another amount for the same obligations. While either of these systems may be the “official system,” actions and plans may be based on information in the other system. In order for all of an organization’s actions and plans to be consistent, the same information needs to be available and used by all segments of that organization. Under the ERP concept, it does not matter whether an individual is in budget, accounting, procurement, or any other organizational component; the answer to the question of “how much money has been obligated and how much is still available” is consistent. One example of an opportunity for NASA to use enterprise processes to accomplish multiple needs is in the area of program oversight and accounting for property, plant, and equipment (PP&E). NASA typically spends about 85 percent of its budget procuring goods and services from its contractors each year. Therefore, much of the cost information NASA needs to oversee its programs and compile its external financial reports resides with its contractors. For its larger contracts, NASA generally obtains cost data from monthly contractor financial management reports, commonly referred to as NASA Form 533s. NASA Form 533 captures planned and actual contract costs and, according to NASA officials, is used for budgeting, monitoring contract costs, and controlling program resources. The Office of the Chief Financial Officer (OCFO) also uses NASA Form 533 to capture the costs reported on the agency’s financial statements. However, NASA Form 533 does not contain information related to the status of work performed on a contract. 
Therefore, for all major acquisitions and for development or production contracts and subcontracts valued at $20 million or more, in addition to NASA Form 533s, NASA’s contractors are also required to provide monthly contract cost performance reports. Each of these reports is treated as a stovepiped activity; that is, they provide cost information for a given contract in two different formats and are used by different organizations and for different purposes within NASA. For those contracts for which NASA receives contract cost performance reports in addition to Form 533s, program managers use the cost performance reports to monitor contract performance, while the OCFO uses NASA Form 533 to accrue costs that, among other things, are reported on the agency’s financial statements. Although NASA Form 533 and the cost performance report reflect cost data pertaining to the same contract, the level of detail provided in each report may vary considerably depending on the contractor cost reporting requirements negotiated as part of the contract. For example, the cost data required by program managers to manage major acquisitions are often more detailed than those required by the OCFO. In addition, because neither the cost performance report nor NASA Form 533 contains the information needed by the OCFO to properly account for equipment and other property acquired from contractors, NASA also relies on periodic, summary-level information provided by its contractors to report property amounts on its financial statements. When NASA initially implemented its IEMP core financial module in June 2003, it did not adequately consider program managers’ needs and did not design the system to accommodate the more detailed cost data contained in contractor cost performance reports. Since that time, NASA has redesigned the coding structure embedded in the core financial module to be more consistent with the work breakdown structure (WBS) coding used by program managers. 
However, NASA continues to use cost data from NASA Form 533—generally reported by contract line items—to populate the core financial module. As a result, as shown in figure 1, NASA uses a complex, NASA-specific process to allocate the costs reported on NASA Form 533 to the WBS codes in IEMP based on available funding. In a very simplified example, if NASA received a Form 533 showing $1,000 of cost incurred for a particular contract line item and two WBS codes pertained to that line item, NASA would allocate the costs to those two WBS codes. Assuming WBS 1 had more funding available than WBS 2, NASA might allocate $600 to WBS 1 and $400 to WBS 2. However, the contract cost performance report might show that the actual costs were $500 for WBS 1 and $500 for WBS 2. Although this allocation process reorganizes cost data reported on NASA Form 533 into the same reporting structure that is used by program managers, it still results in different costs, maintained in different systems, used for different purposes. Accordingly, these separate processes do not result in the “one truth” that is provided when an ERP view is taken. Further, this dual reporting approach has not addressed one of NASA’s long-standing financial reporting weaknesses: reporting on its PP&E. For example, NASA’s processes do not allow the agency to identify capital costs—that is, those that should be recorded as assets—as they are incurred. Instead, as we recently reported, the agency performs a retrospective review of transactions entered into its property system to determine which costs should be capitalized. This subsequent review is labor-intensive and error-prone, and therefore increases the risk that not all related costs will be properly captured and capitalized. Figure 2 provides an example of how NASA could use IEMP to implement an enterprise process that (1) provides the necessary data for the enterprise operations and (2) reduces the burden on NASA and contractor officials. 
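The funding-based allocation in the simplified Form 533 example above can be sketched in a few lines. This is an illustrative proportional-allocation sketch, not NASA's actual algorithm; the dollar figures and WBS names are the hypothetical ones from the example.

```python
# Illustrative only: splits one Form 533 line-item cost across WBS
# codes in proportion to available funding, as in the simplified
# example in the text (not NASA's actual allocation algorithm).

def allocate_by_funding(line_item_cost, available_funding):
    """Allocate a reported line-item cost across WBS codes in
    proportion to the funding still available on each code."""
    total_funding = sum(available_funding.values())
    return {wbs: line_item_cost * funds / total_funding
            for wbs, funds in available_funding.items()}

# $1,000 reported on Form 533; WBS 1 has more funding available.
allocated = allocate_by_funding(1000, {"WBS 1": 600, "WBS 2": 400})
print(allocated)  # {'WBS 1': 600.0, 'WBS 2': 400.0}

# The contract cost performance report, meanwhile, might show the
# actual costs were split evenly -- two "truths" for one contract.
actual = {"WBS 1": 500, "WBS 2": 500}
```

The mismatch arises because the allocation keys on available funding rather than on where work was actually performed, so the financial system and the performance reports can carry different costs for the same contract.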
As shown in figure 2, if NASA received only one monthly report containing contract cost data reported in sufficient detail for both program management and financial reporting purposes, then it could record these costs directly in IEMP without first going through an allocation process as it does now. All individuals and components throughout NASA could then use the same cost data that reside within IEMP for a given contract; IEMP could provide different arrays of cost information based on each user’s needs, but all cost information for a given contract would come from one source. For example, as shown in figure 2, the program manager could use the cost data from IEMP along with other supplemental contractor performance information, such as labor hours used, to see if the project is meeting expectations. In addition, if discrete WBS codes were established to identify the costs associated with the acquisition of property, then IEMP could automatically capitalize those costs and financial managers could readily determine how much cost has been recorded for property. The key is that under the enterprise process concept, single data entry is used for multiple purposes. Since the enterprise view provides “one truth,” an adequate audit trail over the data used to report property can be maintained simply by reviewing the cost reports that were provided by the contractors. Thus, NASA can take advantage of the efficiencies inherent in an ERP solution by allowing the data needed for external financial reporting to be produced as a by-product of the processes it uses to manage its mission. NASA has made significant strides in developing and implementing more disciplined processes for supporting its IEMP efforts since our last report in 2005. NASA has recognized the need for the disciplined processes necessary to reduce risks to acceptable levels, as evidenced by its implementation of several of our recommendations. 
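The single-entry, multiple-use idea described above can be illustrated with a minimal sketch. The WBS names, record layout, and the convention that discrete WBS codes flag property acquisitions are illustrative assumptions, not NASA's actual design.

```python
# Hypothetical sketch of the "one truth" enterprise view: contract
# costs are recorded once against WBS codes, and each user derives
# their view from the same records rather than from separate systems.

CAPITAL_WBS_CODES = {"WBS 2"}  # assumed: discrete codes flag property costs

cost_records = [
    {"contract": "C-100", "wbs": "WBS 1", "cost": 500},
    {"contract": "C-100", "wbs": "WBS 2", "cost": 500},
]

def contract_cost(records, contract):
    """Program-management view: total cost recorded for a contract."""
    return sum(r["cost"] for r in records if r["contract"] == contract)

def capitalized_cost(records, contract):
    """Financial-reporting view: capital costs read directly from the
    same records, with no retrospective review of a property system."""
    return sum(r["cost"] for r in records
               if r["contract"] == contract and r["wbs"] in CAPITAL_WBS_CODES)

print(contract_cost(cost_records, "C-100"))     # 1000
print(capitalized_cost(cost_records, "C-100"))  # 500
```

Because both views query the same records, the program manager's total and the financial manager's capitalized amount cannot drift apart, which is the efficiency the enterprise process concept is meant to deliver.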
More importantly, NASA officials recognize that improving system implementation processes is a continuous effort and that certain processes—particularly requirements development and project scheduling—may need more attention. However, the real key to realizing NASA’s IEMP vision is for NASA’s management to develop an overarching strategy for managing its agencywide management system development effort. We are encouraged that NASA has begun to develop a concept of operations. As part of the development of this document, it will be critical for NASA to define (1) the agency’s business processes and information needs and (2) the types of systems that will be used to carry out these processes and produce the necessary information. Another critical factor in developing a concept of operations will be analyzing the agency’s current business processes and determining how these processes can be made more efficient and effective. For example, NASA can take advantage of the efficiencies inherent in the solution it has selected by utilizing an enterprise view to produce the data needed for external financial reporting as a by-product of the processes it uses to manage its mission. Unless NASA devotes immediate, focused attention to taking these critical strategic planning steps, it will continue to face the risk that its planned $800 million investment in IEMP will not achieve the transformational changes necessary to provide NASA with the information needed to make well-informed business decisions and to effectively manage its operations. To help ensure that disciplined processes are effectively implemented for future IEMP modules, upgrades, or other business systems, we recommend that the NASA Administrator direct the IEMP Program Director to take the following two actions. 
Establish requirements development policies and procedures regarding (1) how customer needs will be elicited, developed, and validated; (2) how to identify and ensure the involvement of relevant stakeholders; and (3) required training in such topics as requirements definition and analysis to be provided to people involved in the requirements process. Develop policies and procedures that require project schedules to include the identification and documentation of dependencies among various project tasks. To help ensure that future IEMP projects are designed to carry out NASA’s mission in an efficient manner that meets the needs of all users, we recommend that the NASA Administrator establish as a high priority the completion of a concept of operations that addresses NASA’s business operations for both its mission offices and administrative offices (such as financial management and human capital) before any new implementation efforts begin. Once the concept of operations is complete, we recommend that the NASA Administrator review the functionality of previously implemented IEMP modules for the purpose of determining whether enhancements or modifications are needed to bring them into compliance with the concept of operations. To help ensure that NASA receives the maximum benefit from its reported $800 million investment in IEMP, we recommend that the NASA Administrator establish policies and procedures requiring approval to establish or maintain business processes that are inconsistent with the processes inherent in the commercial off-the-shelf (COTS) solutions selected for IEMP. The reasons for any decisions made to not implement the inherent COTS processes should be well-documented and approved by the Administrator or his designee. At a minimum, approved documentation should address any decisions to maintain current contractor cost reporting processes rather than revise these processes to facilitate the use of one consistent source of cost data. 
We received written comments on a draft of this report from NASA, which are reprinted in appendix II. NASA agreed with our recommendations and described the approach and steps it is taking or plans to take to improve its enterprise management system modernization efforts. We are encouraged that a number of these steps are already under way, including the establishment of an IEMP advisory body representing NASA’s missions and centers. As NASA progresses in addressing our recommendations, it is important that it focuses on the concepts and underlying key issues we discussed, such as considering the need to reengineer key business processes to support agencywide needs and to take full advantage of its ERP solution. We continue to believe that careful consideration of all of the building blocks and key issues we identified will be integral to the success of NASA’s efforts. NASA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to interested congressional committees, the NASA Administrator, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact McCoy Williams at (202) 512-9095 or williamsm1@gao.gov or Keith Rhodes at (202) 512-6412 or rhodesk@gao.gov. Key contributors to this report are acknowledged in appendix III. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
To determine whether the National Aeronautics and Space Administration (NASA) has improved its management processes for implementing the Integrated Enterprise Management Program (IEMP), we reviewed project management documentation for several IEMP projects, including the core financial upgrade and the contract management module. The documentation we reviewed for these projects included requirements management documents, detailed testing plans, project schedules, risk management plans, and metrics documentation. We also interviewed numerous IEMP officials, including the IEMP Director, the Director and Assistant Director at the IEMP Competency Center, and the Manager of IEMP Application Development and Software Assurance. In addition, we interviewed the leader of a NASA team that provided an independent assessment of the core financial upgrade project to obtain his views of IEMP management processes. To assess NASA’s implementation of disciplined processes, we reviewed industry standards and best practices from the Institute of Electrical and Electronics Engineers, the Software Engineering Institute, and the Project Management Institute. To assess the effectiveness of NASA’s requirements management processes, we performed a traceability analysis of several requirements for both the contract management module and the core financial upgrade, which demonstrated that there was traceability among the different levels of requirements and with testing documentation. To determine whether NASA had adequately and systematically determined the information needs of key users of IEMP data when developing system requirements, we reviewed documentation of NASA’s requirements identification effort for the core financial upgrade and interviewed a number of program managers and staff who worked on various space and science programs at three NASA centers—Marshall Space Flight Center, Johnson Space Center, and Goddard Space Flight Center. 
We also met with officials from the Office of the Chief Financial Officer (OCFO), including the Deputy Chief Financial Officer, and with officials from the Office of the Chief Engineer to obtain their opinions regarding the requirements of the core financial upgrade. In addition, we discussed the requirements development methodology with IEMP management. To determine the results of the implementation of the core financial upgrade, we met with both IEMP and OCFO officials. We reviewed data on the amount and types of system defects that were identified by users during the project’s stabilization phase. We also obtained written responses to specific questions about the results of the implementation from three NASA centers. To determine the extent to which NASA has considered the higher-level strategic issues associated with developing an enterprisewide concept of operations and defining standard business processes, we met with senior management from IEMP and the OCFO. In addition, we also discussed these issues with senior officials in the Office of the NASA Administrator. We also interviewed IEMP officials about NASA’s current processes for recording contract costs. We also discussed this issue with officials from the OCFO, the Office of the Chief Engineer, and the Office of Program and Institutional Integration. In addition, we obtained documentation of NASA’s plans for reengineering processes related to the costs of capital assets. We briefed NASA officials on the results of our audit, including our findings and their implications. On May 25, 2007, we requested comments from NASA and we received them on June 21, 2007. NASA also separately provided technical comments. Our work was performed from January 2006 through June 2007 in accordance with U.S. generally accepted government auditing standards. In addition to the contacts named above, staff members who made key contributions to this report were Diane Handley, Assistant Director; J. Christopher Martin, Senior Level Technologist; Francine DelVecchio; Kristi Karls; and Lauren Catchpole.

Since 1990, GAO has designated the National Aeronautics and Space Administration's (NASA) contract management as an area of high risk in part because it lacked modern systems to provide accurate and reliable information on contract spending. In April 2000, NASA began a system modernization effort, known as the Integrated Enterprise Management Program (IEMP). When GAO last reported on the status of IEMP in September 2005, NASA had begun to implement disciplined processes needed to manage IEMP, but had yet to implement other best practices such as adopting business processes that improve information on contract spending. This GAO report addresses (1) actions taken by NASA to effectively implement the disciplined processes needed to manage IEMP and (2) the extent to which NASA has considered the strategic issues associated with developing a concept of operations and defining standard business processes. GAO interviewed NASA officials and obtained and analyzed documentation relevant to the issues. Since GAO last reported on NASA's IEMP efforts, NASA implemented its IEMP contract management module and upgraded the software used for its core financial module. NASA has also taken steps to improve its processes for managing IEMP--including implementing improved requirements management and testing processes, enhancing its performance metrics related to tracking system defects, and developing an IEMP risk mitigation strategy. Further, NASA has developed quantitative entry and exit criteria for moving from one phase of an IEMP project to another--a recognized industry best practice. However, NASA has not yet addressed weaknesses in the areas of requirements development and project scheduling, which ultimately caused the agency to assume a greater risk that it would not identify significant system defects prior to implementation of the core financial upgrade. 
Despite these difficulties, NASA financial managers have stated that the core financial upgrade is now functioning as expected for most transactions. As of the end of GAO's audit work in May 2007, NASA was working to correct a number of system errors, including posting errors for certain types of transactions. Because NASA was still working to stabilize the system, GAO was unable to determine the significance of these weaknesses. Further, NASA has not yet fully considered higher-level strategic issues associated with developing an agencywide concept of operations and defining standard business processes. With a planned investment of over $800 million for IEMP, NASA must immediately and effectively address these strategic building blocks if IEMP is to successfully address long-standing management challenges--including overseeing contractor performance and properly accounting for NASA's property, plant, and equipment. NASA officials stated that they have begun developing a concept of operations to describe how all of its business processes should be carried out. According to NASA officials, they expect to complete the concept of operations by the summer of 2008. Ideally, a concept of operations should be completed before system development begins so that it can serve as a foundation for system planning and requirements development. Nonetheless, while NASA's IEMP efforts are already well under way, the completion of such a document remains essential for guiding the development of the remaining IEMP modules as well as any future upgrades. As part of developing a concept of operations, NASA should also define standard business processes that are supported by its IEMP software. NASA needs to ensure that its business processes and the information that flows from those processes support the enterprise's needs. 
Efforts that primarily focus on the parochial needs of a specific organizational unit, such as accounting, do not provide reasonable assurance that NASA's agencywide management information needs are addressed.
Western Hemisphere countries have gone beyond their multilateral trade commitments during the past decade and pursued economic integration through numerous free trade and customs union agreements. The largest of these are Mercosur, signed in 1991, and the North American Free Trade Agreement (NAFTA), which entered into force in 1994. Other regional groups such as the Central American Common Market, the Andean Community, and the Caribbean Community have either been initiated or expanded. (See app. I for more information on the 34 countries of the Free Trade Area of the Americas.) Also, countries in the region have concluded numerous bilateral free trade and investment agreements with others in the region and worldwide. In addition, Chile and the European Union have recently started trade negotiations, while similar European Union and Mercosur negotiations are already under way. In December 1994, the heads of state of the 34 democratic countries in the Western Hemisphere agreed at the first Summit of the Americas in Miami, Florida, to conclude negotiations on a Free Trade Area of the Americas (FTAA) no later than 2005. The FTAA would cover a combined population of about 800 million people, more than $11 trillion in production, and $3.4 trillion in world trade. It would involve a diverse set of countries, from some of the wealthiest (the United States and Canada) to some of the poorest (Haiti) and from some of the largest (Brazil) to some of the smallest in the world (Saint Kitts and Nevis). Proponents of the FTAA contend that a successful negotiation could produce important economic benefits for the United States. The FTAA region is already important economically for the United States, purchasing about 36 percent of U.S. exports of goods and services in 1999 and receiving over 23 percent of U.S. foreign direct investment. Business groups point out that if relatively high tariffs and other market access barriers are removed, U.S. 
trade with the region could expand further. U.S. exports to many FTAA countries face overall average tariffs above 10 percent, whereas all 33 other countries participating in FTAA negotiations already have preferential access to the U.S. market on certain products through unilateral programs or NAFTA. In addition, some U.S. industry representatives assert that they have lost sales and market share to competitors that have preferential access into other Western Hemisphere markets through bilateral free trade agreements that exclude the United States. For example, the U.S. Trade Representative testified before the House Committee on Ways and Means in March 2001 that because of the Canada-Chile trade agreement, Canadian products will enter Chile duty free, while U.S. products face an 8 percent duty. The FTAA would help remedy this disadvantage by providing U.S. exporters with access equivalent to that provided to U.S. competitors. Supporters also assert that the FTAA would benefit the United States by stimulating increased trade and investment and enabling more efficient production by allowing businesses to produce and purchase throughout an integrated hemisphere. Beyond these economic benefits, the FTAA is widely regarded as a centerpiece of efforts to forge closer and more productive ties to Western Hemisphere nations, increase political stability, and strengthen democracy in the region. While an FTAA may provide benefits for the United States, it may also adversely impact certain import-competing sectors. Some U.S. business and labor groups argue that import restrictions are necessary to help them to compete against imports produced with more favorable labor costs, less restrictive environmental regulations, or imports that receive government assistance. Also, some labor and environmental groups argue that potential FTAA provisions may reduce the ability of countries to set and enforce high standards for health, safety, and the environment. 
For example, some opponents are concerned that the FTAA would contain NAFTA-like investment provisions, which they argue give corporations a greater ability to challenge government regulations than is provided for under domestic law. Finally, as is the case with other international trade agreements, the FTAA has drawn the attention of organizations and individuals apprehensive about the FTAA’s effects on greater global integration and the resulting impact on society and the environment. Between December 1994 and March 1998, FTAA countries laid the groundwork for an FTAA. Efforts over the past 18 months have produced a first draft of text on the major negotiating topics, which will constitute the basis from which negotiations will proceed in those areas. The FTAA negotiations have also resulted in the adoption and partial implementation of several business facilitation measures and improved coordination between FTAA countries on trade matters. In the first years of the FTAA process, FTAA negotiators agreed on the overall structure, scope, and objectives of the negotiations. FTAA participants formally initiated the negotiations at the San José Ministerial and Santiago Summit of 1998, where they agreed on how the negotiations would proceed. Specifically, they agreed in 1998 at San José that the FTAA would be a single undertaking, meaning that the agreement would be completed and implemented as one whole unit instead of in parts. Ministers also agreed that the FTAA could coexist with other subregional agreements, like Mercosur and NAFTA, to the extent that the rights and obligations go beyond or are not covered by the FTAA. An eventual FTAA agreement would contain three basic components: (1) chapters on general issues and the overall architecture of the FTAA and its institutions, (2) schedules for reducing tariff and nontariff barriers, and (3) chapters on specific topics. 
The specific topics currently under negotiation include (1) market access for goods, (2) investment, (3) services, (4) government procurement, (5) dispute settlement, (6) subsidies/antidumping/countervailing duties, (7) agriculture, (8) intellectual property rights, and (9) competition policy. As illustrated in figure 1, FTAA participants formed negotiating groups on each of these topics; agreed on a general mandate for each group; formed special committees on smaller economies, the participation of civil society, and electronic commerce; and determined that the negotiations would be led by a vice-ministerial-level Trade Negotiations Committee. Chairmanship of the negotiations changes every 18 months, with Argentina serving as the current chair, to be succeeded by Ecuador for the next round of negotiations following the April meetings. Brazil and the United States are set to co-chair the final round from November 2002 to December 2004. Ministers set out the workplans for the negotiating process and select new chairs for the negotiating groups in the same 18-month increments. Since the 1998 launch of the negotiations, the nine FTAA negotiating groups have met the ministerial goals set for them of producing first drafts of their respective chapters, which contain the agreement’s detailed rules. As illustrated in figure 2, negotiators were directed by ministers in November 1999 to submit first drafts of their chapters to the Trade Negotiations Committee by December 2000, using annotated outlines developed in the previous phase as frames of reference. According to FTAA participants and other observers, these were ambitious goals, and working-level activity since 1998 has been fairly intense in order to meet them. They stated that merely providing the first drafts of the chapters marks important progress, as the drafts are necessary groundwork for future negotiations. 
According to U.S. and foreign negotiators, however, the draft text is heavily bracketed, indicating that agreement on specific language has not been reached. The draft text generally represents a consolidation of all proposals submitted by FTAA countries so far. Under FTAA negotiating procedures, individual countries may still propose new text to be included in the draft chapters; the removal of brackets and text can only be done by consensus. FTAA participants state that the draft conveys wide differences between the countries over substance and philosophical approaches to key issues. The Trade Negotiations Committee is currently in the process of assembling a report that will be provided to trade ministers at the upcoming Buenos Aires Ministerial on April 7. In addition to making progress on producing the first drafts of the chapters, the negotiations have yielded several other accomplishments. Ministers agreed to adopt eight customs-related business facilitation measures (for example, expediting express shipments) and 10 additional transparency (openness) measures (for example, posting tariff and trade flows to the FTAA website) at the Toronto Ministerial in 1999. U.S. officials report that the FTAA countries immediately began to implement all 10 transparency measures and are in various stages of carrying out the customs measures. Outside of the concrete accomplishments, many observers feel the negotiations have greatly improved coordination and provided a broader understanding of trade and its impacts among FTAA countries, in part through technical assistance in the form of reports, databases, seminars, and financial assistance provided by the Inter-American Development Bank, the Organization of American States, and the United Nations Economic Commission for Latin America and the Caribbean. A number of challenges must be overcome in order to successfully complete the FTAA. 
For example, to build on the technical foundation of the first years of negotiations, much work remains to be done in three areas: setting the agreement’s detailed rules, deciding on the market access concessions, and devising the institutional structure to implement the completed agreement. However, negotiators have not yet begun to bargain on the agreement’s detailed rules or market access concessions, and vice-ministers have not begun to formulate the agreement’s institutional structure. Negotiators will conduct their work in an environment filled with challenges, due to the complex and controversial character of some of the issues, and the diverse nature and fluid political and economic condition of the participants. Many observers believe these challenges will be resolved only if the governments demonstrate their commitment to the agreement’s completion. In order to conclude the FTAA, the negotiating groups will first need to begin negotiating on the removal of the brackets that signify disagreement in the text on the agreement’s detailed rules. However, this task will be difficult, because the text deals with controversial and complex issues. For example, agricultural support measures and antidumping provisions are widely understood to be controversial; observers feel that some of the more difficult issues will not be resolved until the deadline for completing the negotiations. Other negotiating groups’ tasks are complex by virtue of the extent of the subject matter to be covered. For example, the market access negotiating group is responsible not only for the elimination of tariffs but also for devising rules of origin, customs procedures, safeguards, and technical barriers to trade. Other negotiating groups’ tasks are complex because they break new ground for many of the FTAA countries. 
For example, competition policy has not been the subject of a multilateral agreement on which to build, and only two of the FTAA countries are signatories to the multilateral Agreement on Government Procurement. Before countries can begin to negotiate on market access concessions, they must agree on the basic ground rules of the negotiations. Negotiators refer to these as the “modalities.” Once the FTAA participants agree on the modalities, market access liberalization negotiations can begin. Decisions on these procedural matters are especially important for five of the nine negotiating groups: market access, agriculture, government procurement, investment, and services. In addition, some negotiating groups need guidance on whether their groups can share procedural processes. For example, the market access and agriculture groups could have a common approach to tariff reduction starting points or the pace of tariff elimination. Much work remains to be done in order to establish an institutional structure for the implementation of the agreement. This involves such key issues as the role and location of a permanent secretariat and the institutional mechanism by which the participants will oversee implementation of the agreement, including dispute settlement provisions. FTAA experts expect this structure can be completed only near the end of the negotiation process because it is largely dependent on the results of the negotiations. The ministers also need to address administrative issues related to the negotiation process. The final negotiation period will be chaired jointly by the United States and Brazil. However, both U.S. and Brazilian government officials told us that they have not yet determined how a joint chair relationship will function. The very fact that 34 widely differing countries are participating in an endeavor to create a hemispheric free trade zone in itself complicates the process. 
Since the participants range from some of the world’s largest and most economically powerful to the smallest and most economically disadvantaged, their objectives and incentives for the negotiations naturally differ. For example, the United States seeks broad improvements in trade rules and access, in addition to the lowering of regional tariffs; Brazil is primarily interested in gaining access to certain sectors of the U.S. market in which it faces relatively high barriers; the smaller economy countries are interested in protecting their economies from becoming overwhelmed by the larger ones while securing special treatment in an eventual FTAA; and Mexico has less economic incentive to pursue an FTAA because it already has preferential access to most hemispheric markets through a comprehensive network of free trade arrangements. Finally, several FTAA experts told us that the 2005 deadline has seemed far away to many participants, thus sapping needed momentum from the negotiating process. The FTAA negotiating process is challenging because it requires consensus. Interests of specific individual countries or negotiating blocs may not be ignored even if they are not accepted in their entirety. For example, the United States pressed for the inclusion of labor rights and environment provisions in the FTAA. This proposal was met with steadfast opposition by some FTAA countries, but the United States was ultimately accommodated with the creation of the Committee of Government Representatives on the Participation of Civil Society. The Committee, which is to provide a vehicle for public input on these issues, remains a point of contention for both the United States and some of its FTAA partners. For example, the United States proposed that the Committee release a report containing recommendations based on the first round of public input but was initially blocked from doing so by another FTAA country. 
Eventually, a compromise was reached, and the Committee issued a summary report of the public input. Another challenge is the varying resource capacity of the FTAA participants. Many of the countries, including most of those with smaller economies, negotiate in blocs, which helps them to pool resources in the negotiations. However, government officials from some FTAA-participating countries told us that they are concerned about the demand placed on their limited budgets and staff. For example, the market access negotiating group, which has a very broad portfolio of issues, could not be broken up into more manageable components because of these resource limitations. In addition, potential competing trade negotiations could challenge the FTAA process. For example, several foreign government officials explained that the start of a new round of negotiations at the World Trade Organization (WTO) would require them to choose between the WTO and the FTAA for their most qualified negotiators and experts. The domestic political and economic climate of the participants influences not only their internal policies but also the reaction of the other participants. The recent U.S. election is a good example. FTAA experts told us that uncertainty in the fall of 2000 over how the election would affect the direction of U.S. trade policy impacted the progress of the negotiations. In addition, the United States had not developed its negotiating position for several important issues. Some FTAA experts told us that they believed the United States did not have a mandate to make meaningful concessions on market access, which are, in their view, necessary to complete an agreement. In addition, some experts believe that progress in the FTAA in certain areas such as agriculture is reliant on progress in the WTO. Meanwhile, economic hardship and political uncertainty have made some participants more reluctant to pursue an FTAA. 
FTAA experts noted that in the future, participating countries could face other distractions that would direct their energies away from the FTAA. These include increased opposition from groups that have not yet fully mobilized against the FTAA. A number of participants told us that the FTAA could be successfully concluded if the key Western Hemisphere leaders demonstrate that they have the political will to conclude the agreement. However, some observers have concerns about whether this climate currently exists in the two main FTAA countries: the United States and Brazil. In particular, FTAA experts and participants have been closely following the debate within the United States on the overall direction of U.S. trade policy and its implications for the FTAA. Some FTAA participants believe that the United States has been distracted from pursuing trade liberalization because it lacks a domestic consensus on the benefits of trade and the way in which to handle the overlap between trade and labor rights and the environment. Several told us that they believed the absence of trade promotion authority has hampered the process to the extent that other countries have held back making concessions on free trade agreement rules and procedures. Others stated that the primary cost of the President’s lack of trade promotion authority was in giving others an excuse to slow progress. Many observers we consulted believe that trade promotion authority is essential for the next phase of negotiations, particularly completion of the market access concessions. These experts said that the foreign partners will not make significant concessions unless they have credible assurance that the deal will not come undone when submitted to Congress for approval. Concerns also exist about Brazil’s commitment to the FTAA process. 
Even though Brazil has actively participated in the negotiations, observers have noted that Brazil has appeared reluctant to embrace an FTAA, and Brazilian officials admit that Brazil has held back during the negotiations. They explained that this reluctance is because they believe the United States is not ready to negotiate on issues of greatest interest to Brazil, such as high U.S. tariffs on key Brazilian exports and changes to the U.S. antidumping regime. In addition, Brazil’s Foreign Minister recently stated that the FTAA is less of a priority for Brazil than the expansion of Mercosur in South America. The April 2001 meetings of ministers in Buenos Aires and leaders in Quebec City represent a critical juncture in the process. Successful meetings in April could lend fresh momentum and clear direction to the FTAA at an important point in the negotiations. At a minimum, FTAA negotiators need guidance for the next 18 months to proceed. However, while the time allotted to settle numerous outstanding decisions is tight, there has been considerable high-level political activity recently that might improve the chances for a favorable outcome. Both April meetings, but particularly the April Summit of hemispheric leaders, provide an opportunity to inject momentum into the negotiating process at a critical point in the FTAA’s development. Past summits have been used to make major advancements in the FTAA process. For example, the first summit, held in Miami in 1994, resulted in the leaders’ commitment to achieve the vision of an FTAA by 2005. The April Summit will engage President Bush and other newly elected heads of state in the FTAA process and provide an opportunity for all 34 leaders to renew their countries’ political commitment to the FTAA. Doing so at this time is particularly important, because the phase of negotiations where countries set out initial positions is ending. 
The next phase is expected to narrow the many substantive differences that remain, which will require political direction and support. The April meetings will provide an indication of the willingness of the United States and other countries to make the effort and tough choices required for the bargaining that lies ahead. The April meetings also represent an opportunity to generate interest in and support of the FTAA within the U.S. Congress, the U.S. business community, and the U.S. public. This support will be crucial if the United States is to provide the forceful leadership many FTAA participants believe is necessary for concluding a deal. It is also required for ultimate approval of an FTAA in Congress and in the U.S. “court of public opinion.” Until recently, congressional interest in the FTAA has been limited, and business support has been muted, according to both business and government officials. The April meetings could highlight the importance attached by hemispheric leaders to an FTAA and provide reasons for optimism about its potential viability. The political boost FTAA supporters hope to achieve in April depends, in part, on the meetings’ success in addressing key questions about how negotiations will proceed. These decisions will set the pace, goals, and structure for the next phase of negotiations, since the ministers typically set out the agenda for the next phase at each ministerial meeting. As shown in figure 3, specific direction needs to be provided for the remainder of the negotiations. At a practical level, the negotiators are seeking specific direction as follows: 1. The additional work to be done in refining the rules and disciplines contained in the draft texts, such as removing the brackets that currently signify disagreement. 2. The date for deciding how specific market access commitments will be negotiated. 3. General and institutional provisions of an FTAA. 4. 
The chairs of the various groups and committees for the next 18 months, and whether to create new committees or groups. However, these practical decisions may be affected by broader issues. For example, Chile has floated the idea of moving up the target date for completion of the negotiations to December 31, 2003, with a final agreement entering into effect on January 1, 2005. This idea of accelerating negotiations is still being debated within and among FTAA governments and may be actively discussed at the April meetings. Some FTAA participants, notably Brazil, have publicly stated that a 2003 deadline is unrealistic. Others believe that a 2003 deadline is both doable and desirable. Decisions made at the April meetings could affect public input into and support for the next phase of FTAA negotiations. For example, trade ministers are expected to consider adopting additional business facilitation measures. In addition, whether and how to respond to the input from civil society groups must be decided. U.S. groups that submitted formal input to the FTAA Committee of Government Representatives on the Participation of Civil Society told us they are disappointed because there is little evidence that their input is being given serious consideration in FTAA negotiations. Some U.S. government officials we interviewed concurred with this assessment. Others said that U.S. negotiators are considering the input, as are some foreign negotiators. The United States is seeking a more in-depth report on civil society views this year and an expansion of public outreach efforts in future years. In addition, Canada, more than 50 Members of Congress, and various U.S. nongovernmental groups are calling for public release of the bracketed text. Publicly available information on the FTAA negotiations is limited, a fact that has caused suspicion and concern among the nongovernmental groups. 
These groups see the release of the text as an important confidence-building measure in its own right and as concrete evidence of ministers’ commitment to transparency in decision-making. However, this is likely to prove controversial among FTAA governments in April, given the ongoing and confidential nature of FTAA deliberations. The issue of transparency is also controversial domestically. U.S. negotiators note that releasing the text could hamper their flexibility in exploring creative options to obtain their objectives. Even though the U.S. government released public summaries of U.S. negotiating positions in the FTAA in late January, it faces a lawsuit by two environmental groups seeking access to the full text of U.S. proposals. A large number of issues remain to be resolved between now and the conclusion of the April meetings. When vice-ministers met in January 2001 to prepare for the April meetings, their discussions focused on solving controversies associated with the bracketed text. They spent less time discussing other decisions required in April, or resolving issues, such as whether more business facilitation measures are practical. In addition, the vice-ministers could not schedule an anticipated follow-up planning meeting. As a result, FTAA countries will be forced to tackle their ambitious agenda for April in a very short time frame. Only 4 days of official meetings have been scheduled, and these immediately precede the Ministerial. Expected protests by opponents of the FTAA may complicate the situation further. The United States has faced unique constraints in preparing for the Buenos Aires Ministerial. The new U.S. administration has yet to decide its position on key issues, such as whether to support a 2003 deadline for completing FTAA negotiations, and public release of the bracketed text. In addition, Robert Zoellick, the chief U.S. trade negotiator, was sworn in as U.S. 
Trade Representative on February 7, just 2 months before the Buenos Aires Ministerial. While significant work remains to be completed for the April meetings, there has been considerable high-level political activity that might improve the chance for a favorable outcome. The new U.S. administration has initiated a number of high-level contacts between President Bush and key hemispheric leaders in advance of the Quebec Summit of the Americas. Already, President Bush has met Mexican President Vicente Fox, Canadian Prime Minister Chrétien, Colombian President Pastrana, and Salvadoran President Flores. Meetings with Brazilian President Cardoso, Chilean President Lagos, and Argentine President de la Rua have been announced. Among other things, the meetings are intended to establish personal rapport and to reassure these leaders of President Bush’s intention to make the region a priority and to conclude the FTAA. The President’s Trade Policy Agenda released in early March underlines these ideas, as well as the President’s seriousness about securing trade promotion authority from Congress to implement an FTAA. These statements, and others like them, may help the administration establish political support for the decisions required to start the next phase of FTAA negotiations on a solid footing. We obtained oral comments on a draft of this report from the U.S. Trade Representative’s Director for the Free Trade Area of the Americas. USTR generally agreed with the information in the report and provided technical comments that we incorporated as appropriate. 
To meet our objectives of (1) discussing what progress has been made in the free trade negotiations to date, (2) identifying the challenges that must be overcome to complete a free trade agreement, and (3) discussing the importance of the April meetings of trade ministers and national leaders of the participating countries, we reviewed FTAA and executive branch documents and related economic literature, and held discussions with lead U.S. government negotiators for each of the FTAA negotiating groups. We also had discussions with foreign government officials representing the negotiating blocs, and with officials from the Inter-American Development Bank, the Organization of American States, and the United Nations Economic Commission for Latin America and the Caribbean, each of which provides technical assistance to the negotiations. In addition, we met with experts on the FTAA and international trade negotiations, and business and civil society groups that have expressed interest in the FTAA process. This report is also based on our past and ongoing work on Western Hemisphere trade liberalization. We conducted our work from September 2000 through March 2001 in accordance with generally accepted government auditing standards. As you requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to appropriate congressional Committees and to the Honorable Robert Zoellick, U.S. Trade Representative. Copies will be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix II. The 34 FTAA countries include some of the U.S.’s largest trading partners and some of its smallest. Many of them are members of regional trade groups or free trade agreements. 
Figure 4 shows the countries of the FTAA region and some of the regional trade groups. Table 1 shows the U.S. trade and investment relationship with the 33 other FTAA countries, organized by regional trade groups. In addition to the persons named above, Tim Wedding, Jody Woods, Ernie Jackson, and Rona Mendelsohn made key contributions to this report. | The negotiations to establish a Free Trade Area of the Americas (FTAA), which would eliminate tariffs and create common trade and investment rules within the 34 democratic nations of the Western Hemisphere, are among the most significant ongoing multilateral trade negotiations for the United States. Two meetings held in April 2001 offer opportunities to inject momentum and set an ambitious pace for the next, more difficult phase of the negotiations. Because of the significance of the FTAA initiative, this report (1) discusses the progress that has been made in the free trade negotiations so far, (2) identifies the challenges that must be overcome to complete a free trade agreement, and (3) discusses the importance of the April meetings of trade ministers and national leaders of participating countries. GAO found that the FTAA negotiations have met the goals and deadlines set by trade ministers. Significant challenges remain, including market access concessions and doubts that key Western Hemisphere leaders will have the political will to embrace the agreement. The April meetings of trade ministers will serve as a transition from the initial proposal phase to the substantive negotiations phase. |
In response to concerns about the crash of TWA Flight 800, the White House Commission on Aviation Safety and Security was established in July 1996 to look, first, at the changing security threat and how the United States can address it and then to examine safety and air traffic control issues in the aviation industry and how the government should address them. In September 1996, the Commission issued its initial report, which contained 20 recommendations to enhance the security of air travel. The Federal Aviation Reauthorization Act of 1996, enacted in October 1996, mandated some actions similar to the Commission’s recommendations. In February 1997, the Commission issued its final report, which contained a total of 57 recommendations that focused on improving aviation safety, making air traffic control safer and more efficient, improving security for travelers, and responding to aviation disasters. Because our report deals exclusively with aviation security, it discusses only the 31 recommendations for improving aviation security contained in the Commission’s report and related legislative mandates authorized under the Reauthorization Act. (See app. I for lists of the aviation security recommendations in the Commission’s final report and the aviation security mandates in the 1996 Reauthorization Act.) As agreed with your office, we selected 8 of the 31 aviation security recommendations for detailed review—3 of which were scheduled for completion in fiscal year 1997 and 5 of which are similar to mandates in the Reauthorization Act. FAA is responsible for implementing 21 of the 31 aviation security recommendations in the Commission’s report. Eight other federal agencies have the lead role for 1 or more of the remaining 10 recommendations. FAA is responsible for implementing most of the aviation security mandates in the Reauthorization Act. Each of the agencies responsible for implementing the Commission’s recommendations has established its own tracking method. 
These tracking methods vary, as do the agencies’ updating practices. No single agency is responsible for monitoring all of the agencies’ implementation efforts and ensuring coordination of interagency issues. FAA’s Office of Aviation Policy and Plans developed a computerized system to track and monitor the status of all 57 recommendations contained in the Commission’s report. This system is incorporated into the agency’s local area network computer system and, for each recommendation, was designed to provide data on the responsible lead agency, the subtasks needed to implement the recommendation, the target dates, and the status of the recommendation’s implementation. Currently, FAA tracks only the recommendations for which it has the lead responsibility. In the fall of 1997, FAA stopped tracking recommendations for which other agencies have the lead responsibility because, according to an FAA official, it did not have control over the work of other agencies. The ability to change data in the tracking system is controlled by the Office of Aviation Policy and Plans, Planning Analysis Division; however, anyone within FAA who has access to the agency’s local area network computer system can view the data. An FAA official told us that FAA does not validate the data after they have been entered into the system. The manager who oversees the system stated that his office attempts to update information at least once a month. Our review of the data indicates that not all recommendations for which FAA has the lead responsibility are updated monthly. FAA’s Office of the Chief Counsel is responsible for tracking and monitoring the progress of all legislative requirements. Because the Reauthorization Act contained many mandates, the Office of the Chief Counsel established a computerized database; however, this database is not linked to FAA’s local area network computer system. 
This system is accessible only to the Office of the Chief Counsel, and data are entered by a legislative analyst responsible for tracking the mandates’ implementation. The mandates cover a range of issues from acquisition management to regulatory reform and, in some cases, require specific reports to the Congress. According to the responsible legislative analyst, she updates the system approximately every 1 to 2 months and the Office takes appropriate actions. For example, if a mandated report to the Congress is late on the basis of information provided by the relevant program office, the Office will follow up to make sure that the Congress is informed of the delay. Agencies other than FAA are responsible for implementing 10 of the 31 aviation security recommendations contained in the Commission’s final report. The following agencies are responsible for one recommendation each: the Department of Defense, the Department of State, the Federal Bureau of Investigation, the National Transportation Safety Board, and the Department of Transportation’s (DOT) Office of the Secretary. The U.S. Postal Service is responsible for one recommendation and shares responsibility with the U.S. Customs Service (Customs) for another. The Bureau of Alcohol, Tobacco and Firearms (ATF) is responsible for the remaining three recommendations. Because most of these agencies are responsible for only one recommendation, they have not established a computerized tracking system as FAA has done. Instead, they track and monitor their progress while performing routine activities, obtaining and reporting information when requested by officials in their own agency, FAA, or DOT’s Office of the Secretary. For example, FBI officials use reporting mechanisms maintained by the agency’s budget office to track the deployment of additional staff for work on aviation security, as called for in the Commission’s recommendations. 
Similarly, Department of Defense officials told us that they use meetings and existing internal reporting systems to track the activities of the four working groups that are implementing the Department’s recommendation. While many agencies are involved in implementing the Commission’s recommendations, no single entity has overall responsibility for managing their implementation and coordinating issues between agencies. In March 1997, after the Commission issued its final report, DOT convened an interagency meeting to discuss lead and supporting roles related to the Commission’s recommendations. The agencies represented at that meeting included DOT’s Office of the Secretary, FAA, and the other agencies that might be responsible for implementation. No agency assumed responsibility for following up to ascertain whether agencies were fulfilling the lead and supporting responsibilities discussed during the interagency meeting. Using the tracking system it has developed, FAA prepares reports as requested by FAA’s and DOT’s management to summarize its progress in implementing the Commission’s recommendations. As of March 9, 1998, managers from FAA and DOT’s Office of the Secretary had reviewed the agency’s progress at 10 meetings with the FAA Administrator and other top-level FAA and DOT officials. In addition, as of that date, DOT’s Office of Intelligence and Security had prepared four quarterly reports for the National Security Council’s review. The reports had been requested by the National Security Council as part of its counter-terrorism responsibilities. A National Security Council official said that the Council is more interested in learning of any security weaknesses than in tracking the status of the recommendations’ implementation. For example, this official said that the Council has been concerned because no funds were provided for explosives detection equipment in FAA’s fiscal year 1998 budget. 
In addition to these reports, the Secretary of Transportation is directed in the Commission’s report to prepare an annual report on the status of implementing all 57 of the recommendations contained in the report. According to the first annual report, which was issued to the Vice President on February 12, 1998, 8 of the 31 recommendations dealing with aviation security had been completed. Furthermore, the annual report noted that FAA and the other federal agencies are continuing to make progress on most of the remaining 23 recommendations. Although a number of the recommendations discussed in the annual report are similar to mandates contained in the Reauthorization Act, the report does not jointly discuss the recommendations and the mandates or the related progress associated with both. One issue we identified, arising in the absence of oversight and coordination of interagency issues, is a disagreement between Customs and the Postal Service over one Commission recommendation (see app. I, table I.1, recommendation 3.4) and its implementation through a legislative proposal that would allow Customs to search, without a warrant, domestic mail handled by the U.S. Postal Service that is destined for international locations. Currently, Customs has the authority to search without a warrant for explosives and other threat objects on inbound international mail and cargo. Customs has led this recommendation’s implementation, stating that it has worked with the Departments of the Treasury and Justice. However, Customs has not worked with the Postal Service, the other agency designated at the interagency meeting as the co-lead for implementing this recommendation. To implement this recommendation, Customs has proposed a provision in the administration’s draft International Crime Control Act to give it authority to search outbound international mail without a warrant. 
Customs has met with the Office of Management and Budget (OMB) to coordinate the proposal through the legislative process. As of April 8, 1998, the proposal was being reviewed by OMB. This proposed authority would parallel Customs’ current law enforcement authority, which generally allows Customs to search persons and property entering or leaving the country. Customs officials also told us they already have authority to examine private companies’ inbound and outbound express mail but do not have authority to search U.S. Postal Service’s outbound international mail. The Postal Service has long opposed such authority for searching outbound international mail as contrary to its authority to protect the mail from unwarranted searches. Postal Service officials said that Customs neither consulted with them nor provided a copy of the legislative proposal for their review. Customs officials confirmed that they had not consulted with the Postal Service on this proposal but stated that they had consulted a number of times over the last 12 years on similar proposals. Postal officials stated that there is no consensus between Customs and the Postal Service on this issue. Several years ago, the Postal Service established an aviation mail security program for domestic and outbound international mail in cooperation with FAA. This program is deployed systemwide in postal facilities. Postal Service officials told us that Customs’ proposal would duplicate their efforts to screen outbound international mail and could delay the delivery of time-sensitive mail. As to the possible delays, Customs officials told us that their examination of inbound U.S. mail and private companies’ outbound express mail has caused little or no delay. According to a Customs official, there are no reasons why the mail would be delayed if the proposal were enacted. 
However, Postal Service officials stated that the inbound mail program, from their perspective, has not been problem-free and that the volume of outbound mail is significantly higher than that of inbound mail, which could cause delays. According to a Customs official, the agency and the Postal Service have disagreed over Customs’ authority to examine outbound international mail for at least 12 years. Neither the Customs nor the Postal Service officials we contacted knew whether there was a focal point for coordinating the Commission’s recommendations. A Customs official said that no one has attempted to mediate the opposing positions of Customs and the Postal Service and that legislative action appears to be the only way to resolve the dispute. However, the Postal Service believes that there is no demonstrated aviation security need for warrantless outbound search authority and that, therefore, the legislation is neither needed nor appropriate in connection with this issue. Postal Service officials told us that they have expressed their views to OMB to persuade it not to proceed with the legislation. According to Postal Service officials, OMB told them that it would take their comments and views under advisement.

FAA planned to implement three of the Commission’s aviation security recommendations in fiscal year 1997; it fully implemented one of them. The agency has substantially implemented the second recommendation; progress is being made on the third, although it has fallen 15 months behind schedule. The recommendations FAA targeted for completion in fiscal year 1997 built on existing programs and airport relationships and did not require the development and deployment of complex technologies. The purpose of these recommendations was to quickly enhance the capabilities of airport and air carrier personnel in identifying and addressing risks. Table 1 identifies the three recommendations that FAA expected to complete in fiscal year 1997.
FAA implemented the first recommendation by providing security clearances and granting access to classified information to airport and airline officials. However, a majority of the airport officials we met with questioned the need for clearances, since they believed that the classified information they received was not useful and timely. FAA officials stated that they have attempted to release as much information as possible in unclassified form by obtaining declassified versions of originally classified information from the originating intelligence agencies, so that the information could be shared with all airport and air carrier security officials, whether or not they held security clearances.

FAA has substantially completed the recommendation to ensure that passengers are positively identified when boarding an aircraft; however, delays have occurred that prevent FAA from considering this recommendation as completed (see fig. 1). Over the past several years, FAA has issued a series of security directives designed to positively identify ticketed passengers and subject them to security procedures. For example, one of the directives establishes procedures for positively identifying passengers by requiring them to provide a valid form of identification at the check-in counter. However, FAA has not yet incorporated these directives and other procedures, as it had planned, into the air carriers’ security program—the Air Carrier Standard Security Program (ACSSP). Amending the ACSSP is taking longer than expected because FAA received many significant comments from air carriers on proposed changes and had to obtain a second round of comments on a revised proposal. FAA also decided to wait until it had addressed some technological issues, such as computer-assisted passenger screening procedures, before completing the ACSSP. FAA has fallen over 10 months behind its initial completion date of July 31, 1997, and now plans to complete this recommendation by May 1998.
[Figure 1 (timeline in months) not reproduced.]

Similarly, although consortia—partnerships among airport and air carrier officials and law enforcement agencies to review security issues—were formed at 41 airports in 1996, shortly after the Commission issued its initial report, FAA has not expanded the voluntary consortia program called for in the Commission’s final report. FAA cannot issue new guidance for consortia until it has determined whether airports and air carriers will be subject to penalties when consortia self-disclose security violations. Air carrier and airport officials told us that they do not want to disclose security violations unless they have some assurance that they will not be penalized. FAA’s Office of the Chief Counsel is still examining this issue and expects to issue a ruling in April 1998. FAA plans to issue the guidance shortly after the legal ruling. As figure 1 shows, FAA postponed issuing its guidance on consortia for 12 months and extended its date for fully implementing the recommendation—to establish consortia at another 200 airports. The new completion date, December 1998, is 15 months later than originally planned. Once FAA has implemented these two recommendations, air carriers will need to follow the revised ACSSP, and airports will need to decide if they want to establish consortia. (See app. II for a more detailed discussion of each of these recommendations.)

FAA is making progress on the five recommendations we reviewed that were both recommended by the Commission and mandated by the Reauthorization Act but has encountered delays of up to 12 months. Table 2 lists the five recommendations we reviewed. While these recommendations, such as developing computer-assisted passenger profiling and automated passenger-bag match systems, are critical for improving security, their implementation is taking longer than initially planned because they involve new and relatively untested technologies.
In addition, FAA must develop regulations that set forth the requirements for these recommendations. After it has completed the regulations, others must carry out the requirements. Therefore, full implementation cannot occur until airports, air carriers, and screening companies have established programs that meet the new regulatory requirements.

Before the Commission issued its final report, FAA started working with air carrier and airport officials and with private companies to resolve the technological issues underlying the implementation of the five recommendations. These recommendations are interrelated. For example, computer-assisted passenger profiling can identify passengers who should be subjected to additional screening procedures, which could include physical searches of bags, examination of bags by explosives detection equipment, and matching of bags to passengers when they board the aircraft.

According to FAA officials, many of the completion dates were ambitious, given, among other things, the technological complexities associated with these recommendations and the time needed to proceed through the regulatory process. FAA officials told us that a number of the milestones for completing the recommendations were initially established by the Commission, the Secretary of Transportation, or FAA on the basis of their best estimates of the efforts required to implement them. As FAA officials have gained experience, they have revised the milestones to take into account the complexities and time-consuming activities associated with the recommendations. As figure 2 illustrates, FAA extended the completion dates for most recommendations we reviewed. FAA has made progress in implementing the five recommendations we reviewed, but it has not met its target dates because, among other reasons, implementation involved relatively new and untested technologies.
The following briefly discusses the status of each of the five recommendations we reviewed and the actions that FAA and others need to take before they can be fully implemented. (See app. III for more details on the status of these recommendations; implementation issues; and, where applicable, observations we made during field visits to airports.)

On the basis of the Commission’s recommendation for implementing automated passenger profiling, FAA developed a computer-assisted passenger screening (CAPS) system that enables air carriers to more quickly separate passengers into two categories—those who do not require additional security attention and those who do. This automated screening permits the carriers to focus on the small percentage of passengers who may pose a security risk and whose bags should be screened by explosives detection equipment or matched with the boarding passenger.

Northwest Airlines began to develop a CAPS system with funding from FAA in 1994. According to FAA’s original plan, all air carriers would have had a CAPS system in place by December 31, 1997. No air carriers met this implementation date. Northwest Airlines, however, had this system in place the following month and, in February 1998, two other major air carriers implemented the system. Most of the other major carriers are either testing the system or still integrating it into their reservation systems. FAA also needs to issue a regulation governing this system. FAA’s revised completion date for implementation by all but one of the other major carriers is September 1998—9 months past the original implementation date. To facilitate implementation, FAA set aside funds to subsidize air carriers’ costs of integrating CAPS into their reservation systems.

The Congress provided $144.2 million in the Omnibus Consolidated Appropriations Act of 1997 for the purchase of commercially available advanced security screening equipment for checked and carry-on baggage.
FAA planned to deploy 54 certified explosives detection systems to screen checked bags and 489 trace detection devices to screen passengers’ carry-on bags at major airports by December 1997; however, it did not meet this goal. As of March 10, 1998, FAA had deployed 13 certified explosives detection systems and, as of January 9, 1998, 125 trace detection devices. FAA plans to have all 54 certified systems and another 22 noncertified devices for screening checked bags, along with 489 trace detection devices for screening carry-on bags, installed and operational by December 1998. Thus, by the time FAA completes this recommendation, it will be a year behind schedule in achieving the increased security for checked and carry-on bags that these funds supported.

FAA’s deployment of the explosives detection equipment was delayed for a number of reasons. According to FAA officials, they extended the time period to install the equipment because the agency did not receive funding for additional equipment in fiscal year 1998. Also, they said, ongoing or planned construction at certain airports impeded the installation of equipment. In addition, several air carrier officials and an equipment company representative told us that delays occurred because the company installing the equipment to screen checked bags was inexperienced. Some screening staff told us that they were not always prepared to operate the equipment when it was installed.

Before the Commission’s reports were issued, FAA began examining the feasibility of matching bags with passengers to ensure that the baggage of anyone who does not board a plane is removed. FAA completed a pilot program at selected airports in June 1997. The Reauthorization Act required FAA to report on this pilot program to the Congress within 30 days after its completion. FAA planned to send a report on the program’s operational effects to the Congress by July 31, 1997.
FAA also planned to complete an economic analysis of the impact of matching passengers and bags systemwide in September 1997. At the urging of the airline industry, FAA agreed to combine these reports and issue one report by December 31, 1997. FAA advised the Congress of this delay. This report to the Congress is now expected to be issued by June 30, 1998—almost a year later than required by the Reauthorization Act.

According to FAA, some passengers and bags are being matched for domestic flights using a manual profiling system. In addition, during January and February 1998, three air carriers began matching bags to passengers selected for additional security measures through their CAPS system. Several of the air carrier officials we spoke with who had participated in the pilot passenger-bag match program said that they would not be able to match all passengers with their bags for every flight because too many delays would occur. They said that they would not object to a passenger-bag match program based on a CAPS system that would limit the number of passengers and bags to be matched.

FAA has three separate efforts under way to implement the various recommendations involving vulnerability assessments. First, to conduct vulnerability assessments and develop action plans, as the Commission recommended, FAA is developing a standardized model for conducting airport vulnerability assessments. FAA is working with several companies that are using different vulnerability assessment models at 14 major airports. These assessments began in January 1998 and are to be completed by August 1998. FAA has established a panel to review the results and select the best model for assessing a facility’s vulnerabilities.
FAA plans to make this model available to those who have responsibility for performing assessments, including FAA inspectors, airports, air carriers, and consortia, to meet the various requirements for conducting assessments and identifying vulnerabilities at individual airports. FAA plans to have this model available in March 1999. Although some delays have occurred in starting these assessments, they have not been significant. The delays occurred in the course of soliciting and awarding contracts to six firms and the Department of the Navy, which will conduct the assessments. FAA has requested $2 million in its fiscal year 1999 budget to perform additional assessments at other airports.

Second, to address the requirement for joint threat and vulnerability assessments under the Reauthorization Act, FAA and FBI conducted their first assessment in December 1997 and began conducting one to two each month starting in February 1998. These assessments differ from the above effort to develop a model because the results of the joint assessments will be used for comparing threats and vulnerabilities at different airports. By having both threat and vulnerability information, FAA and FBI should be able to determine which airports and areas of airports present the highest risks. Initially, FAA selected a pool of 72 airports, which account for 92 percent of commercial travelers in the United States, as candidates for the joint assessments. In January 1998, FAA and FBI agreed to a schedule for assessing 31 high-risk candidates from the pool of 72 airports by the end of calendar year 1999. Under the Reauthorization Act, the initial assessments are to be completed by October 9, 1999. According to the schedule for the joint vulnerability assessments, FAA and FBI plan to complete their reviews at 28 of the 31 airports by this date.
However, an FAA official acknowledged that as the agencies gain experience in conducting these assessments, they may be able to conduct more per month than scheduled.

Third, the Reauthorization Act mandates that FAA require airports and air carriers to conduct periodic vulnerability assessments. FAA plans to implement this requirement through a security program change rather than through the rulemaking process. Airports and air carriers will have to incorporate this requirement into their individual security programs. However, before implementing this change, FAA said, it intends to make the standardized model it is currently developing available to both airports and air carriers for use in conducting these assessments. According to the Director of the Office of Civil Aviation Security Policy and Planning, FAA expects the model to be available in March 1999 and the required implementation of the assessments to begin around mid-1999.

Certifying the companies that air carriers contract with to provide security at airport security checkpoints would ensure that these companies meet established standards and consistent qualifications. FAA issued an Advance Notice of Proposed Rulemaking in March 1997 for certifying screening companies and expected to complete the final regulation in March 1999, well ahead of its original target date of December 1999; however, FAA later changed this date to March 2000 to allow additional time for developing performance standards based on screener performance data. Several screening company officials we spoke with said that certification was a good idea; others had no comment.

Improving the training and testing of people hired by these companies to screen passengers’ baggage at airport security checkpoints would also improve aviation security.
Currently, the people who are hired to screen baggage attend a standardized classroom training program, but FAA believes that the use of a computerized, self-paced training program would have benefits. FAA began developing such a computerized training and testing system, called the Screener Proficiency Evaluation and Reporting System (SPEARS), well before the Commission issued its initial report and the Reauthorization Act was enacted. As of February 1998, FAA had deployed computer-based training systems for personnel who use X-ray machines for screening carry-on bags at 17 major airports. Deployment is planned for two additional major airports by May 1998. FAA has also awarded a contract to deploy these systems at another 60 airports. As of March 11, 1998, FAA had decided to deploy only 15 of the 60 training systems because it lacked necessary funding. If funds are available, FAA plans to deploy the other 45 systems by the end of fiscal year 1998 or early fiscal year 1999. The screening companies we spoke with responded favorably to the computer-based training program.

A second computer-based training program, for the only certified explosives detection system used to screen checked bags, will not be deployed until FAA validates the program, the company that developed it reaches a licensing agreement with the manufacturer of the certified system, and funding becomes available.

Another computerized system, the Threat Image Projection system, also known as TIP, which is used to test screeners’ effectiveness, is in the process of being deployed. FAA began deploying this testing system during the week of March 23, 1998, for use by the certified explosives detection systems that are currently in place. FAA also plans to deploy 284 of these testing systems for use with X-ray devices used for screening carry-on bags at major airports starting in April 1998.
Data from these systems will be used to develop performance standards that FAA plans to incorporate into the regulation for certifying screening companies.

FAA or others need to take additional actions before these five recommendations can be completed. FAA is currently evaluating new security technologies. It has also begun the rulemaking process for several recommendations. After FAA completes the evaluations and rulemaking, air carriers, airports, and screening companies will need to implement the requirements for programs, such as passenger-bag match and the certification of screening companies. Therefore, full implementation of the recommendations should not be expected immediately after FAA completes its work. (App. III contains a detailed description of the implementation issues associated with each recommendation.)

FAA needs to evaluate several pilot programs that are associated with specific recommendations. For example, the deployment of explosives detection equipment involves several evaluations. First, FAA needs to learn more about how well the certified equipment works in the field, as well as what issues airports confront in installing the equipment, so that it can decide on future deployment strategies for screening checked baggage. FAA’s recently completed evaluation of trace detection equipment for carry-on baggage will guide FAA’s purchase of the remaining pieces of equipment. Finally, the effective use of equipment in an airport environment depends on the effectiveness of the personnel using it. Currently, two different methods are being used to train personnel who screen baggage at security checkpoints: the traditional classroom training and the new computer-based training program. FAA plans to compare the results of the computer-based training, a pilot program, with the currently used classroom training program.
FAA must also validate the computer-based training program for the certified explosives detection system before the program can be pilot-tested. FAA must analyze the results of the various models being used by contractors to assess the vulnerability of airports. FAA plans to complete this analysis, which will include a review by an expert panel, by the end of calendar year 1998. As of March 9, 1998, FAA expected the model to be available for use by March 1999. FAA will also need to complete its economic analysis of matching passengers and bags before it can issue the required report to the Congress.

The Commission envisioned a federal investment of approximately $100 million annually to enhance aviation security. The President’s 1999 budget requested $100 million to continue the implementation of explosives detection devices as recommended by the Commission. Several air carrier and screening company officials have expressed concerns about who will pay to maintain the equipment and to upgrade the software as improvements are made.

FAA needs to complete two rulemakings, now scheduled for completion in December 1998 and March 2000. Some of the rulemaking depends on information obtained in the evaluations. Rulemaking is a multistep process that results in the issuance of final regulations for implementing programs. The rulemaking process may begin with an Advance Notice of Proposed Rulemaking. This notice, which FAA has issued as a first step in developing a regulation for certifying screening companies, solicits information from affected parties, such as air carriers, airports, and screening companies. Next, FAA must analyze this information and use it to develop a proposed regulation (called a Notice of Proposed Rulemaking), which it then publishes for comment. On the basis of the comments it receives, FAA then revises the proposed regulation, obtains clearance from OMB, and issues the regulation.
FAA must issue a regulation within 16 months of the final day of the public comment period on a Notice of Proposed Rulemaking. If the process includes an Advance Notice of Proposed Rulemaking, FAA must issue a final rule within 24 months of when the Notice of Proposed Rulemaking is published. The entire process, including the drafting of the notice—whether it includes an advance notice or a notice—can take several years for complex issues.

Regulations are planned for three recommendations—the automated passenger profiling and the automated passenger-bag match, both of which are being addressed under the same regulation, and the screening company certification and screener training. FAA has changed the completion date for issuing the final regulation for certifying screening companies from March 1999 to March 2000. According to FAA officials, they need the extra time to gather data from the TIP systems to develop and incorporate standards for screeners’ performance into the final regulation. In addition, the regulatory process will take time, since screening companies have not previously been regulated by the federal government and screening company representatives have expressed an interest in how these regulations will affect their operations. Some air carriers have also expressed concerns about how a regulation on matching bags and passengers might be structured because its implementation could delay flights. Figure 3 shows FAA’s progress in completing the rulemaking process for these recommendations.

Although air carriers, airports, and screening companies are taking some steps to implement the recommendations, full implementation will not occur until after FAA has issued various regulations. For example, air carriers and their reservation companies will have to develop and implement a CAPS system and a passenger-bag match program based on an automated passenger profiling system in accordance with FAA’s regulation.
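The statutory time limits on rulemaking described above can be illustrated with a small sketch. This is illustrative only; the helper function and all dates below are hypothetical examples, not figures drawn from the report or from any FAA source.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of calendar months."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))  # clamp the day so the date stays valid

def final_rule_deadline(comment_close: date, nprm_published: date,
                        used_anprm: bool) -> date:
    """Latest allowable final-rule date under the limits stated above."""
    if used_anprm:
        # With an Advance Notice: final rule due 24 months after the NPRM is published.
        return add_months(nprm_published, 24)
    # Without an Advance Notice: final rule due 16 months after comments close.
    return add_months(comment_close, 16)

# Hypothetical example: NPRM published March 1, 1998; comments close June 1, 1998.
print(final_rule_deadline(date(1998, 6, 1), date(1998, 3, 1), used_anprm=False))  # 1999-10-01
print(final_rule_deadline(date(1998, 6, 1), date(1998, 3, 1), used_anprm=True))   # 2000-03-01
```

As the example shows, using an Advance Notice can extend the latest permissible completion date, which is consistent with the report's observation that the full process can take several years for complex issues.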
FAA plans to issue the regulation by December 1998. As discussed earlier, some air carriers have already voluntarily implemented both of these actions, and others expect to do so before the regulation is issued. Screening companies will have to apply for certification and meet various requirements after FAA issues its regulation. Thus, the recommendations will not be fully implemented until some time after FAA completes its actions.

Each of the agencies responsible for implementing the Commission’s recommendations has established its own tracking methods. This decentralized approach has generally been adequate to track and monitor the Commission’s recommendations. Although the Office of the Secretary of Transportation provides quarterly reports to the National Security Council and annual reports to the Office of the Vice President on the implementation of all 57 of the Commission’s recommendations, no single federal agency is responsible for tracking, monitoring, and coordinating the activities associated with implementing the recommendations. Consequently, issues that arise between agencies may go unresolved. For one such recommendation—Customs’ authority to search outbound international mail without a warrant—Customs is proceeding by developing legislation to secure this authority. However, the Postal Service strongly opposes such authority being granted to Customs.

The Reauthorization Act requires specific reports, such as the report to the Congress that was due 30 days after the completion of the pilot program for passenger-bag match. The act does not require a comprehensive report—comparable to the Secretary of Transportation’s annual report required by the Commission—on FAA’s progress in implementing the act’s aviation security mandates. Because the Congress enacted these mandates and provides funds for implementing both the mandates and some of the Commission’s recommendations, it has an interest in FAA’s progress.
If the scope of the annual report that the Office of the Secretary of Transportation currently provides to the Office of the Vice President were broadened to include information on FAA’s progress in implementing the Reauthorization Act’s mandates, that expanded report could provide the Congress with additional information for budgetary and programmatic oversight.

FAA is making progress in implementing the eight recommendations we reviewed but has encountered some delays and extended some completion dates. Given that these recommendations involve new technologies, require FAA to follow time-consuming rulemaking processes, and require the aviation industry to take action, further delays are possible. To have relevant information for budgetary and programmatic oversight, the Congress may wish to require the Secretary of Transportation to provide it with an annual report that combines both the federal agencies’ progress in implementing the Commission’s recommendations, as contained in the Secretary of Transportation’s annual report, and FAA’s progress in implementing the Reauthorization Act’s aviation security mandates.

We provided copies of a draft of this report to the Department of Transportation (DOT) and the Federal Aviation Administration (FAA) for their review and comment. We met with DOT and FAA officials, including FAA’s Associate Administrator for Civil Aviation Security, its Director of the Office of Civil Aviation Security Policy and Planning, and its Deputy Director of the Office of Civil Aviation Security Operations, to obtain their comments. DOT and FAA generally agreed with the information in our report and provided technical corrections, which were incorporated into the report where appropriate. However, they disagreed on one issue. During our review, FAA officials told us that FAA did not plan to initiate a rulemaking that would require airports and air carriers to conduct periodic vulnerability assessments as mandated under the Reauthorization Act.
Instead, FAA planned to let consortia, where formed, decide whether they wished to conduct the assessments. However, FAA’s Director of the Office of Civil Aviation Security Policy and Planning stated that FAA will require these assessments by changing airports’ and air carriers’ security programs instead of going through the rulemaking process. As a result, we have deleted our recommendation that FAA either implement the requirement as mandated by the Congress or inform the Congress of the agency’s intention to deviate from the law’s requirements and seek a legislative remedy.

We also provided copies of the draft to the Departments of Defense, State, Treasury, and Justice; the National Transportation Safety Board; the Postal Service; and the National Security Council. The agencies other than the Department of the Treasury and the U.S. Postal Service provided comments on our draft that did not require any changes to our report.

In its comments (see app. V), the Department of the Treasury states that our discussion of the dispute between Customs and the Postal Service should be deleted because it does not address our objective of determining how federal agencies responsible for implementing aviation security recommendations track, monitor, and coordinate their activities. We disagree and believe that the discussion is germane to the issue of coordination because the Postal Service was designated as a co-lead on implementing this recommendation. According to Customs officials, they did not consult with the Postal Service in drafting the proposed legislative section that would grant Customs authority to search outbound international mail. Treasury also states that our report is misleading in suggesting that the “disagreement” between Customs and the Postal Service remains open.
Treasury assumes that because the Commission has made a recommendation, the differences between Customs and the Postal Service are resolved and that, therefore, GAO has inappropriately characterized the status of the recommendation. We disagree. We believe our report characterizes the situation as it currently exists; that is, the disagreement remains open because the Postal Service opposes the recommendation. In its comments on a draft of this report (see app. VI), the Postal Service expressed its continued opposition to recommendation 3.4, which would grant Customs the authority to search outbound international mail, and presented a number of concerns it has about the implementation of this recommendation. We recognize that the Department of the Treasury and the Postal Service have opposing views on the recommendation. Our report does not take a position on these views but acknowledges that disagreement continues to exist. Regardless of the positions taken by either agency, it is our obligation to inform the Congress on issues that could affect its deliberations involving legislative matters that come before it. Where appropriate, we have clarified our report on the basis of the Department of the Treasury's and the Postal Service's comments. In determining how federal agencies track, monitor, and coordinate activities for implementing the Commission's recommendations and the Reauthorization Act's mandates, we obtained and analyzed various status reports generated by FAA. For the other agencies, we obtained and analyzed data supporting their activities. We supplemented these reports and data through discussions with agency officials. On the basis of discussions with your offices, we analyzed the 31 security recommendations and selected 8 for review—3 that were due to be completed in fiscal year 1997 and 5 that are similar to mandates contained in the Reauthorization Act.
To determine the progress made in implementing these recommendations and the issues remaining to be addressed before full implementation can occur, we held discussions with FAA officials at headquarters and in the field. We also held discussions on the same topics with airport, air carrier, and screening company officials at seven airports. (See app. IV for further details on our scope and methodology.) We performed our work from June 1997 through March 1998 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the cognizant congressional committees; the Office of the Vice President; the Secretary of Transportation; the Administrator of the Federal Aviation Administration; the Secretary of Defense; the Secretary of State; the Director of the Federal Bureau of Investigation; the Chairman of the National Transportation Safety Board; the Commissioner of the U.S. Customs Service; the Director of the Bureau of Alcohol, Tobacco and Firearms; and the Postmaster General of the U.S. Postal Service. We will also make copies available to others on request. Please call me at (202) 512-2834 if you or your staff have any questions. Major contributors to this report are listed in appendix VII.

Table I.1: Aviation Security Recommendations Contained in the White House Commission's Report on Aviation Safety and Security, With Designated Lead Agency

The federal government should consider aviation security as a national security issue and provide substantial funding for capital improvements.
FAA should establish federally mandated standards for security enhancements.
The Postal Service should advise customers that all packages weighing over 16 ounces will be subject to examination for explosives and other threat objects in order to move by air.
Current law should be amended to clarify the U.S. Customs Service's authority to search outbound international mail.
The FAA should implement a comprehensive plan to address the threat of explosives and other threat objects in cargo and work with industry to develop new initiatives in this area.
The FAA should establish a security system that will provide a high level of protection for all aviation information systems.
The FAA should work with airlines and airport consortia to ensure that all passengers are positively identified and subjected to security procedures before they board aircraft.
Submit a proposed resolution, through the U.S. Representative, that the International Civil Aviation Organization begin a program to verify and improve compliance with international security standards.
Assess the possible use of chemical and biological weapons as tools of terrorism.
The FAA should work with industry to develop a national program to increase the professionalism of the aviation security work force, including screening personnel.
Access to airport controlled areas must be secured and the physical security of aircraft must be ensured.
Establish consortia at all commercial airports to implement enhancements to aviation safety and security.
Conduct airport vulnerability assessments and develop action plans.
Require criminal background checks and FBI fingerprint checks for all screeners, and all airport and airline employees with access to secure areas.
Deploy existing technology.
Establish a joint government-industry research and development program.
Establish an interagency task force to assess the potential use of surface-to-air missiles against commercial aircraft.
Significantly expand the use of bomb-sniffing dogs.
Complement technology with automated passenger profiling.
Certify screening companies and improve screener performance.
Aggressively test existing security systems.
Use the Customs Service to enhance security.
Give properly cleared airline and airport security personnel access to the classified information they need to know.
Begin implementation of full bag-passenger match.
Provide more compassionate and effective assistance to families of victims.
Improve passenger manifests.
Significantly increase the number of FBI agents assigned to counterterrorism investigations, to improve intelligence, and to crisis response.
Provide anti-terrorism assistance in the form of airport security training to countries where there are airports served by airlines flying to the U.S.
Resolve outstanding issues relating to explosive taggants and require their use.
Provide regular, comprehensive explosives detection training programs for foreign, federal, state, and local law enforcement, as well as FAA and airline personnel.
Create a central clearinghouse within the government to provide information on explosives crime.

These recommendations were contained in the initial report of the White House Commission on Aviation Safety and Security. The remaining 11 recommendations were added to the Commission's final report dated February 12, 1997.

FAA is to submit a report, including proposed legislation, if necessary, to the Congress, no later than 90 days after the enactment of this Act, on responsibilities and sources of funding for airport security.
FAA is to certify screening companies and to improve training and testing of security screeners through development of uniform training standards.
FAA shall enter into an arrangement with the National Academy of Sciences to assess available weapons and explosives detection technologies and identify the most promising technologies for the improvement of the efficiency and cost-effectiveness of weapons and explosive detection.
FAA shall require criminal history checks for individuals who will be responsible for screening passengers or property, their supervisors, and other individuals who exercise security functions associated with baggage or cargo, as the FAA Administrator determines necessary.
Facilitate the interim deployment of commercially available explosives detection equipment that will enhance aviation security significantly while FAA is in the process of certifying commercially available equipment.
FAA shall provide for the periodic audit of the effectiveness of criminal history record checks.
FAA, DOT, and the intelligence and law enforcement communities shall assist carriers in developing a computer-assisted passenger profiling system.
Provides authority to use Airport Improvement Program funds to enhance and ensure the safety and security of passengers and other persons involved in air travel.
FAA and the FBI shall provide for the establishment of an aviation security liaison in or near cities served by a designated high-risk airport.
FAA and FBI shall carry out joint threat and vulnerability assessments on security at high-risk airports every 3 years or more often if necessary.
If the bag match pilot program is carried out, FAA is required to submit a report on safety, effectiveness, and operational effectiveness to the Congress.
Air carriers and airports will conduct periodic vulnerability assessments of their security systems, and FAA shall perform periodic audits of such assessments.
The Secretary of Transportation shall, no later than 90 days after enactment of this Act, transmit to the Congress a report on any changes recommended and implemented as a result of the Commission's report to enhance and supplement screening and inspection of cargo, mail, and company-shipped materials transported in air commerce.
The following provides additional information on the status of three of the Commission’s recommendations, FAA’s implementation issues, and our observations during field visits to airports. Give properly cleared airline and airport security personnel access to classified information they need to know. FAA considers this recommendation completed. FAA has been providing clearances and classified information to airport officials since June 1994. Although air carriers have had cleared personnel under other clearance programs going back into the 1980s, it was not until the mid-1990s that FAA began providing air carriers with clearances. FAA officials told us that in March 1997, FAA invited airports and air carriers to recommend personnel for clearances. As of March 1998, 234 airport, air carrier, and law enforcement personnel had been granted security clearances under this program with an additional 35 pending. FAA also provides declassified security information to all airport and air carrier personnel whether cleared or not. FAA’s position is to ensure that as much information as possible is given to the industry. FAA considers this recommendation completed with the issuance of the March 1997 invitation. Obtaining clearances is voluntary, and FAA does not have any legal authority to require airports or air carriers to have persons with a clearance. A majority of the airport officials we met with questioned the usefulness and timeliness of any classified information they have received; therefore, some of these officials do not see a need for clearances. Many of the airport managers we met with told us that they have local sources for obtaining security-related information that they believe is more useful than FAA’s; an FAA official told us that local law enforcement officials may provide very useful information and that FAA does not believe that it is the sole source of security-related information. 
Five officials from different airports told us they had applied for clearances but had not been told whether their clearances had been granted. FAA has followed up on these inquiries, and most have been resolved. FAA should work with airline and airport consortia to ensure that all passengers are positively identified and subjected to security procedures before they board aircraft. FAA has used security directives to establish security procedures for passengers. Incorporating those procedures into the Air Carrier Standard Security Program (ACSSP) is taking longer than expected. Many of the procedures to clear passengers have been in place since the early 1990s under security directives issued by FAA. The only action left to complete this recommendation is to incorporate the directives into the ACSSP. On March 28, 1997, FAA issued a proposed change to the ACSSP incorporating these and other security directives and originally planned to complete this process by July 31, 1997. Delays have resulted because FAA received many significant comments on its proposal to incorporate these changes into the ACSSP. FAA revised and reissued the proposal in August 1997 for a second round of comments, extending the comment period to October 1997. FAA extended its completion date to May 1998. FAA needs to analyze the comments received on its proposed amendments to the ACSSP and revise the ACSSP before it can go through the agency's internal review. While completing the change to the ACSSP, FAA plans to consider other ongoing recommendations, such as automated passenger profiling, passenger-bag match, and the use of explosives detection equipment. Others, such as airports and air carriers, will need to implement any changes once FAA completes its process. The ACSSP will need to be further amended when an automated passenger profiling system replaces the manual process.
FAA field and air carrier officials have expressed concern that FAA has operated its security program through security directives instead of having a current ACSSP in place. The aviation industry would like to see the ACSSP revised as quickly as possible because FAA has issued so many security directives over the last several years. FAA field and air carrier officials stated that security directives are not always clear and may lack detailed information for implementation. Establish consortia at all commercial airports to implement enhancements to aviation safety and security. Completion has been delayed while FAA resolves a legal issue and issues guidance on consortia. Under the Commission's initial report, FAA helped establish 41 consortia at major airports in the fall of 1996. These 41 consortia have conducted vulnerability assessments and developed action plans; some airports have addressed their action plans and are awaiting further guidance from FAA. FAA plans to encourage the establishment of consortia at 200 more airports. FAA originally estimated that the airports that wanted to voluntarily establish consortia would do so by September 30, 1997. To encourage the establishment of consortia, FAA planned to issue guidance in May 1997. FAA cannot proceed with guidance until its Office of the Chief Counsel rules on whether airports and air carriers will be subject to penalties when the consortia self-disclose security violations. FAA's Office of the Chief Counsel expects to issue its ruling in April 1998, and FAA plans to issue the guidance shortly thereafter. FAA now estimates that the additional consortia should be established by December 1998. FAA's Office of the Chief Counsel needs to rule on whether airports and air carriers are exempt from penalties when they self-disclose security violations identified through the activities of a consortium.
Once the legal issue is resolved, FAA will need to issue guidelines on the mission, function, activities, and authority of consortia to resolve violations discovered through consortia efforts. FAA does not have any legal basis for requiring airports to establish consortia; participation is voluntary. Airports will have to decide if they want to form consortia after the guidelines are issued. FAA recognizes that persuading airports to form consortia may be difficult because participation is voluntary. Of the seven airports we visited, five had formed consortia in response to the Commission's initial report. At the airports we visited, the consortia's ongoing activities are mixed. Some consortia continue to be active; others are awaiting further instructions from FAA on how to proceed, given that they have performed vulnerability assessments, drafted action plans, and implemented corrective actions. Two others have ceased operation or merged their activities with other monthly meetings dealing with security issues. Some airport and air carrier officials do not see a need for consortia because they believe their meetings duplicate other airport security meetings. One airport official told us that he will not establish a consortium for this reason unless required by law. Air carrier and airport officials are concerned that they may be held liable if they report violations under the consortia. Airports and air carriers are concerned about the lack of direction from FAA on the activities of consortia and how they should proceed since completing the work under the Commission's initial recommendation. Although FAA plans to issue guidance for consortia, an FAA official told us that the agency sees its role as providing support to local consortia.
The following provides additional information on the status of five aviation security recommendations made by the Commission and authorized under the Reauthorization Act, implementation issues, and our field observations during visits to airports. Complement technology with automated passenger profiling. Assist air carriers in developing computer-assisted passenger profiling programs in conjunction with other security measures and technologies. FAA is developing a standardized model for use in conducting vulnerability assessments. FAA has contracted with six private-sector firms and one federal agency to conduct vulnerability assessments using a variety of models. A total of 14 airports will be covered by these assessments. FAA initially planned to start these assessments in November 1997, but delays in awarding the contracts delayed their start until January 1998. FAA estimates that these assessments will be completed by August 1998. On the basis of these assessments, FAA hopes to develop a standardized model for use by FAA inspectors, airports, air carriers, and consortia for their vulnerability assessments. Using the contractors’ assessments, FAA has established a panel to select a best practices model. FAA and FBI are conducting joint threat and vulnerability assessments at 31 selected airports. FAA-FBI planned to develop protocols for conducting joint threat and vulnerability assessments by April 1997 and begin the assessments in June 1997. The protocols were developed by December 1997 and field tested at four airports by mid-March 1998. FAA-FBI plan to conduct joint threat and vulnerability assessments at 31 airports that the FAA and FBI have designated as high-risk candidates. FAA-FBI plan to conduct one to two assessments per month. FAA-FBI are legally required to complete these joint assessments by October 1999. Currently, FAA plans to complete 28 of 31 assessments by October 1999. 
FAA plans to begin amending its security program to require airports and air carriers to conduct vulnerability assessments. FAA plans to recommend that airports and air carriers use the standardized model it is currently developing. FAA expects the model to be available in March 1999 and the implementation of the required assessments to begin around mid-1999. Implementation is dependent on FAA developing a standardized model and amending its security program. Airport officials have expressed concern that too many and possibly duplicative vulnerability assessments have already been done or are being planned. They include (1) the 41 assessments done by consortia under a recommendation in the Commission's initial report, (2) the contractors' assessments to develop a standardized model, (3) the joint FAA-FBI assessments, and (4) the legal requirement for airports and air carriers to conduct assessments. Certify screening companies and improve screeners' performance. Reauthorization Act's mandate: Certify screening companies and improve the training and testing of security screeners through the development of uniform performance standards. FAA has started the rulemaking process to develop regulations for certifying screening companies and plans to complete this recommendation by March 2000. In March 1997, FAA issued an Advance Notice of Proposed Rulemaking soliciting information on certifying screening companies and improving screeners' training. FAA analyzed the comments received and has prepared a Notice of Proposed Rulemaking with specific regulatory proposals. FAA originally estimated that it would complete the rulemaking process by December 1999. This date has changed twice: once to March 1999 and more recently to March 2000. This change, according to FAA officials, is to allow for the inclusion of performance standards for testing screeners. This proposed rule is now undergoing internal review at FAA.
After completing the internal review, FAA will need to issue the Notice of Proposed Rulemaking for comment, currently scheduled for March 1999. These comments will have to be analyzed and incorporated into the final regulation. The regulatory process will take time, since screening companies have not previously been regulated by the federal government. After the regulation is completed, screening companies will have to apply for certification. Screening companies will have to implement programs to comply with the new regulation. Air carriers and screening companies believe that certifying screening companies is needed. FAA is currently pilot-testing several training programs designed to enhance screeners' performance. Efforts to improve screeners' training had already started when the Commission issued its initial report and the Reauthorization Act was enacted. SPEARS (a computer-based program for training screeners who screen baggage) is being deployed and had been installed at 17 airports as of February 1998. Each of the 17 airports has received 12 training units, which are located at a single location within the airport. FAA has contracted to deploy the computer-based training program at another 60 airports; however, FAA has decided to deploy the training program at only 15 of the 60 airports because it lacks the necessary funds. Depending on the availability of funds from its request to reprogram fiscal year 1998 moneys, FAA plans to deploy the training program at the remaining 45 airports by the end of fiscal year 1998 or early fiscal year 1999. The Advance Notice of Proposed Rulemaking for certifying screening companies, issued in March 1997, also solicited input on the methods and curriculum for training screeners.
FAA’s evaluation of another computer-based program that will train screeners to use the only FAA-certified explosives detection system, which screens checked bags, has been postponed until a licensing agreement between the system manufacturer and the program developer has been executed. As of March 10, 1998, FAA had deployed the threat image projection system, called TIP, at four airports for testing. TIP is a computerized system used to test screeners’ effectiveness in identifying explosives and other threat objects. FAA began deploying TIP at other major airports during the week of March 23, 1998, for use by the certified explosives detection systems that are currently in place. FAA also plans to deploy 284 of these testing systems for use with X-ray devices used for screening carry-on bags at major airports starting in April 1998. FAA needs to complete the evaluations of its computer-based training program and the threat image projection program. FAA needs to decide where to place the computer-based training equipment in airports. FAA needs to issue clear guidelines on the various training programs being deployed and on the relationship of the new computer-based training programs to the current classroom-type training program, especially in view of the fact that the “older” classroom training program needs updating. FAA will have to acquire funding; await completion of the licensing agreement between the system manufacturer and the program developer; and complete its validation of the computer-based training program for the certified explosives detection system before the computer-based training program can be deployed. Screening companies have received the computer-based training favorably. Not all the air carriers and screening companies we met with had received FAA’s April 1997 computer-based training program guidance. 
Several screening companies have expressed concern about the lack of clear guidance on using either the computer-based training program or the standardized classroom training program for carry-on bags. Two screening companies refused to send their screeners to the computer-based training location because (1) it is too far from their work location, (2) it takes a considerable amount of time to reach the training site, (3) it is located in another screening company's work area, and (4) it requires a supervisor to go along with the screeners, leaving them short-handed at checkpoints. Most screening companies suggested placing the equipment in several locations to make it easily accessible to everyone; they said this would require more units at each airport. FAA needs to replace SPEARS equipment that was stolen from one airport. To determine how federal agencies track, monitor, and coordinate activities designed to implement the Commission's aviation security recommendations and the Reauthorization Act's mandates, we obtained and analyzed the status reports that FAA's computerized tracking systems generated between May 1997 and February 1998, as well as the quarterly status reports covering all the Commission's aviation security recommendations made to federal agencies, which the Department of Transportation's (DOT) Office of the Secretary compiled and sent to the National Security Council for the same period. We did not independently verify the reliability of FAA's computerized databases for tracking the status of the Commission's recommendations and the act's mandates. However, when appropriate, we did obtain supporting documentation and discuss the accuracy of the data and their related reports with FAA officials on those recommendations we reviewed. We also discussed the procedures for preparing these reports with the responsible offices in FAA and DOT.
We met with officials of the Departments of Defense and State, the FBI, the National Transportation Safety Board, the U.S. Postal Service, the U.S. Customs Service, and the Bureau of Alcohol, Tobacco and Firearms to obtain information on how they track the recommendations for which they are responsible, and we obtained and analyzed data supporting the status of their recommendations. We also met with an official of the National Security Council to determine if it had a role in overseeing actions on all the recommendations. We analyzed the 31 aviation security recommendations to determine which ones FAA expected to complete in fiscal year 1997. We reviewed three of the five recommendations in FAA's tracking system that the reports and other documents targeted for completion in fiscal year 1997. We did not review two other recommendations because one involved an agency other than FAA and the other involved an international security issue. We discussed the status of these recommendations with officials from FAA's policy and operating offices, analyzed related documents, and discussed their status with airport, air carrier, and screening company officials at the seven airports we visited. To determine the progress FAA had made in implementing the key recommendations that were both recommended by the Commission and mandated by the Congress in 1996 and the major issues that needed to be addressed before these recommendations could be fully implemented, we used the requesters' criteria to determine which of the Commission's recommendations and the act's mandates covered the same issues. We identified seven issues in which the recommendations and mandates were substantially similar. We selected five of the seven for review because of their interrelationships and high visibility in improving aviation security. We discussed the status of these recommendations with officials of FAA's policy and operating offices and analyzed related documents.
We also discussed the status of recommendations with airport, air carrier, and screening company officials at the seven airports we visited. We met with headquarters officials of Northwest Airlines to discuss the status of the CAPS system. We selected the seven airports in order to obtain wide coverage of airports' and air carriers' involvement in implementing the recommendations. Five were major airports that had considerable involvement in implementing the recommendations. Visiting these airports enabled us to obtain the views of airport, air carrier, and screening company officials who had experience with implementing the recommendations and to observe the explosives detection and training equipment in place and the operation of that equipment. Two airports were smaller and had no involvement with the recommendations at the time of our field work. Visiting these airports enabled us to obtain the views of airport, air carrier, and screening company officials on recommendations such as obtaining clearances and forming consortia that were voluntary on the airports' and air carriers' part, as well as their views on those recommendations that they would eventually be required to implement. Because of the sensitive nature of aviation security and ongoing efforts at specific airport locations, we are not listing the seven airports we visited.

Major contributors to this report: J. Michael Bollinger, Elizabeth R. Eisenstadt, Barry Kime, and Marnie S. Shaul.
This section provides general information on how federal agencies carry out BSA responsibilities, what their SAR reporting requirements are, the mechanisms they use to monitor suspicious activity, and the law enforcement agencies that use SARs. The Secretary of the Treasury delegated overall authority for enforcement of, and compliance with, BSA and its implementing regulations to the Director of FinCEN. FinCEN's role is to oversee BSA administration. To fulfill this role, FinCEN develops policy and provides guidance to other agencies, analyzes BSA data for trends and patterns, and pursues enforcement actions when warranted. However, FinCEN also relies on other agencies to implement the BSA framework. These agencies' activities include (1) ensuring compliance with BSA requirements to report suspicious activity and certain financial transactions and taking enforcement actions, when necessary; (2) collecting and storing the reported information; and (3) taking enforcement actions or conducting investigations of criminal financial activity. FinCEN relies on other agencies to conduct examinations to determine compliance with BSA and its implementing regulations. The Secretary of the Treasury delegated BSA examination authority for depository institutions to five banking regulators—the Federal Reserve, OCC, OTS, FDIC, and NCUA. The federal regulators examine an institution's policies and procedures for monitoring and detecting suspicious activity as part of their examination programs. Periodic on-site safety and soundness and compliance examinations are conducted to assess an institution's financial condition, policies and procedures, adherence to BSA regulations (for example, filing of SARs and other BSA-related reports), and compliance with other laws and regulations.
These examinations generally are conducted every 12 to 18 months at small-to-midsized depository institutions (such as community banks, midsize banks, savings associations, and credit unions) on the basis of the regulator's rating of the institution's risk. At large complex banking organizations and large banks, federal regulators conduct examinations on a continuous basis in cycles of 12 to 18 months. Banking regulators use SARs when scoping these examinations. Depository institutions file SARs and other BSA reports with FinCEN. Under a long-standing cooperative arrangement with FinCEN, IRS's Enterprise Computing Center–Detroit serves as the central point of collection and storage of these data. The center maintains the infrastructure needed to collect the reports, convert paper and magnetic tape submissions to electronic media, and correct errors in submitted forms through correspondence with filers. IRS investigators and other authorized officials access the data directly through IRS's intranet site in what is known as the Web Currency and Banking Retrieval System (WebCBRS). FinCEN controls non-IRS law enforcement users' access to BSA data in WebCBRS through a portal called Secure Outreach. Federal regulators and FinCEN can bring formal enforcement actions, including CMPs, against institutions for violations of BSA. For instance, federal regulators and FinCEN may assess a CMP against a depository institution for significant BSA violations, including the failure to file SARs and the failure to establish and implement an AML program that conforms to federal regulations as required by BSA. Formal enforcement actions generally are used to address cases involving systemic, repeated noncompliance; failure to respond to supervisory warnings; and other violations. However, most cases of BSA noncompliance are corrected within the examination framework through supervisory actions or letters that document the institution's commitment to take corrective action.
Whereas FinCEN and the regulators can take a variety of civil actions against depository and other financial institutions, DOJ may bring criminal actions against individuals and corporations, including depository and other financial institutions, for money laundering offenses and certain BSA violations. These actions may result in criminal fines, imprisonment, and forfeiture. Institutions and individuals that willfully violate BSA and its implementing regulations, or that structure transactions to evade BSA reporting requirements, are subject to criminal fines, imprisonment, or both. DOJ generally identifies institutions violating BSA regulations through criminal investigations of the institutions' customers. Some criminal cases against depository institutions have resulted in guilty pleas, and others in deferred prosecution agreements contingent on the depository institutions' cooperation and implementation of corrective actions. In each case, the depository institution paid a monetary penalty, was required to forfeit assets, or both. Law enforcement agencies in DOJ and the Department of Homeland Security use SARs in their investigations of money laundering, terrorist financing, and other financial crimes. Entities in DOJ that are involved in efforts to combat money laundering and terrorist financing include FBI; DEA; the Department's Criminal and National Security Divisions; the Bureau of Alcohol, Tobacco, Firearms, and Explosives; the Executive Office for U.S. Attorneys; and U.S. Attorneys' Offices. The Secret Service and ICE (in the Department of Homeland Security) also investigate cases involving money laundering and terrorist activities. IRS-CI uses BSA information to investigate possible cases of money laundering and terrorist financing activities.
Federal and multiagency law enforcement teams, which may include state and local law enforcement representatives, also use SAR data to provide additional information about subjects, such as previously unknown addresses, business and personal associations, and banking activity, during ongoing investigations. Among its provisions, the Annunzio-Wylie Anti-Money Laundering Act (Annunzio-Wylie) amended BSA by authorizing Treasury to require financial institutions to report any suspicious transaction relevant to a possible violation of law. As authorized by Annunzio-Wylie, FinCEN issued a regulation in 1996 requiring banks and other depository institutions to report, using a SAR form, certain suspicious transactions involving possible violations of law or regulation, including money laundering. During the same year, the federal banking regulators issued regulations requiring all depository institutions to report suspected money laundering, as well as other suspicious activities, using the SAR form. In general, depository institutions are required to file a SAR for suspected insider abuse by an employee; known or suspected violations of law involving transactions aggregating $5,000 or more where a suspect can be identified; known or suspected violations of law involving transactions aggregating $25,000 or more regardless of a potential suspect; and potential money laundering or violations of BSA involving transactions aggregating $5,000 or more. The SAR rules require that a SAR be filed no later than 30 calendar days from the date of the initial detection of the suspicious activity, unless no suspect can be identified. If no suspect can be identified, the filing period is extended to 60 days. In addition, banks should report continuing suspicious activity by filing a report at least every 90 days. Depository institutions can file a SAR through the mail or electronically through FinCEN's BSA E-File program.
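The filing triggers and deadlines described above can be sketched as a small decision function. This is an illustrative simplification only; the function names and structure are hypothetical and do not depict any institution's actual compliance logic, and the 90-day continuing-activity reports are not modeled.

```python
from datetime import date, timedelta

def sar_required(amount, suspect_identified, insider_abuse=False):
    """Return True if the thresholds described above call for filing a SAR."""
    if insider_abuse:
        return True                    # insider abuse: no dollar threshold
    if suspect_identified:
        return amount >= 5_000         # suspect identified: $5,000 or more
    return amount >= 25_000            # no suspect identified: $25,000 or more

def filing_deadline(detection_date, suspect_identified):
    """30 calendar days from initial detection; 60 days if no suspect is identified."""
    days = 30 if suspect_identified else 60
    return detection_date + timedelta(days=days)

print(sar_required(6_000, suspect_identified=True))    # True
print(sar_required(6_000, suspect_identified=False))   # False
print(filing_deadline(date(2008, 1, 1), suspect_identified=True))  # 2008-01-31
```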
Depository institutions implement policies, procedures, and systems to monitor for and identify suspicious activity. In addition to following regulations and guidance related to identifying suspicious activities, depository institutions develop monitoring procedures, which typically encompass identification or referrals by the employees who conducted the transactions, manual systems, automated systems, or some combination thereof. Manual monitoring might consist of staff reviewing reports generated by the institution's management information systems. Large depository institutions that operate in many locations or have a relatively large number of high-risk customers generally use automated account-monitoring systems—computer programs developed in-house or purchased from vendors for the purpose of identifying individual transactions, patterns of unusual activity, or deviations from expected activity. In general, these systems capture a wide range of activity (such as deposits, withdrawals, funds transfers, automated clearing house transactions, and automated teller machine transactions) directly from the institution's core data processing system. After identification of unusual activity, depository institution staff conduct additional research to determine whether to file a SAR. (The process is summarized in fig. 1, which also depicts SAR data collection, storage, and access.) The interagency examination manual that the regulators use encourages depository institutions to document SAR decisions. Additionally, banks must retain copies of SARs, including supporting documentation, for 5 years from the date of the report. In addition to filing a timely SAR, an institution must notify an appropriate law enforcement authority, such as IRS-CI or FBI, for situations involving violations that require immediate attention. For calendar years 2000 through 2007, SAR filings almost quadrupled.
Although depository institutions accounted for the majority of SAR filings, other types of institutions also increased the number of their filings. Representatives of depository institutions, federal banking regulators, and law enforcement agencies identified a number of factors that, in their view, collectively contributed to the increase in SAR filings. The most frequently cited were technology (in the form of automated monitoring systems) and the effects of public enforcement actions. Representatives also cited an increased awareness of the risks of terrorist financing and other financial crimes after September 11 and improved knowledge of BSA requirements and issues resulting from regulator and institution guidance and training. FinCEN data show that for calendar years 2000 through 2007, SAR filings by depository institutions increased from approximately 163,000 in 2000 to more than 649,000 in 2007. In 2007, depository institutions filed approximately 52 percent of all SARs. Depository institutions have been subject to SAR-related requirements longer than any other financial services industry, and they filed more SARs every year from 2000 through 2007 than other industries (see table 1). The number of SARs filed by depository institutions also increased faster in some years than in others. Our analysis of FinCEN and banking asset data indicated that from 2004 through 2007, the number of SARs filed varied across depository institutions of different asset sizes (see fig. 2) and the variations occurred at different points in time. The largest yearly increase in the number of SARs filed by very large banks and thrifts (those with total assets of $50 billion or more) occurred from 2004 to 2005, whereas the greatest increase in the number of SARs filed by small credit unions (those with less than $10 million in total assets) occurred from 2005 to 2006.
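As a quick arithmetic check on the approximate figures cited above, the growth in depository institution filings works out to just under a fourfold increase:

```python
# Growth in depository institution SAR filings, 2000-2007, using the
# approximate figures cited in the text (163,000 and 649,000).
filings_2000 = 163_000
filings_2007 = 649_000
growth_factor = filings_2007 / filings_2000
print(round(growth_factor, 2))  # 3.98, i.e., nearly quadrupled
```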
In 2007, the 31 very large banks and thrifts accounted for almost half (about 44 percent) of SARs filed by depository institutions, although such institutions represented less than 0.5 percent of depository institutions (see fig. 3). In addition, banks and thrifts with total assets from $1 billion up to $50 billion filed more than 30 percent of SARs during the same period. Credit unions of all asset sizes filed less than 10 percent of all SARs filed by depository institutions, despite constituting nearly 35 percent of all depository institutions. Representatives from depository institutions, federal banking regulators, and law enforcement agencies identified a number of factors that, in their view, collectively contributed to the increases in SAR filings by depository institutions from 2000 through 2007. Because of the subjective nature of these factors, the relative influence of individual factors on SAR filing increases cannot be determined. One of the most frequently identified reasons for the increases was the implementation of automated monitoring systems at depository institutions. According to most users of such systems at depository institutions and federal regulator representatives, these systems are capable of identifying significantly more unusual transactions than could be identified manually by institution staff. For example, FinCEN representatives said most institutions have adopted systems that are capable of identifying possible structuring activity—currency transactions carried out in a manner that would avoid the $10,000 threshold that would trigger mandatory currency transaction reporting by depository institutions. Representatives from OCC noted that more sophisticated systems at larger institutions also are capable of incorporating demographic information about the customers and their transaction histories into system alerts of potentially suspicious activity. 
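The structuring pattern described above, currency transactions kept just under the $10,000 reporting threshold, can be illustrated with a minimal screen. The near-threshold band, minimum count, and record layout here are invented for illustration; the automated monitoring systems the text describes are far more sophisticated.

```python
from collections import defaultdict

CTR_THRESHOLD = 10_000   # currency transaction reporting threshold
NEAR_FRACTION = 0.9      # "just under": within 10% of the threshold (assumption)

def flag_possible_structuring(transactions, min_count=3):
    """transactions: iterable of (customer_id, amount) within one review window.

    Returns customers with at least min_count cash transactions that fall
    individually below, but near, the reporting threshold.
    """
    near_threshold = defaultdict(list)
    for customer, amount in transactions:
        if NEAR_FRACTION * CTR_THRESHOLD <= amount < CTR_THRESHOLD:
            near_threshold[customer].append(amount)
    return {c: amts for c, amts in near_threshold.items() if len(amts) >= min_count}

txns = [("A", 9_500), ("A", 9_800), ("A", 9_200), ("B", 12_000), ("B", 9_900)]
print(flag_possible_structuring(txns))  # {'A': [9500, 9800, 9200]}
```

Customer A is flagged because three deposits sit just under the threshold; customer B's single near-threshold deposit does not meet the (assumed) minimum count.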
Depository institution staff use the information in the alerts to assist in their investigations and decide whether to file a SAR. Representatives from various federal agencies and depository institutions we interviewed said that highly publicized enforcement actions taken by the federal banking regulators and FinCEN, and criminal penalties imposed by DOJ for systemic BSA noncompliance—some of which included significant SAR failures—also have contributed to the increases in SAR filings. Specifically, they noted that in 2004 FinCEN and OCC concurrently assessed $25 million in CMPs against Riggs Bank for significant and willful BSA violations. In 2005, DOJ announced that Riggs Bank pled guilty to criminal violations of BSA involving repeated and systemic SAR-related failures. Similarly, representatives noted the 2004 $40 million forfeiture and deferred prosecution agreement into which DOJ entered with AmSouth Bank for SAR failures, and the concurrent assessment by FinCEN and the Federal Reserve of a $10 million CMP against AmSouth Bank to address significant BSA reporting failures and serious weaknesses in BSA compliance policies and procedures. Many of our depository institution interviewees said that the DOJ action against AmSouth Bank and other actions raised concerns in the banking industry that institutions would be targeted routinely for criminal investigation and prosecution for failure to properly implement BSA requirements, such as the failure to file a SAR. However, in past work, we noted that DOJ pursued investigations against a limited number of depository institutions. DOJ officials said that investigations of depository institutions for criminal violations of BSA generally have not involved negligence in reporting a limited number of suspicious transactions.
Furthermore, DOJ officials said that depository institutions that have been cited for "one-off" BSA violations generally would not face law enforcement investigation or charges of criminal violation of BSA if they otherwise had effective BSA compliance programs. Most representatives from depository institutions of varying asset sizes we interviewed said that SARs filed to avoid potential criticism during examinations, referred to as "defensive" filings, also contributed to the increases in SAR filings. Although representatives from most institutions that filed relatively few SARs said that they sometimes filed defensive SARs, representatives from some institutions that filed higher numbers of SARs said their institutions generally did not. We asked Federal Reserve, FDIC, and NCUA officials whether defensive filing was occurring, and they characterized the available information as anecdotal. Additionally, officials at FinCEN and OCC said their agencies separately conducted analyses of the practice, and those analyses indicated little evidence of defensive filing. The SAR guidance in the interagency examination manual that regulators use states that the decision to file a SAR is inherently subjective and directs examiners to focus on whether the institution has an effective SAR decisionmaking process, rather than on individual SAR filing decisions. According to the manual, in those instances where the institution has an established SAR decisionmaking process; has followed existing policies, procedures, and processes; and has decided not to file a SAR, examiners generally should not criticize the institution for not filing a SAR. The federal banking regulators and FinCEN characterized the issue as less frequently discussed within the banking industry now than earlier in the decade. Furthermore, officials from the federal banking regulators and FinCEN provided varying perspectives on what could be considered defensive SAR filing.
According to Federal Reserve officials, SARs filed as a result of a bank's effort to comply with the 30-day requirement could be considered defensive if, to meet the deadline, depository institutions filed SARs before fully investigating anomalous transactions. FinCEN officials, in contrast, said they would not consider it defensive filing if an institution erred on the side of caution and filed a complete and accurate SAR even when it was not certain that the observed activity was suspicious; filing the SAR fulfills the obligation to report the activity. Federal regulators and depository institution representatives we interviewed generally indicated that the passage of the USA PATRIOT Act in 2001 and issuance of the interagency examination manual likely contributed to increases in SAR filings. According to Federal Reserve officials, the act generally increased awareness among depository institutions of SAR requirements. Representatives from several depository institutions also said that they used the interagency manual to train staff on SAR filing and supporting documentation requirements, and that the manual has helped improve their BSA compliance programs in general. Many depository institution representatives we interviewed said that their SAR filings increased because of their improved BSA compliance programs. FinCEN and law enforcement agencies have taken several steps to improve SAR filings and educate filers about their usefulness in investigations. FinCEN has issued written products that report trends in SAR data, provide tips on filing SARs, and present examples of SAR use in law enforcement investigations. It also has issued guidance to improve the quality of SARs filed.
Additionally, FinCEN representatives regularly participated in conferences and outreach events on BSA/AML issues, including events focused on SARs. FinCEN also chairs a group of federal agency and financial industry representatives that discusses BSA administration, including SAR-related issues. Federal law enforcement representatives said they conduct outreach events and work with depository institutions to improve SAR narratives. Since 2000, FinCEN regularly has provided tips about SAR preparation in publications for all financial institutions, including depository institutions. In October 2000, FinCEN first published The SAR Activity Review: Trends, Tips and Issues, which addresses topics related to suspicious activity reporting, trends and analyses regarding BSA data, law enforcement cases assisted by BSA data, and other issues. FinCEN describes this typically semiannual publication as the product of continuing dialogue and close collaboration among the nation's financial institutions, law enforcement officials, and financial regulators. Its goal is to provide meaningful information about the preparation, use, and value of SARs and other BSA reports filed by financial institutions. Most recently, the publication addressed issues such as how to determine when the 30-day deadline to report suspicious activity begins. According to FinCEN's annual report for fiscal year 2007, 70 percent of financial institutions that participated in a survey conducted by an external contractor found The SAR Activity Review to be "highly useful." FinCEN also has posted on its Web site a variety of written guidance documents for depository institutions and other SAR filers to assist them in making their filings more useful to law enforcement agencies. For example, in April 2008, FinCEN posted guidance that addressed SAR filings about the proceeds of foreign corruption.
In the guidance, FinCEN directed filers, when appropriate, to include the term "foreign corruption" in their narratives to ensure that law enforcement agencies identify these transactions as soon as possible. In 2007, FinCEN issued guidance regarding 10 of the most common SAR filing errors and ways filers could avoid them. Among other issues, the guidance addressed the importance of explaining why the reported transaction was suspicious, and said that not including an explanation would diminish the usefulness of the SAR to law enforcement and other users. More specifically, FinCEN asserted that most inadequate SAR narratives repeated information from other fields on the form and did not sufficiently describe why the transaction was suspicious in light of the nature and expected activity of the customer. In addition to providing guidance on SAR filing and usefulness, FinCEN representatives regularly participated in outreach events about BSA/AML issues. According to FinCEN, its representatives participated in more than 300 conferences and intergovernmental meetings during fiscal years 2006 through 2008, a number of which focused on SAR-related issues. The Bank Secrecy Act Advisory Group, which FinCEN chairs, and its two SAR-focused subcommittees have served as a forum for industry, regulators, and law enforcement to communicate about how law enforcement uses SARs and other BSA data. The advisory group's subcommittees facilitate discussion about how record-keeping and reporting requirements can be improved to enhance use and minimize costs to filers. FinCEN officials said they began outreach in 2008 to the largest depository institutions in the country to learn more about how their AML programs function, which they said will enhance their ability to provide industry feedback and ensure that the administration of the BSA regulatory program is based on sound knowledge of industry practices and the challenges of implementing AML programs.
FinCEN said it plans to expand this outreach to other industries in 2009. Representatives from federal law enforcement agencies we interviewed said that they conducted outreach events and developed relationships with local depository institutions to improve SAR narratives and alert the institutions to criminal activity the agencies are targeting in investigations. Although representatives of federal and state law enforcement agencies and multiagency teams generally described depository institutions' SAR narratives as adequate, many described efforts aimed at improving the quality of SAR narratives and establishing relationships with the institutions. For example, according to ICE representatives, more than 100 of their investigators serve as points of contact for financial institutions through ICE's Cornerstone program, which is intended to develop working partnerships and information-sharing strategies with private industry to target activities of criminal organizations in the financial system. They said that since 2004, ICE has carried out about 4,000 "contacts," or presentations made to the financial services industry through the program. FBI representatives said that in addition to national outreach efforts, field offices have sponsored conferences at their local banks. DEA representatives said that specific outreach efforts at several institutions—intended to assist the institutions in assessing their detection and monitoring protocols and improving their SAR narratives—also allowed them to establish relationships with compliance staff and obtain a working knowledge of institutions' compliance programs. In addition, representatives from most multiagency law enforcement teams we interviewed said that their teams conducted some type of regional or local outreach that included instruction on drafting SAR narrative statements. Representatives from multiple teams noted that regional conferences in their respective areas sponsored by IRS and U.S.
Attorneys' Offices provided feedback on writing good narrative statements and discussed examples of well- and poorly written narratives. Representatives from one team said they noticed an improvement in the quality of SAR narratives immediately following the events. FinCEN, law enforcement agencies, and financial regulators use SARs in investigations and financial institution examinations and have taken steps in recent years to make better use of them. FinCEN uses SARs to provide a number of public and nonpublic analytical products to law enforcement agencies and depository institution regulators. For example, in 2005, FinCEN agreed to provide several federal law enforcement agencies access to bulk BSA data, including SARs. These agencies combined the data with information from their own law enforcement databases to facilitate more complex and comprehensive analyses. In 2000 and again in 2003, DOJ issued guidance that encouraged the formation of SAR review teams with federal, state, and local representation. In 2006, DOJ and IRS-CI collaborated on a pilot effort to create task forces and add federal prosecutors to augment SAR review teams in selected districts. The regulators use SARs in their depository institution examination scoping and also review SARs regarding known or suspected unlawful activities by current and former institution-affiliated parties (IAP), including officers, directors, and employees. Although law enforcement agency representatives generally were satisfied with WebCBRS, various agencies and multiagency teams we interviewed said that formatting and other issues related to the data system slowed their downloads and reviews. FinCEN and IRS officials said these and other data management challenges will be addressed as part of FinCEN's technology modernization plan, developed in collaboration with IRS. FinCEN uses SAR data to provide various types of nonpublic analytical products to federal and state agencies in addition to publicly available reports.
Since 2002, FinCEN has combined BSA data with its own data sets to produce reports. In addition to BSA data, FinCEN analysts have access to criminal report information through the National Crime Information Center, law enforcement databases, or FinCEN's law enforcement agency liaisons. FinCEN also maintains a database of its own proactive casework and its support of other agencies' investigations. FinCEN analysts also have access to commercial databases that contain identifying information on individuals and businesses. FinCEN has conducted many nonpublic analyses using SAR data, in response to requests from law enforcement agencies. For example, in 2007, FinCEN provided a federal law enforcement agency with a complex, large-scale BSA data analysis about subjects of interest identified in SARs filed by depository institutions and other entities. In another example, FinCEN provided a similar analysis to another law enforcement agency on suspicious currency flows between the United States and foreign governments targeted by law enforcement. In 2007, FinCEN also began providing banking departments in the 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands with nonpublic analyses of SAR data and other selected BSA reports in what are called BSA Data Profiles, which are based on SAR filings throughout the year in their respective state or territory. According to FinCEN's fiscal year 2008 annual report, it added new content to the 2008 data profiles and plans to continue to provide these to the states annually. FinCEN has issued public analyses using SAR data that identified trends and typologies in the reporting of suspicious activity in key businesses and professions. For example, in 2006 and 2008, FinCEN conducted self-initiated assessments to identify trends or patterns among SARs about suspected mortgage loan fraud.
Based on these SARs, the 2006 assessment reported that suspected mortgage loan fraud in the United States continued to rise, increasing 35 percent over the previous year. The 2006 report stated that the SARs included in the assessment reported suspicious activity related to mortgage fraud in all 50 states, the District of Columbia, Puerto Rico, Guam, and American Samoa. Also, in 2008, FinCEN conducted a separate study of suspected money laundering in the residential real estate industry based on SARs. FinCEN provides other types of support to law enforcement agencies. For example, FinCEN provides a full-time analyst to most HIFCAs to help them more effectively analyze SAR data. Representatives from one HIFCA we interviewed said their FinCEN analyst has done analyses of SARs and other data related to their region. FinCEN also provides training and a database template to law enforcement agencies with access to BSA data to help them download and analyze SARs more effectively. In addition, several law enforcement officials we spoke with told us that they receive FinCEN alerts when more than one user has queried WebCBRS about the same SAR, which helps them avoid duplicating investigations. Federal law enforcement agencies have taken actions to more effectively analyze SAR data, including obtaining access to bulk downloads of BSA data, which they integrate with their own data sets. Different types of team structures have been established to better analyze SARs. According to DOJ, some districts began SAR review teams in the 1990s. In 2006, DOJ and IRS collaborated on a pilot effort to create task forces to pursue SAR-initiated investigations. Tracking of SAR use by law enforcement agencies varies. Federal agencies, separately and in collaboration with other agencies, have taken actions to more effectively analyze SAR data, particularly by better integrating BSA data with other law enforcement data.
Beginning in 2004, several federal law enforcement agencies (including FBI, the Secret Service, ICE, and the Organized Crime Drug Enforcement Task Force's Fusion Center) signed memorandums of understanding with FinCEN that allowed them to obtain access to bulk downloads of SARs and other BSA data. The agencies conduct sophisticated and wide-ranging analyses more readily with the bulk downloads than is possible by accessing the BSA database remotely and querying it for specific records. According to agency officials, the analyses they conduct using SAR data and their own data sets further their investigations by enabling them to make links they could not make without access to bulk SAR data. For example: FBI incorporates SARs into its Investigative Data Warehouse, a database that includes 50 different data sets, which facilitates complex analyses. FBI identifies financial patterns associated with money laundering, bank fraud, and other aberrant financial activities. FBI officials told us that FBI uses the results from SAR analyses in cross-program investigations of criminal, terrorist, and intelligence networks. In addition, FBI has developed a new tool that allows users in the field to quickly and easily categorize, prioritize, and analyze suspects named in SARs and other available intelligence. Secret Service representatives said their agents use combined data from the bulk downloads and their own repositories with various analytical models to map and track trends in financial crimes. They said the information is being used to model present and future financial crime trends; identify, locate, and link suspects involved in complex criminal cases; and identify financial accounts for asset forfeiture proceedings. ICE has combined BSA data, including SARs, with import and export data for selected countries to help identify and detect discrepancies or anomalies in international commerce that might indicate trade-based money laundering.
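The cross-data-set linking these agencies describe can be caricatured in a few lines: match subjects named in bulk SAR downloads against a separate case data set to surface connections neither set shows alone. The records, field names, and join key below are invented for this sketch and do not reflect any agency's actual schema (real matching would use far more robust identifiers than a name).

```python
# Invented sample records for illustration only.
sar_subjects = [
    {"name": "J. Doe", "account": "111", "activity": "structuring"},
    {"name": "K. Roe", "account": "222", "activity": "wire anomalies"},
]
case_records = [
    {"name": "K. Roe", "case_id": "C-17", "offense": "drug trafficking"},
]

def link_sars_to_cases(sars, cases):
    """Join SAR subjects to open cases on the subject name (simplified key)."""
    by_name = {c["name"]: c for c in cases}
    return [
        {**s, "case_id": by_name[s["name"]]["case_id"]}
        for s in sars if s["name"] in by_name
    ]

print(link_sars_to_cases(sar_subjects, case_records))
# [{'name': 'K. Roe', 'account': '222', 'activity': 'wire anomalies', 'case_id': 'C-17'}]
```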
The Organized Crime Drug Enforcement Task Force’s Fusion Center integrates information from bulk BSA and other law enforcement databases and conducts investigative analyses. Center staff can search the databases of several federal entities at one time rather than relying on individual searches. Users indicated they can easily produce comprehensive integrated intelligence products and charts without having to take independent information from various sources for manual compilation. IRS integrates SARs and other BSA data that it maintains for FinCEN with other information to advance its own investigative efforts. For example, IRS-CI investigators said the agency’s Reveal system integrates BSA, tax, and counterterrorism data and allows them to conduct remote queries to identify financial crimes, including individual and corporate tax frauds, and terrorist activity. Reveal also allows users to sort, group, and export data from multiple information repositories, including combinations of databases, as well as discover and graphically show relationships among entities and patterns in the data. IRS-CI can generate reports from the system that contain names, Social Security numbers, addresses, and other personal information of individuals suspected of financial crimes. Multiagency law enforcement teams also incorporate SAR data into their analyses. IRS and DEA agents at one HIFCA combined resources and said they can now conduct investigative analyses of all SARs in the region within DEA’s Narcotics and Dangerous Drugs Information System. Representatives from another HIFCA said they analyze criminal activities and SAR filings in areas known to be problematic, such as a known drug trafficking area. In 2000 and 2003, DOJ issued guidance to encourage the use of SAR data by multiple federal and state law enforcement agencies in what are known as SAR review teams.
As of February 2008, the more than 80 SAR review teams located across the country varied in their levels of human capital and other resources. Typically, an IRS agent serving as the coordinator downloads the SARs and prioritizes them for review during a team’s monthly meetings. Some SAR review teams screen SARs against criteria such as the dollar amount involved in the transaction, the number of SARs filed on the same subject, patterns of structuring, the criminal history of the subject, the business the subject may be in, and agency interest. The number of SARs downloaded and reviewed varies across geographical areas. For example, some teams may download and review as many as a thousand SARs per month; others, 50–100. Coordinators generally told us that although some SARs are not discussed at the meetings and some do not result in investigations, someone from the team reviews all SARs filed in their area. Although the downloaded SARs may come from several industries (such as money services businesses or mortgage lenders), a number of the teams we interviewed said the great majority of the SARs they reviewed came from depository institutions. Some of the SAR review team representatives we interviewed said they review SARs both proactively, to generate investigative leads, and reactively, to support ongoing investigations. According to some DOJ officials, the proactive use of SARs by a team is aimed at initiating a variety of investigations and increasing synergies. Some review team participants also told us a SAR may have more value to law enforcement at a later stage, as more SARs are filed on the same individual. They also said these review groups generally invite representatives from federal law enforcement agencies, financial regulators, U.S. Attorneys’ Offices, local prosecutors, and local police departments to discuss recently filed SARs pertinent to their geographic area.
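The screening criteria described above amount to a simple prioritization rule a coordinator could apply before a monthly meeting. The sketch below is purely illustrative: the field names, weights, and thresholds are our assumptions, not any actual SAR review team's methodology or tooling.

```python
# Hypothetical scoring of downloaded SARs against screening criteria
# like those described above: transaction dollar amount, number of
# SARs filed on the same subject, suspected structuring, and the
# subject's criminal history. All fields and weights are illustrative.

def score_sar(sar, filings_by_subject):
    """Return an illustrative priority score for one SAR record."""
    score = 0
    if sar["amount"] >= 100_000:                       # large dollar amount
        score += 2
    if filings_by_subject.get(sar["subject"], 0) > 1:  # repeat subject
        score += 2
    if sar["structuring_suspected"]:                   # pattern of structuring
        score += 3
    if sar["subject_has_record"]:                      # prior criminal history
        score += 1
    return score

def prioritize(sars):
    """Sort SARs highest-priority first for review at a team meeting."""
    counts = {}
    for s in sars:  # count how many SARs name each subject
        counts[s["subject"]] = counts.get(s["subject"], 0) + 1
    return sorted(sars, key=lambda s: score_sar(s, counts), reverse=True)
```

In practice a team might weight agency interest or a known drug trafficking area far more heavily; the point is only that a handful of additive criteria can triage a thousand monthly downloads into a reviewable order.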
Participants also learn which agencies are interested in following up on information provided in the SARs. Some of the investigations resulting from SAR review team efforts have focused on money laundering, tax evasion, drug trafficking, and mortgage fraud. According to DOJ officials, other goals in developing SAR review teams included reducing duplication of investigative efforts across investigative agencies and increasing the efficient use of resources. DOJ and other agencies also participate in proactive reviews of SARs through the National SAR Review Team, which DOJ’s Asset Forfeiture and Money Laundering Section created in May 2007. The national team, which this DOJ section leads, pursues cases that fall outside the scope of a local SAR review team. Representatives from federal law enforcement agencies and FinCEN participate on the national team and meet monthly. According to DOJ, the team and all participants make recommendations on which cases to pursue. The national team reviews SARs that report on activities that are complex and/or multijurisdictional in nature, often involving foreign nationals. According to DOJ representatives, the national team asks FinCEN for assistance on a case-by-case basis, and FinCEN has referred multijurisdictional cases to the team. In 2006, DOJ and IRS collaborated on a pilot effort to create task forces of full-time investigators and federal prosecutors to work on SAR-initiated investigations. The Attorney General’s Advisory Council identified the districts in which the task forces were to operate. IRS and DOJ also wanted state and local enforcement agencies to be actively involved in this effort because they could present state and local crime perspectives. Some DOJ officials also noted that this multiagency initiative could translate into more synergies and coordination to avoid duplication of efforts.
IRS staff in task force districts currently serve on both the task forces and SAR review teams. An IRS representative said that IRS expected that its staff would continue participating in both teams. Further, IRS representatives said the task forces and SAR review teams complemented each other, and maintaining the relationship with SAR review teams was integral to avoiding duplicative investigative efforts. However, the task forces and SAR review teams differ in key respects. IRS staff generally characterized the task forces as more focused than the SAR review teams. According to IRS staff, the task force model lends itself to investigations of BSA violations that have the potential for seizure or forfeiture under BSA, as well as prosecution. IRS staff further noted these types of investigations generally involve BSA violations for which IRS has investigative responsibility—currency and cash structuring, and certain money laundering offenses. According to an IRS-CI official, task forces are able to dedicate more staff and staff time to cases. For example, Treasury’s Executive Office of Asset Forfeiture funds the operating costs for most task force members to work on the task force full time, thereby enabling them to work on more cases and on more complex problems. In contrast, the IRS representative said SAR review team members typically serve on a part-time basis and conduct SAR-related investigations in addition to other responsibilities. FinCEN, IRS, and federal law enforcement agencies and teams track information about SAR data access and how SAR information has been used in investigations in varying degrees. Through its Gateway program, FinCEN tracks the numbers of WebCBRS users’ queries and views of BSA data that are conducted as discrete downloads of individual BSA reports, including SARs. IRS-CI staff access WebCBRS directly through IRS’s intranet. 
According to IRS staff, IRS provides its users the ability to capture additional details about SAR use through IRS-CI’s case management system, which captures certain information related to investigations and tracks the use and value of BSA information in three ways. First, the system identifies all investigations where the source of the investigation is a SAR (or another BSA document). Second, for all nontax investigations, it may identify what types of BSA documents were of use or value to the investigation. Third, the system tracks all investigations developed with the SAR review teams and their general investigation case numbers. IRS-CI representatives said they also use a program that aids in the review and tracking of team decisions about SARs that were reviewed to avoid duplicative investigations. In general, IRS-CI staff serving on SAR review teams or HIFCAs track which SARs they download for the teams and which agencies are pursuing investigations based on the SARs the team reviewed. Although DOJ does not require SAR review teams to compile statistics about their SAR use, some SAR review team representatives we interviewed said they have plans to track their use of SARs in greater detail. For example, some teams track or have plans to track the number of seizures and indictments associated with the investigations initiated from SARs they have reviewed. Finally, representatives from some of the state and local enforcement agencies we interviewed said they track the number of SARs they reviewed, while others said they did not. According to the interagency BSA/AML examination manual, the regulators are to assess depository institutions’ SAR compliance during examinations. The regulators conduct periodic on-site examinations to assess an institution’s financial condition, policies and procedures, and adherence to laws and regulations such as BSA.
During examinations, examiners download and review SARs as part of their efforts to assess institutions’ (1) suspicious activity monitoring and reporting systems, (2) decisionmaking processes for SAR filings, (3) SAR quality, and (4) internal controls. For example, examiners conduct transaction testing on samples of downloaded SARs to determine whether institutions’ SAR-related policies, procedures, and processes are adequate and effectively implemented and whether the filed SARs were complete and accurate. In addition to examining depository institutions for compliance with SAR requirements, the regulators track and review SAR information as part of their enforcement actions against institution-affiliated parties (IAPs) that are known or suspected of being involved in unlawful activities or breaches of trust. The Federal Deposit Insurance Act generally allows the federal bank and thrift regulators to suspend, remove, or prohibit IAPs from participating in the affairs of depository institutions or working in the banking industry if the IAP is charged with or convicted of certain crimes involving dishonesty, breach of trust, or money laundering. For example, according to federal banking regulator representatives, their agencies generally track and review information from SARs filed by the depository institutions they supervise that indicate suspected abuse by someone inside the institution. Depository institutions are required to file SARs to report insider abuse, including all known or suspected criminal activity committed or attempted against the institution. Officials from the Federal Reserve, OCC, FDIC, and OTS said their respective agencies have programs in place to track and review SARs about IAPs. They described how information from these SARs is used as part of efforts to take action against IAPs involved in theft, fraud, and other unlawful activity at the depository institutions.
For example, OCC has a Fast Track Enforcement Program that implements streamlined enforcement procedures for specific situations in which there is a conviction of, an admission by, or clear evidence that an IAP has committed a criminal act or other significant act of wrongdoing involving a national bank that is actionable under OCC’s enforcement authority. The Federal Credit Union Act provides the same enforcement authority to NCUA. NCUA reviews all SARs filed by credit unions on IAPs to determine whether it is appropriate to pursue administrative action to remove or prohibit the person from working in the banking industry or to require restitution. Federal, state, and local agencies have experienced some data management challenges when downloading and reviewing SARs and other BSA reports. Although law enforcement agency representatives noted they were generally satisfied with WebCBRS, representatives from various law enforcement agencies and multiagency law enforcement teams we interviewed expressed some specific concerns related to the formatting of SARs and the efficiency of downloading them from the database. For example, representatives from some SAR review teams said the SAR data they download through WebCBRS appear in all capital letters and without other formatting, which makes reviewing SARs more difficult and time consuming. Other SAR review team representatives said that another formatting problem arises when filers organize information about transactions and dates within tables included in their SAR narratives; when downloaded from WebCBRS, the tables appear as lines of unformatted information without columns or headings. An IRS-CI official commented that these formatting issues are particularly challenging for law enforcement teams that review large numbers of SARs. Representatives from some SAR review teams and HIFCAs we interviewed said their teams download and review approximately 1,000 or more SARs each month.
Data management staff at IRS and FinCEN identified limitations in the mainframe environment from which WebCBRS evolved as the cause of these formatting concerns and noted that SARs appear this way for all WebCBRS users. An IRS data management representative commented that depository institutions and commercial software companies often prepare formatted tables within SAR narratives as part of their AML software packages. The representative noted that WebCBRS is unable to retain such formatting. Representatives from the federal banking regulators and a state banking department we interviewed also described limits on the amount of BSA information that can be downloaded in the examination process. Specifically, they said that during examinations of institutions that file more than 20,000 reports within an examination cycle, examiners are unable to download all of the SARs or other BSA reports in a single download session. According to representatives from the federal banking regulators, examiners at each agency must divide their SAR downloads into multiple batches. Data management staff at FinCEN said the purpose of the 20,000 limit is to prevent users with large download requests from diminishing the speed of the system for other users. Although federal banking regulators have taken steps to deal with these challenges, representatives from these agencies still generally characterized the download process as inefficient because of the additional time needed to conduct separate queries. They also noted that download sessions for SARs and other BSA reports, such as currency transaction reports, sometimes expire before completing the data request. Representatives from FDIC, the Federal Reserve, OTS, and OCC expressed concerns about the quality of data obtained through WebCBRS. 
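The 20,000-record session cap and the batched-download workaround that examiners described can be illustrated with a short sketch. Note that `query_webcbrs` here is a hypothetical stand-in for the actual retrieval step, which is not a public API; the batching logic is an illustrative assumption, not the regulators' actual tooling.

```python
# Illustrative sketch of splitting a large BSA report request into
# batches that each stay under the 20,000-record session cap described
# above. `query_webcbrs` is a placeholder for the real retrieval step.

BATCH_LIMIT = 20_000

def batch_ranges(total_records, limit=BATCH_LIMIT):
    """Yield (start, end) offsets covering all records in capped batches."""
    for start in range(0, total_records, limit):
        yield start, min(start + limit, total_records)

def download_all(total_records, query_webcbrs):
    """Collect every report by issuing one capped query per batch."""
    reports = []
    for start, end in batch_ranges(total_records):
        reports.extend(query_webcbrs(start, end))
    return reports
```

Even with such batching, each extra query adds session time, which is consistent with examiners characterizing the process as inefficient and with sessions sometimes expiring mid-request.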
FDIC representatives said the inability to download all appropriate SARs in one attempt raises concerns about whether any of the downloads are complete, as well as concerns about the possibility of citing a bank for an apparent violation for failure to file a SAR because that record was not in the information downloaded from WebCBRS. Federal Reserve and OTS representatives cited concerns about the integrity of WebCBRS and whether all SAR and currency transaction report data are properly uploaded. OCC representatives also expressed concerns about the quality of BSA data in WebCBRS. They noted that because of these concerns and data management issues, in 2004, they requested and obtained bulk access to SAR data for the institutions OCC supervises. OCC representatives also said they then spent a significant amount of funds and resources to develop a customized data system to conduct analyses of SARs. FinCEN and IRS officials said these and other data management challenges will be addressed as part of FinCEN’s information technology modernization plan, developed in collaboration with IRS. In response to a recommendation we made in 2006, FinCEN, in collaboration with IRS, is developing a long-term comprehensive plan for re-engineering BSA data management activities. In fiscal year 2007, FinCEN launched an initiative to maximize BSA data quality and value by more consistently identifying, documenting, prioritizing, and addressing BSA data requirements and quality issues. As part of the initiative, FinCEN established a Data Management Council to provide internal and external data users with a clear means of identifying and communicating data issues, requirements, and business priorities; validating resolution of data issues; and jointly establishing priorities for taking data management actions. The council consists of approximately 35 representatives from FinCEN, financial regulators, law enforcement agencies, and IRS. 
FinCEN officials also said that FinCEN has an Integrated Product Team, consisting of FinCEN staff, that developed a strategy for the information technology modernization plan. FinCEN officials expected implementation of the modernization plan to take from 3 to 5 years. According to FinCEN, the team also developed a list of approximately 300 capabilities that are desired in a new system. FinCEN officials also said the team spent 2007 and 2008 focusing on repairing identified problems with the current system, reformulating processes, and working to make the system as effective as possible. FinCEN officials were reluctant to commit to a timeline, as the work will depend on budget allocations and FinCEN’s working relationship with IRS counterparts. FinCEN worked with other agencies in 2006 to create a new SAR form for depository institutions that was not implemented, and a recently developed document outlining a new form revision process appears to address some—but not all—of the collaboration-related problems encountered in 2006. FinCEN and the federal banking regulators issued proposed substantive and formatting revisions to the SAR form in 2006; however, because of technology limitations, the revised form was not implemented. Law enforcement agency officials we interviewed had mixed views on the proposed revisions to the form. They generally supported most of the proposed revisions, but some felt they had been insufficiently consulted and also expressed concerns to us that some revisions could affect their work negatively. We have identified practices that can help enhance agencies’ collaborative efforts, such as those needed to revise the SAR form. FinCEN has identified some steps it intends to use to improve collaboration; however, details on the process are limited.
For example, the documentation for the new process that we received does not indicate that FinCEN has incorporated practices for agency collaboration, such as defining a common outcome; agreeing on agency or individual roles and responsibilities; and including a mechanism to monitor, evaluate, and report on how the process worked. Although not all of the practices we identified for collaboration are applicable to the forms revision process, if FinCEN implemented such collaboration practices for SAR form revisions, it may achieve greater consensus from all stakeholders. In 2006, FinCEN revised the form that depository institutions use to report suspicious activities, but the revised form still cannot be used because of continuing information technology limitations. In accordance with the Paperwork Reduction Act (PRA) of 1995, FinCEN and the federal banking regulators must periodically renew the SAR form used by depository institutions and seek public comment. Among other things, PRA requires the balancing of two potentially competing purposes: minimizing the paperwork burden on filers and maximizing the utility of the information collected in forms required by the government. To satisfy PRA requirements, FinCEN and other agencies assess the SAR forms approximately every 3 years to determine if revisions should be made. In February 2006, in advance of the form’s expiration, FinCEN and the federal banking regulators issued proposed revisions to and reformatting of the SAR form. An important goal in revising the form was allowing affiliated institutions to jointly file a SAR. FinCEN and the federal banking regulators submitted the proposed revisions to the Office of Management and Budget for approval and published them in the Federal Register for public comment. 
In June 2006, FinCEN, OCC, OTS, FDIC, and NCUA advised the public that the agencies had submitted the proposed revisions to the Office of Management and Budget for approval, summarized the comments received and the disposition of issues raised by respondents, and requested additional comments on the proposed changes. The Federal Reserve issued notice of final approval by the Federal Reserve Board of Governors in a separate Federal Register notice on July 5, 2006. In December 2006, FinCEN announced on its Web site that SAR-filing institutions would begin using the revised form on June 30, 2007. However, in May 2007, FinCEN announced in a Federal Register notice that it would postpone implementation of the revised form. In the May 2007 notice, FinCEN identified the cause of the delay as “recently implemented data quality initiatives.” When we discussed the delay with FinCEN officials, they indicated data management staff had identified problems in implementing a BSA data quality management program, which was part of a larger and recently initiated information technology modernization strategy with IRS. FinCEN and IRS agreed to focus on optimizing the current database environment before introducing any new products or procedures. According to a senior FinCEN official, FinCEN thus delayed implementation of the revised SAR form to focus on the overall modernization effort. Rather than undertake another revision of the form in 2009 (3 years from the prior revision), FinCEN plans to renew without changes the form the Office of Management and Budget approved in 2006 and direct filers to continue to use the 2003 form. Law enforcement agency representatives we interviewed had mixed views on the proposed revisions to the SAR form. Although they generally supported a key proposed revision, some law enforcement agency representatives we interviewed believed certain proposed revisions could be detrimental to their investigations.
Representatives from DOJ, FBI, the Secret Service, ICE, the New York HIFCA, and some SAR review teams generally expressed support for the change allowing affiliated institutions to jointly file a SAR (that is, two entities belonging to the same financial organization could file a single SAR for a suspicious activity that affected both). However, representatives from IRS-CI and some HIFCAs and SAR review teams said other revisions could affect their work negatively. One revision causing concern involved replacing the field for the name and title of a person with personal knowledge of the suspicious activity with a field for a contact office. IRS-CI officials, some Assistant U.S. Attorneys, coordinators from other SAR review teams, and HIFCA representatives said the revision might make it more difficult for investigators to reach an individual with personal knowledge of the suspicious activity. However, the Federal Register notice indicated that this action was taken with the approval of the banking agencies and law enforcement as a measure to protect the filer if information from a SAR-Depository Institution were inadvertently disclosed. Similarly, representatives from some SAR review teams and HIFCAs we interviewed expressed concerns about removing the field that SAR filers currently use to indicate they have contacted a law enforcement agency and instead relying on filers to include this information in the SAR narrative. The Federal Register notice indicates this change was being made to simplify the form. Most SAR review team coordinators and HIFCA representatives we interviewed said they use this information to avoid duplicating or jeopardizing ongoing investigations related to the SAR. Furthermore, the process used to revise the form may have contributed to these unresolved differences of opinion about what should be changed on the SAR form and the potential effects of the revisions that were made.
FinCEN officials said they developed draft revisions from a running list of recommendations and comments related to suspicious activity reporting from law enforcement investigators and other agencies. Representatives from agencies that have liaisons at FinCEN, including DEA, FBI, ICE, IRS-CI, and the Secret Service, noted they were not involved in identifying the issues or concerns that could be addressed through revisions to the SAR form. Some law enforcement officials, such as SAR review team members, said they did not have an opportunity to provide input at all, other than through public comments. When we subsequently asked FinCEN officials about these participation concerns, they indicated that federal law enforcement agency liaisons, whose agencies participate on SAR review teams, had not expressed similar concerns to them and then discussed the process they had used to develop the form and solicit feedback from law enforcement. FinCEN sought and obtained feedback through e-mail from law enforcement agency liaisons stationed at FinCEN. FinCEN officials characterized this feedback to us as not involving any significant objections to the proposed revisions and described it as editorial in nature. FinCEN officials noted they also did not know the extent to which law enforcement agency liaisons sought feedback from staff at the field office level within their respective agencies. FinCEN has developed a new process it intends to use in the future when revising SAR and other forms; however, documentation on the process does not include some collaborative practices. In May 2008, FinCEN developed a new form change management process under the auspices of its Data Management Council. FinCEN indicated the goals of the process include improving implementation of revisions to BSA forms by FinCEN, other agencies, and parties, as well as communication among them. FinCEN provided us with a briefing and some documentation on its new process.
FinCEN’s briefing and documentation indicate that FinCEN has begun to address some of the previously identified collaboration-related problems. The information we received generally covered issues such as interactions among external and internal stakeholders and the general steps used to develop and propose form changes. For instance, the early stages of the new process include collaboration with IRS data management staff regarding system applications and other data-related issues. This early involvement could help avoid a repeat of the problems related to implementation of the 2006 revision. Similarly, FinCEN officials said they plan to include a representative for SAR review teams on the Data Management Council. However, neither the briefing nor the documentation provided much detail on some considerations and activities important to such a collaborative effort, such as the timeline for completing the various stages in the process; the different roles and responsibilities of the stakeholders in the various stages of the process (for instance, FinCEN has not identified specific council members that would be involved in providing input on proposed changes); or a mechanism to monitor, evaluate, and report on the process. Nor did the documentation reflect collaboration with federal prosecutors, SAR review teams, or other multiagency law enforcement teams, such as HIFCAs. Our prior report on practices that help enhance collaboration emphasizes the usefulness of these missing elements.
For example, we noted that to work effectively across agency lines, agency staff ought to define and articulate the common federal outcome or purpose they are seeking to achieve, consistent with their respective agency goals and missions; define and agree on their respective roles and responsibilities, including how the collaborative effort will be led; and have processes to monitor, evaluate, and report on their efforts to enable them to identify areas for improvement. As noted above, FinCEN was unaware of some law enforcement representatives’ concerns about some of the changes to the SAR form in 2006, and bank regulators relied on FinCEN to get law enforcement’s input. This situation indicates that stakeholders in the SAR revision process had not agreed on the common outcome they wanted to achieve and that communication and collaboration among SAR form stakeholders might not have been adequate. If FinCEN continues to use the process as it is currently outlined, it may not achieve some potential benefits that could come from closer adherence to practices that can help enhance and sustain collaboration, such as greater consensus from all stakeholders on proposed SAR form revisions and fuller documentation of the process. The lack of information developed for monitoring and evaluating the process could impede agency management as it seeks to make future improvements to the SAR form and respond to the concerns and needs of both SAR filers and users. The gathering of such information could provide empirical evidence about how well the process worked, what problems occurred, or what issues were identified. Furthermore, more detailed documentation about the process could advance collaborative efforts involving a wide variety of stakeholders by providing all stakeholders with a better understanding of how the process is designed to work, thereby building trust and facilitating communication.
The issues associated with the most recent revisions to the SAR form for depository institutions present challenges for FinCEN. They highlight the difficulties of addressing potentially competing objectives stemming from PRA requirements—that new federal forms be designed not only to maximize their usefulness but also to minimize burden on filers—and of engaging a wide variety of stakeholders. SARs are a key information source for federal, state, and local law enforcement agencies, as well as the federal regulators. Because the information they contain is critical for investigations of money laundering, terrorist financing, and other financial crimes, it is important that the SAR form be designed to collect the information that is most useful for law enforcement. Similarly, federal regulators use them during examinations of depository institutions’ compliance with BSA. Yet given the potential burden of SAR filings, especially for depository institutions—the most frequent filers—it is important that the process used to revise the form be a collaborative effort that helps to ensure all stakeholders’ concerns are considered and potential problems identified. While FinCEN and other agencies worked to create and finalize a new SAR form for depository institutions through the PRA process, data management issues suspended the implementation of the 2006 revision. Although law enforcement representatives’ views on the revised form were mixed, we found that the process FinCEN used may not have addressed some law enforcement concerns and introduced changes that some law enforcement representatives said could diminish the utility of the form for their investigative purposes. In addition, some law enforcement representatives expressed concerns that they were not involved early in the process. Bank regulators, on the other hand, were satisfied with the proposed changes.
Many such problems in multiagency efforts could be mitigated with greater attention to the practices we have outlined for enhancing and sustaining collaboration among federal agencies. Implementation of such practices also may enable law enforcement and regulators to reach greater consensus on proposed changes. Although FinCEN may be able to address some of the issues it encountered in the 2006 revision, it does not appear to have fully developed a process detailed enough to help ensure such an outcome. For example, FinCEN’s documentation for implementing the forms change management process does not necessarily include all law enforcement stakeholders, such as federal prosecutors and multiagency law enforcement teams. The documentation also does not provide details on some important considerations (such as the articulation of a common outcome or agreed-upon roles and responsibilities of individuals and agencies at each stage of the process) and omits another critical practice entirely: a mechanism for monitoring, evaluating, and reporting. By better incorporating collaborative practices, such as detailing individual and agency roles and responsibilities and documenting the entire process, FinCEN can further develop a strategy that will improve the SAR form and balance the possibly competing needs of different stakeholders. By incorporating mechanisms to document, monitor, evaluate, and report on the process, key decision makers within agencies can obtain valuable information and assessments that could improve both policy and operational effectiveness. Finally, by more fully documenting its process, FinCEN likely will enhance its communications and collaboration with stakeholders. 
To better ensure that future revisions to the SAR form result in changes that can be implemented and balance the differing needs of all stakeholders, we recommend that the Secretary of the Treasury direct the Director of FinCEN to further develop and document its strategy to fully incorporate certain GAO-identified practices to help enhance and sustain collaboration among federal agencies into the form change process and to distribute that documentation to all stakeholders. Such practices could include defining and articulating the common federal outcome or purpose they are seeking to achieve; defining and agreeing on their respective roles and responsibilities; and having processes to monitor, evaluate, and report on their efforts to enable them to identify areas for improvement. We provided a draft of this report to the heads of the Departments of Homeland Security, Justice, and the Treasury; the Federal Reserve, FDIC, NCUA, OCC, OTS, and IRS. We received written comments from FinCEN, which are summarized below and reprinted in appendix II. DOJ, FinCEN, the Federal Reserve, FDIC, NCUA, OCC, OTS, and IRS provided technical comments, which we incorporated into this report, where appropriate. The Department of Homeland Security had no comments. Through discussions and technical comments, FinCEN officials provided us with additional information showing that FinCEN had begun developing a strategy that incorporated certain GAO-identified practices to enhance and sustain collaboration but that the strategy was not yet complete. As a result, we modified the recommendation language in our draft report to reflect the work that FinCEN already had done. 
In written comments on this report, the FinCEN director said he generally agreed with our recommendation and that FinCEN recognized the need to work with a diverse range of stakeholders to revise BSA forms, including regulatory, law enforcement, and intelligence agencies, as well as financial industries responsible for filing BSA reports. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time we will send copies to interested congressional parties, Treasury, FinCEN, FDIC, the Federal Reserve, OCC, OTS, NCUA, IRS, DOJ, and the Department of Homeland Security. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8678 or edwardsj@gao.gov. GAO staff who made major contributions to this report are listed in appendix III. This report examines (1) the underlying factors that affected the number of suspicious activity reports (SAR) filed by depository institutions from 2000 through 2007, (2) actions that federal agencies have taken to improve the usefulness of SARs for law enforcement, (3) ways in which federal agencies use SARs and actions they have taken to make better use of them, and (4) whether the process the Financial Crimes Enforcement Network (FinCEN) uses to revise SAR forms is effective in assuring that the information collected is appropriate for law enforcement needs. As agreed with the requesters’ offices, we focused our data gathering and analyses largely on depository institutions and the SARs they file. In some instances, we considered, analyzed, and reported on information from other types of financial institutions. Additionally, our quantitative analyses were limited to 2004 through 2007 to minimize the likelihood that the presented information would be out-of-date. 
To examine the increase in depository institutions’ SAR filings, we reviewed published findings that FinCEN supplied, as well as obtained and reviewed statistics and related information from the banking regulators. FinCEN also provided us with SAR data for calendar years 2000 through 2007 so we could conduct independent quantitative analyses. We then combined that information with institution-specific information (such as asset amounts) that we obtained from the Federal Reserve and the National Credit Union Administration. We took multiple steps to assess the reliability of the data. We asked bank regulators’ information technology staff to answer a data reliability questionnaire (for example, about data cleaning and maintenance procedures). We found the data to be sufficiently reliable for the purposes of our report. To address the second part of the first objective, we interviewed many types of stakeholders and obtained agency documents from the interviewees to identify factors that may have contributed to the increase in the number of SARs filed from calendar year 2000 through 2007. Because of the subjective nature of this type of information, we based our findings on the most frequently cited factors. The types of people interviewed are identified in table 2. Representatives from depository institutions constituted another type of interviewee. As part of the process to select the depository institutions, we grouped the depository institutions into four categories, depending on the number of SARs filed in calendar year 2007. We interviewed representatives from all 5 institutions that had the largest number of SAR filings in 2007 as well as representatives from 15 randomly selected institutions. The 15 institutions represented different categories of SAR filings: small (0-5 SARs filed in 2007), medium (6-17), and large (176 or more, excluding the 5 largest). 
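The sampling categories described above can be sketched as a simple lookup. This is only an illustration (the function name is ours), and the report does not state how institutions filing between 18 and 175 SARs were categorized, so that range is left unclassified here rather than guessed at.

```python
# Illustrative grouping of depository institutions by number of SARs filed in
# calendar year 2007, using the category cutoffs stated in the report.
# The range between 18 and 175 filings is not defined in the report, so it is
# returned as "uncategorized".

def sar_filing_category(sars_filed_2007: int) -> str:
    if sars_filed_2007 < 0:
        raise ValueError("filing count cannot be negative")
    if sars_filed_2007 <= 5:
        return "small"
    if sars_filed_2007 <= 17:
        return "medium"
    if sars_filed_2007 >= 176:
        return "large"
    return "uncategorized"  # range not specified in the report

print(sar_filing_category(3))    # small
print(sar_filing_category(10))   # medium
print(sar_filing_category(500))  # large
```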
To identify the actions that federal agencies have taken to improve the usefulness of SARs for law enforcement, we interviewed officials from FinCEN, federal law enforcement agencies, and IRS and reviewed agency documents, as indicated for objective 2 in table 2. To examine the ways in which federal agencies use SARs and actions they have taken to make better use of them, we contacted representatives of the various law enforcement groups that are indicated for objective 3 in table 2. For example, federal prosecutors at U.S. Attorneys’ Offices as well as federal law enforcement officials involved in the national SAR review team were some of the types of individuals who provided information. Among the issues that we discussed with the law enforcement agencies were how SAR review teams function and the results of their collaborative efforts. We obtained information from IRS about SAR review teams and interviewed representatives from 13 randomly selected teams. We reviewed reports from GAO, FinCEN, and other governmental agencies to identify additional actions. We obtained information from the IRS that indicated the frequency with which law enforcement agencies accessed SAR information and interviewed representatives from 8 randomly selected state and local law enforcement agencies. All five federal regulators and some state banking agencies also provided information on how SARs are used in compliance examinations, and one regulator provided us with a demonstration of how the system is accessed and how information is displayed in it. To assess whether the process FinCEN uses is effective in assuring that SAR forms are appropriate for law enforcement needs, we conducted legal analysis related to the Paperwork Reduction Act of 1995 and reviewed relevant Federal Register Notices. We also reviewed comment letters about proposed revisions to the SAR form submitted during the public comment period. 
We interviewed FinCEN, federal law enforcement, and bank regulatory representatives about the process to revise the form. Finally, we discussed the new forms change management process with FinCEN representatives. We conducted this performance audit from July 2007 through February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Barbara I. Keller (Assistant Director); Toni Gillich; M’Baye Diagne; Natalie Maddox; John W. Mingus, Jr.; Marc Molino; Carl Ramirez; Linda Rego; and Barbara Roesmann made key contributions to this report.

To assist law enforcement agencies in their efforts to combat money laundering, terrorist financing, and other financial crimes, the Bank Secrecy Act (BSA) requires financial institutions to file suspicious activity reports (SAR) to inform the federal government of transactions related to possible violations of law or regulation. Depository institutions have been concerned about the resources required to file SARs and the extent to which SARs are used. GAO was asked to examine (1) factors affecting the number of SARs filed, (2) actions agencies have taken to improve the usefulness of SARs, (3) federal agencies' use of SARs, and (4) the effectiveness of the process used to revise SAR forms. GAO reviewed laws and agency documents; analyzed SAR filings; and interviewed representatives from the Financial Crimes Enforcement Network (FinCEN), law enforcement agencies, bank regulators, and depository institutions. 
From 2000 through 2007, SAR filings by depository institutions increased from about 163,000 to 649,000 per year; representatives from federal regulators, law enforcement, and depository institutions with whom GAO spoke attributed the increase mainly to two factors. First, automated monitoring systems can flag multiple indicators of suspicious activities and identify significantly more unusual activity than manual monitoring. Second, several public enforcement actions against a few depository institutions prompted other institutions to look more closely at client and account activities. Other factors include institutions' greater awareness of and training on BSA requirements after September 11, and more regulator guidance for BSA examinations. FinCEN and law enforcement agencies have taken actions to improve the quality of SAR filings and educate filers about their usefulness. Since 2000, FinCEN has issued written products with the purpose of making SAR filings more useful to law enforcement. FinCEN and federal law enforcement agency representatives regularly participate in outreach on BSA/anti-money laundering, including events focused on SARs. Law enforcement agency representatives said they also establish relationships with depository institutions to communicate with staff about crafting useful SAR narratives. FinCEN, law enforcement agencies, and financial regulators use SARs in investigations and financial institution examinations and have taken steps in recent years to make better use of them. FinCEN uses SARs to provide public and nonpublic analytical products to law enforcement agencies and depository institution regulators. Some federal law enforcement agencies have facilitated complex analyses by using SAR data with their own data sets. Federal, state, and local law enforcement agencies collaborate to review and start investigations based on SARs filed in their areas. 
Regulators use SARs in their examination process to assess compliance and take action against abuse by depository institution insiders. FinCEN revised the SAR form in 2006, but the revised form still cannot be used because of information technology limitations; in 2008, FinCEN developed a new process for revising BSA forms, including SARs, that may increase collaboration with some stakeholders, including some law enforcement groups concerned that certain of the 2006 revisions could be detrimental to investigations. However, the limited documentation on the process does not provide enough detail to determine the degree to which the new process will incorporate GAO-identified best practices for enhancing and sustaining federal agency collaboration. For example, it does not specify roles and responsibilities for stakeholders or describe monitoring, evaluating, and reporting mechanisms. By incorporating some of these key collaboration practices and more fully developing and documenting its new process for form revisions, FinCEN could achieve some of the potential benefits of closer adherence to the practices, such as greater consensus from all stakeholders on proposed SAR form revisions. 
The United States has made commitments related to its procurement market under the WTO’s GPA, in its various forms, and through FTAs negotiated with other countries. In both cases, suppliers compete through a procurement process that follows parameters agreed upon by the parties to the trade agreements. USTR negotiates these agreements and monitors and enforces foreign government compliance once they come into force. Officials in the Department of Commerce’s (Commerce) International Trade Administration monitor compliance as well as support USTR’s activities and provide services to promote U.S. exports. The GPA is a plurilateral agreement within the framework of the WTO that, according to the WTO, aims to mutually open government procurement markets among its parties, covering government purchasing of goods, services, and construction work. The current GPA (referred to as the revised GPA) entered into force on April 6, 2014, 20 years after a previous agreement (referred to as the 1994 GPA) was signed on April 15, 1994. The two agreements currently co-exist because Switzerland is still in the process of adopting the revised GPA. All other parties to the 1994 GPA have adopted the revised GPA. According to WTO documents, the revised GPA is a result of parties negotiating refinements to the 1994 agreement. The WTO has stated that these include updating the agreement’s text and expanding market access commitments. The revised GPA consists of 18 parties (including the EU) and covers 46 WTO members, including the 28 EU member states (countries). Another 28 WTO members are observers. Of these, 8 members are in the process of acceding to the agreement. The United States has FTAs with 20 countries, 4 of which (Canada, Israel, Singapore, and South Korea) are also parties to the GPA. Unlike the GPA, which encompasses dozens of parties, most U.S. FTAs are bilateral. 
Almost all of the FTAs that the United States has in force include provisions covering government procurement, and most contain a separate government procurement chapter that, like the GPA, contains market access commitments that include coverage schedules and threshold amounts for procurement activities to which the agreement applies. According to USTR, to implement U.S. obligations under the international agreements that cover government procurement, the United States waives preferential purchasing requirements that would otherwise be inconsistent with the international agreement. For example, for contracts covered by the GPA and U.S. FTAs, USTR has waived the Buy American Act and other preferential provisions for eligible products. Our review of government procurement commitments in agreements between the United States and selected trading partners found that, in general, the text in these agreements contained similar provisions. The revised GPA, the Colombia-FTA, and the Australia-FTA were negotiated concurrently and generally maintain many of the commitments in the 1994 GPA, NAFTA, and the South Korea-FTA while adding explicit language related to nondiscrimination against other parties’ suppliers, promotion of the environment, and shorter timelines. We found nine common elements across the agreements. We also found that some differences exist, for example, because later agreements reflect newer technology. USTR negotiated all of the agreements on behalf of the United States, and negotiation timelines often overlapped (see fig. 1). A former trade agreement negotiator told us that the revised GPA reflected an “evolution” from the 1994 GPA, because negotiators focused on streamlining the language and addressing parties’ proposed changes. USTR officials stated that recent FTAs, including the Colombia-FTA and the South Korea-FTA, reflect the changes incorporated into the revised GPA. We found that later agreements reflect modernized technology. 
For instance, the 1994 GPA and NAFTA state that tenders (bids) shall normally be submitted in writing directly or by mail while also including provisions regarding the submission of tenders using telex, where permitted. Later agreements, such as the revised GPA and the Australia-FTA, do not specify a common method for submitting tenders and have provisions recognizing the use of electronic means in conducting procurement. Additionally, the 1994 GPA, NAFTA, and the revised GPA all require parties to regularly submit procurement statistics. The revised GPA gives parties the option to publish records to an official website. We found that the government procurement agreements we reviewed generally follow the same structure and contain nine common elements that we identified; however, they sometimes differ in their specific commitments. The nine common elements we distilled from our analysis of the text include provisions that relate to transparency, nondiscrimination, defining scope and coverage, exceptions, procurement procedures, criteria for procurement decisions, supplier challenges, ethical standards, and changes and further improvements. The method by which procurement is conducted, such as the processes by which procurement is announced, information is released, and a winning bidder is selected, is contained in many similar provisions across the six agreements we reviewed. However, there are some notable differences, in areas such as statistical monitoring, environmental protection, and the supplier challenge process. U.S. trade officials told us that specific differences usually reflect variations in the trading partners’ domestic priorities. Nevertheless, USTR officials told us that differences in the text of provisions in the agreements may not actually result in parties having different obligations with regard to their procurement practices. 
Furthermore, USTR stated that parties sometimes take actions that are not explicitly mentioned in the agreements but also not prohibited. For example, USTR noted that suppliers that have been convicted of criminal behavior may be excluded from bidding on contracts. USTR further added that later agreements may add language specifically allowing parties to take such actions. For example, the revised GPA and the South Korea-FTA explicitly mention criminal convictions as grounds for excluding a supplier. A common feature of these agreements is the inclusion of provisions related to transparency in the procurement process. For the GPAs, this reflects one of the six principles that the WTO states are the foundation of the plurilateral trading system, an aim to make trade “predictable and transparent.” Parties to these selected agreements commit to publishing information on their laws, regulations, judicial decisions, and specific administrative rulings regarding procurement covered by the agreement. Furthermore, government entities must also release information at various steps of the procurement process, and the requirements for releasing information have similarities. Under the selected agreements, should a government entity wish to undertake procurement under open tendering procedures, generally it must publish an invitation for suppliers to submit tenders. Once an award has been made, parties must also publish information on the contract. The Colombia-FTA and the Australia-FTA require this information to be published within 60 days of the award, while the remaining four agreements allow 72 days for publication. Three of the agreements (the 1994 GPA, the revised GPA, and NAFTA) explicitly require parties to collect procurement statistics. In the two GPAs, parties are to provide the statistics to the committee on a regular basis. In NAFTA, parties are to provide the statistics to the other parties in the agreement on a regular basis. 
The procurement statistics required to be reported by governments vary among the three agreements. Both the 1994 GPA and the revised GPA require parties to submit statistics on contracts awarded by all entities covered by the agreement. The 1994 GPA provisions relating to such reporting specify when the report is required to contain statistics for both above- and below-threshold-value contracts and when statistics for only above-threshold-value contracts are required. The revised GPA reporting provisions specify that the report is to contain statistics on all contracts covered by the agreement. As a result, procurement that is above the thresholds set in the agreement is required to be reported. The revised GPA does not speak to the reporting of below-threshold-value contracts. The timing of the statistical reports differs as well. The 1994 GPA requires statistics to be reported “on an annual basis” but does not provide a specific date on which the annual basis is to begin. The revised GPA also requires that reports cover 1 year and specifically that they be submitted within 2 years of the end of the reporting period. Similar to the 1994 GPA, NAFTA specifically requires estimates of above- and below-threshold procurement, and statistics on the number and value of contracts awarded above the applicable threshold value, to be presented in annual reports. However, unlike the 1994 and revised GPAs, NAFTA’s provision on the collection of statistics states that statistics must be provided “unless the Parties otherwise agree.” Each of the agreements includes commitments that parties not discriminate against each other’s suppliers. 
For the GPAs, this reflects another of the six fundamental WTO principles, “nondiscrimination.” The text of each of these agreements contains versions of the following provision, taken here from the 1994 GPA: “With respect to all laws, regulations, procedures and practices regarding government procurement covered by this Agreement, each Party shall provide immediately and unconditionally to the products, services and suppliers of other Parties offering products or services of the Parties, treatment no less favourable than that accorded to domestic products, services and suppliers; and that accorded to products, services and suppliers of any other Party.” While there are some differences, this type of provision in each of the selected agreements speaks to a party treating the suppliers and goods of another party no less favorably than it treats domestic suppliers and goods. Additionally, the agreements contain other provisions that are consistent with this commitment. For example, the agreements all have a provision on determining rules of origin (criteria for deciding the national source of a product) that generally expresses that a party shall apply the rules of origin that it applies in the normal course of trade. Each of the agreements we examined contains a section on scope and coverage that defines what is covered by the agreement. All of the agreements generally state that they apply to any measure regarding covered procurement. All of the agreements except the 1994 GPA provide some detail on the transactions to which the government procurement provisions do not apply, such as the acquisition of fiscal agency or depository services and liquidation and management services for regulated financial institutions. The agreements we examined all contain provisions allowing certain exceptions to the commitments made in the agreement. For example, each of the agreements allows for the use of limited tendering. 
Limited tendering allows an entity to contact one supplier directly rather than to utilize open or selective procedures. Nevertheless, this procurement is still otherwise covered under the agreements, according to USTR officials. Each agreement contains lists of circumstances under which government entities may use limited tendering, with similarities across the agreements. For instance, if the covered procurement is for products that are purchased on a commodity market, or if a government entity has not received any tenders that conform to the essential requirements on a notice of intended procurement, a government entity may opt to use limited tendering procedures. Other examples include procurements that can be fulfilled by only one particular supplier and for which no reasonable alternative or substitute exists because the goods or services are works of art or products protected by patents. Each of the agreements includes provisions detailing other exceptions to the agreement. According to a former trade agreement negotiator, these exceptions, also known as derogations, identify procurement not covered by the agreement. In general, these provisions state that nothing in the agreement shall prevent parties from taking measures necessary to protect public morals, order, or safety; necessary to protect human, animal, or plant life or health; necessary to protect intellectual property; relating to goods or services of handicapped persons; relating to goods or services of philanthropic institutions; and relating to goods or services of prison labor. However, each agreement also contains a condition that such measures not be applied in a manner that would constitute a disguised restriction on trade or a means of arbitrary or unjustifiable discrimination between the parties where the same conditions prevail. 
While all of the selected agreements contain the exceptions listed above, some agreements contain additional language in their provisions on exceptions, resulting in differences in the text. First, all of the agreements except the Australia-FTA and the Colombia-FTA specifically state that nothing in the agreement shall prevent any party from taking any action or not disclosing any information that it considers necessary for the protection of its essential security interests relating to the procurement of arms, ammunition or war materials, or to procurement indispensable for national security or for national defense purposes. Second, those two agreements extend the exception related to measures necessary to protect human, animal, or plant life or health to include environmental measures necessary for those protections. Third, the Australia-FTA specifically includes “not-for-profit institutions” in its exception related to measures relating to goods and services of philanthropic institutions. The agreements we examined contain similarities in the procedures for how procurement is conducted and documented. First, a government entity proposes a procurement. Under the selected agreements, unless other procedures apply, for covered procurements, entities must publish a notice. In general, these notices are required to include information such as a description of the procurement and a final date for receiving tenders. These notices often include technical specifications describing exactly what the government entity anticipates procuring. The agreements prohibit the parties from adopting or applying technical specifications that would create unnecessary obstacles to trade. For example, the agreements generally prohibit technical specifications that require particular trademarks or country-specific details. 
The revised GPA, the Colombia-FTA, the South Korea-FTA, and the Australia-FTA add that these rules for technical specifications are not intended to prevent parties from adopting technical specifications for the conservation of natural resources or the promotion of the environment. If a supplier believes it can meet all of the listed requirements and provide the desired goods or services, it may submit a tender. The agreements vary in the amount of time the parties must allow suppliers to submit tenders once the notice has been released. USTR officials told us that because online procurement systems were more efficient than mailing or faxing tenders, deadlines for tender submission could be shorter. Subject to exceptions, the 1994 GPA, NAFTA, the revised GPA, the South Korea-FTA, and the Colombia-FTA require parties to establish a final submission date for tenders in an open procurement of no less than 40 days from the publication of the notice, and the Australia-FTA requires at least 30 days. One such example of an exception is in the revised GPA, which specifies that if parties incorporate electronic means into the procurement process, they may reduce the deadline by 5 days for each of the three electronic means prescribed. As a result, parties to the revised GPA can reduce this deadline to as few as 25 days. Another example can be found in the Colombia-FTA, which allows parties to reduce the general time limit for the submission of tenders to 30 days if the notice of intended procurement is published in an electronic medium and tender documentation is concurrently provided to prospective suppliers electronically. Once a supplier decides to submit a tender, the agreements each generally require government entities to treat all tenders impartially upon receipt. The agreements set out specific requirements that parties must follow when awarding covered contracts. 
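The revised GPA's tender-deadline arithmetic described above (a 40-day baseline, reducible by 5 days for each of the three prescribed electronic means) can be sketched as follows. This is only an illustration of the calculation; the function name is ours and is not drawn from the agreement's text.

```python
# Illustrative calculation of the minimum tender-submission deadline under the
# revised GPA: a 40-day baseline, reduced by 5 days for each of the three
# prescribed electronic means a party incorporates, yielding a 25-day floor.

def revised_gpa_min_deadline(electronic_means_used: int) -> int:
    if not 0 <= electronic_means_used <= 3:
        raise ValueError("the revised GPA prescribes three electronic means")
    return 40 - 5 * electronic_means_used

print(revised_gpa_min_deadline(0))  # 40 days: no electronic means used
print(revised_gpa_min_deadline(3))  # 25 days: all three electronic means used
```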
Generally, to be considered for an award, a tender must have met all of the essential requirements listed in the notice and be from a supplier that complies with the conditions for participation. Furthermore, the procuring entity is then generally required to award the contract to the supplier that is fully capable of undertaking the contract and whose tender meets criteria such as being the lowest cost or the most advantageous. However, a procuring entity need not follow those requirements if the entity decides that it is “not in the public interest” to award the contract. During the procurement process, procuring government entities may make decisions that suppliers, including foreign companies, feel are not in compliance with the provisions of one or more of the agreements. All of the agreements we examined outline procedures for resolving challenges raised by suppliers regarding procuring government entities’ implementation of the procurement process, granting suppliers a mechanism by which to resolve these concerns. Most of the agreements require parties to encourage suppliers to attempt to resolve such concerns through consultations with the procuring entity before entering into the formal supplier challenge process. While the specifics vary among the agreements, the parties generally must designate an impartial authority to review supplier challenges. Suppliers must be allowed at least 10 days from the time when the basis of the complaint became known or reasonably should have become known to the supplier to submit a challenge about an awarded contract. Furthermore, though all of the agreements have requirements related to the issuance of a decision by the designated reviewing authority, there are variations in the requirements. Generally, the reviewing authority is also allowed to issue interim measures, which may include suspending the contract award. 
According to Commerce officials, those interim measures are intended to give the review body time to review the complaint and issue a decision. The revised GPA, the Australia-FTA, and the Colombia-FTA contain a specific provision prohibiting governments from discriminating against a supplier that has an outstanding dispute case on another contract. Additionally, there may be occasions when a party believes its benefits under the agreement have been nullified or impaired, leading to a dispute between member countries. Three GPA-related complaints have been brought to the WTO’s dispute settlement mechanism. NAFTA and both the 1994 GPA and the revised GPA contain additional, specific procedures for resolving government-to-government procurement disputes. All of the FTAs we reviewed also have government-to-government dispute settlement procedures outside of the government procurement chapter, which apply to the whole agreement, including the government procurement provisions. According to WTO-based researchers, one benefit of government procurement agreements is that they can guide countries toward increased integrity and good governance. Each of the agreements we examined includes provisions relating to ethical standards, such as stating that procuring entities may exclude suppliers from the procurement on grounds such as bankruptcy or false declarations. The list of reasons allowing for exclusion of suppliers in each agreement is not exhaustive, and USTR officials told us that parties may exclude suppliers at their own discretion. For instance, the revised GPA, the Australia-FTA, and the South Korea-FTA mention excluding suppliers for significant deficiencies in performance of any substantive requirement or obligation under a prior contract. In addition to these grounds for exclusion, the revised GPA and the South Korea-FTA also cite as grounds for exclusion final judgments in respect of serious crimes or other serious offenses, or failure to pay taxes.
The revised GPA also mentions excluding suppliers that have committed “professional misconduct or acts or omissions that adversely reflect on the commercial integrity of the supplier.” According to the WTO, a key goal of the revised GPA was to include additional language to fight corruption. Some agreements contain explicit anticorruption language. For example, the revised GPA states that a procuring entity shall conduct covered procurement in a transparent and impartial manner that prevents corrupt practices. Moreover, the Colombia-FTA requires parties to establish procedures to declare suppliers that have engaged in fraudulent or other illegal actions in relation to procurement ineligible for a party’s procurement. Each international government procurement agreement contains a coverage schedule determining which entities’ procurement falls under the scope of the agreement. The WTO states that the coverage schedule “plays a critical role” in determining whether a procurement is covered. Each of the agreements we examined also includes a mechanism for parties to rectify or modify these market access schedules, and all of the agreements lay out procedures for such instances. For example, under these procedures in the Australia-FTA, parties may make minor revisions to their market access schedule as long as they provide notification and no other party objects. Some agreements also expressly outline a framework for future negotiations. The 1994 GPA stated that parties must undertake further negotiations within 3 years of the date of entry into force of the agreement, while NAFTA stated that negotiations shall resume on December 31, 1998. Our analysis found that the revised GPA provides generally more comprehensive market access coverage of central (e.g., federal), subcentral (e.g., state), and other government entities (e.g., authorities) than the 1994 GPA and the FTAs we reviewed.
GPA and FTA parties do not open their entire procurement markets to foreign competition. Instead, the agreements have coverage schedules, usually contained in several annexes, which define the party’s market access commitments. These market access commitments identify the procuring entities covered by the agreements at the central and subcentral government levels and, in some agreements, by what is termed “other entities,” such as utilities. The agreements also identify the goods, services, and construction services covered, and exclusions or exceptions are noted by general category or by entity. According to USTR officials, all parties in the agreements we reviewed, including the United States, have certain procurements that they deem sensitive and do not want to open to foreign suppliers, including for social, policy, or national security reasons. These officials stated that, for example, the United States specifies exclusions that include set-asides for small or minority businesses, and trading partners often exclude defense, agriculture, military support, and motor vehicles from their market access commitments. Moreover, the agreements’ coverage of procurements is delineated by threshold values below which procurement activities are not covered and foreign access in accordance with the procedures in the agreement is not guaranteed. We found that the United States increased the number of its covered central government entities from 75 under the 1994 GPA to 85 under the revised GPA. (See table 1.) The central government entity coverage schedule of the United States for both the 1994 and the revised GPA includes 15 executive branch departments, such as the Department of Education and the Department of Justice, and other federal entities, for example, the Farm Credit Administration and the Small Business Administration. Eleven federal entities were added to the revised GPA central government schedule, and one was dropped.
The U.S.’s coverage schedule does not list judicial or legislative branch entities in any of the agreements we examined. We found that the U.S.’s schedules cover fewer central government entities in the FTAs we reviewed than in the revised GPA. While the U.S.’s coverage schedules in the two GPAs and the four FTAs we reviewed include the same 15 executive branch departments, the number of other central government entities covered varies. Over time, the total number of central government entities covered by FTAs increased from 53 under NAFTA to 79 under the South Korea-FTA; however, the FTAs include fewer central government entities than the revised GPA. For example, the four FTAs do not cover 11 central government entities covered by the U.S.’s central government schedule in the revised GPA. As for central government entity coverage for the U.S.’s trading partners in the agreements we reviewed, the top five GPA parties with the largest procurement markets (the EU, South Korea, Canada, Norway, and Japan) cover executive, judicial, and legislative central government entities in their coverage schedules for the revised GPA. For example, Japan covers 25 central government entities, including ministries, agencies, the legislature, the cabinet, and the Supreme Court. Norway covers all of its existing central government entities and, according to a former trade agreement negotiator, stipulates that it will cover central government entities created in the future. The EU covers EU entities and, to the extent stipulated, the central government contracting authorities of EU member states. As compared to its coverage schedule in the 1994 GPA, South Korea added nine new central government entities to the revised GPA, including the Defense Acquisition Program Administration, the Korea Communications Commission, and the Fair Trade Commission. Similarly, other top five U.S.
FTA partner countries that are not parties to the GPA (Colombia and Australia) also cover executive, judicial, and legislative government entities in their market access commitments. Similarly, the United States covers more subcentral level procurement in either GPA than in any of the individual FTAs we reviewed. The number of states covered in the U.S.’s subcentral level commitments has not increased over time, as both the 1994 GPA and the revised GPA cover 37 states. (See fig. 2.) Included are California, New York, and Texas, the three states with the largest procurement markets. The GPA also includes seven states not covered under any FTA we reviewed. Across the FTAs we reviewed, the United States only includes subcentral government entities in the Colombia-FTA and the Australia-FTA. The United States does not list any state in its coverage schedule in NAFTA or the South Korea-FTA. Nevertheless, Canadian and South Korean suppliers are able to participate in state-level procurement covered under the revised GPA in the same manner as domestic suppliers because those countries are also GPA members. The Colombia-FTA covers eight states that are also included in the revised GPA, as well as Puerto Rico. The Australia-FTA covers 30 states that are also included in the GPA, as well as Georgia; this is the only agreement we reviewed that covers Georgia’s procurement. States whose procurement is not covered under any agreement we reviewed are Alabama, Alaska, Indiana, Nevada, New Jersey, New Mexico, North Carolina, North Dakota, Ohio, South Carolina, Virginia, and West Virginia. According to USTR officials, they must obtain state government authorization to cover procurement in a trade agreement on a state-by-state basis, and each state independently determines whether to have its procurement covered under a trade agreement.
Officials noted that despite the flexibilities that states have in determining the scope of procurement covered, such as being able to limit coverage to procurement by specified agencies and to exclude purchases of sensitive goods or services, U.S. state participation has not increased over time. Other countries give U.S. suppliers access to their subcentral government procurement as well. Among the top five GPA parties, there has been an expansion in the number of subcentral government entities covered by the agreement over time between the 1994 and the revised GPA. Canada added a territory to its subcentral government entity coverage schedule in the revised GPA but excluded Iceland and the Principality of Liechtenstein from procurement by the entities listed in that schedule. Japan added 7 designated cities; it covers all 47 prefectures and 19 designated cities. South Korea increased the number of subcentral government entities covered by adding to its coverage schedule the Ulsan Metropolitan City and the local government entities in three metropolitan cities: Seoul (25 local governments), Busan (16 local governments), and Incheon (10 local governments). As stated previously, of the FTAs we reviewed, only the Australia-FTA and the Colombia-FTA include subcentral government entities’ coverage schedules. In these agreements, Australia includes its six states and two territories, and Colombia includes its 32 Gobernación del Departamento entities in the subcentral government procurement market access commitments. In addition to central and subcentral government entities, parties to procurement agreements also cover other government entities. According to USTR officials, “other government entities” are not defined and, as a result, there is wide variation among parties to the agreements as to the types of entities covered.
Each party’s coverage is the result of a variety of factors, including the structure and organization of its government and its market access negotiations with other parties. Under the GPA, some parties define their coverage of other government entities by listing specific entities that will be covered in the annex. Of the GPA parties we reviewed, the United States, Canada, Japan, and South Korea follow this model. The U.S.’s coverage includes electric utilities and ports/port authorities in the agreements we reviewed. Canada’s coverage includes federal Crown corporations, such as some museums and a railway corporation; Japan’s coverage includes banks, centers, corporations, councils, foundations, funds, institutions, and museums; and South Korea’s coverage includes entities such as the Korea Trade-Investment Promotion Agency, the Korea Rail Network Authority, and the Korea Tourism Organization, in addition to a number of banks and corporations. On the other hand, the EU and Norway define their commitments with respect to other government entities by listing activities in specific sectors undertaken by certain classifications of entities rather than by naming specific entities. These sectors include drinking water, electricity, airports, and maritime or inland ports. Under the Australia-FTA and the Colombia-FTA, coverage is limited to only those other government entities listed. Similar to GPA trading partners, these FTA partners also include a variety of entities, such as industry, agencies, and commissions. Canada lists the same other government entities in NAFTA and in the GPA. The United States covers the same 10 other government entities under the 1994 GPA and the revised GPA, while under the FTAs we reviewed, the United States covers only some of these entities. (See table 2.)
According to officials, the coverage of goods, services, and construction services in international procurement agreements is based on the principle that all procurement above thresholds and by covered entities for these items is covered, unless explicitly excluded. Officials noted that all parties have certain procurements that they deem sensitive and do not want to open to foreign suppliers, for example, for social, policy, or national security reasons. Officials also stated that because market access commitments are negotiated on a reciprocal basis, some parties also exclude some types of procurement only from another party, for example, when they believe they are not receiving enough market access of that particular type. According to the WTO, reciprocity is the foundation of coverage commitments generally. In terms of U.S. and trading partners’ exclusions, we found commonalities across the agreements we reviewed. Provisions describing the exclusion of specified goods, services, and construction services can be found in the general notes sections in the agreements and often in a specific annex to the agreements, at an entity level. In particular, exclusions of specified services are generally identified on a positive or negative list. According to a former trade agreement negotiator, most parties use a positive list; that is, only the services listed are covered by the agreement. Under the GPA, Canada, the EU, Japan, Norway, and South Korea use a positive list. The United States uses a negative list approach: unless listed, any procurement by a covered entity of a service meeting the procurement threshold is covered by the agreement. According to Commerce officials, a negative list approach provides more liberal coverage than a positive list approach because it is not an exhaustive list and allows for coverage of new services.
The United States specifies exclusions in the six agreements we reviewed, and these exclusions apply to the procurements of all entities covered by the agreement, except as specified otherwise. The exclusions include “set-asides” for small or minority businesses. For example, U.S. laws promoting the economic development of small businesses, as defined by the U.S. Small Business Administration, reserve certain contracts for these types of firms, according to a former trade agreement negotiator. Similar to the United States, Canada also excludes set-asides for small and minority businesses. Commerce officials noted that the Australia-FTA, the Colombia-FTA, and the South Korea-FTA all exclude purchases for the direct purpose of providing international assistance from the scope of the government procurement commitments. Parties also define specific exclusions of goods, services, and construction services at the entity level. U.S. exclusions of procurements by central government entities are generally similar across the six agreements we reviewed. (See table 3.) Trading partners also set exclusions at the central entity level. For example, Colombia’s exclusions include the procurement of agricultural raw materials or inputs related to agricultural support programs and food assistance by the Ministerio de Agricultura y Desarrollo Rural; Australia’s exclusions include procurement of ship and marine equipment by the Department of Defence and the Defence Materiel Organisation; and South Korea’s exclusions include certain procurements of agricultural, fishery, and livestock products by covered central government entities. The EU coverage schedule also contains specific exclusions regarding the United States under the GPA.
According to a former trade agreement negotiator, under the GPA, the EU withholds about 200 central government entities of member states from the United States and not from other countries because, as noted previously, the United States does not cover all federal agencies and does not cover legislative or judicial entities. At the subcentral level, U.S. states covered in the GPA, the Australia-FTA, and the Colombia-FTA include exclusions to their procurement. As noted previously, the other FTAs do not cover U.S. states’ procurement. First, these agreements include four exclusions that apply to all state entities outlined in the coverage schedules. Provisions in the agreements do not apply to preferences or restrictions associated with programs promoting the development of distressed areas or businesses owned by minorities, disabled veterans, and women; any procurement by a covered state entity on behalf of a noncovered entity at a different level of government; restrictions attached to federal funds for mass transit and highway projects; and procurement of services excluded by the United States, as well as printing services. In addition, under the subcentral level procurement covered under the GPA, the Australia-FTA, and the Colombia-FTA, a number of U.S. states have state-specific exclusions related to certain types of procurement, some of which are the same across these three agreements. (See table 4.) In addition, in procurements by the Texas Building and Procurement Commission, Texas excludes preferences for motor vehicles, travel agencies located in the state, and rubberized asphalt paving made from scrap tire by a Texas facility in the Colombia-FTA and the Australia-FTA but not the GPA. Georgia excludes the procurement of beef, compost, and mulch by the Department of Administrative Services and the Georgia Technology Authority in the Australia-FTA.
Of the FTAs we reviewed, only the Australia-FTA and the Colombia-FTA have subcentral government entity coverage schedules. In the Australia-FTA, at the subcentral government entity level, exclusions include health and welfare services, education services, and motor vehicles for procurement by certain entities. Colombia’s exclusions include food, agricultural raw materials/inputs, and live animals related to agricultural support programs and food assistance. GPA trading partners exclude suppliers, services, and service providers from the United States from specific procurement by listed subcentral government entities. For example, the EU excludes the procurement of air traffic control equipment from suppliers and service providers from the United States. International procurement agreements we reviewed specify threshold values above which procurement activity is covered. Of the revised GPA coverage schedules that we reviewed, many parties apply similar threshold levels. According to officials, the lower the threshold values, the more access foreign suppliers have to the procurement market. Officials also noted that, conversely, foreign suppliers do not benefit from the agreements’ coverage with regard to relatively smaller procurements. For the GPA, threshold values are expressed as special drawing rights (SDR) in coverage schedules delineating covered entities. We found that among the top five WTO GPA procurement markets (the EU, Japan, Canada, South Korea, and Norway) we reviewed, parties set the same threshold levels as at least one other party, with a few exceptions. (See table 5.) In the revised GPA, among the central government entities’ coverage schedules we reviewed, the threshold level for goods and services is 130,000 SDRs (approximately $182,000), and for construction services it is 5 million SDRs (approximately $7 million) for all parties except for Japan.
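The approximate dollar figures attached to the SDR thresholds in this discussion are simple currency conversions. As a rough illustration only, the sketch below reproduces them using an assumed rate of 1.40 U.S. dollars per SDR; the actual SDR exchange rate floats daily and is published by the IMF, so this constant is an assumption, not an official figure.

```python
# Illustrative conversion of GPA threshold values from special drawing
# rights (SDR) to U.S. dollars. The rate below (1.40 USD per SDR) is an
# assumed value chosen to roughly match the approximate dollar figures
# cited in this report; the real rate is set daily by the IMF.
USD_PER_SDR = 1.40  # assumed illustrative rate, not an official figure

def sdr_to_usd(sdr_amount: float) -> float:
    """Convert an SDR-denominated threshold to approximate U.S. dollars."""
    return sdr_amount * USD_PER_SDR

# Central government goods/services threshold: 130,000 SDRs (~$182,000)
central_goods = sdr_to_usd(130_000)
# Construction services threshold: 5 million SDRs (~$7 million)
construction = sdr_to_usd(5_000_000)

print(f"130,000 SDRs is about ${central_goods:,.0f}")
print(f"5,000,000 SDRs is about ${construction:,.0f}")
```

The same function reproduces the other figures cited here, for example 400,000 SDRs converting to roughly $559,000 at a comparable rate.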
For subcentral government entities’ coverage schedules, four parties set the threshold level for goods and services at 200,000 SDRs (approximately $279,000), and two parties set the threshold level at 355,000 SDRs (approximately $496,000). For the procurement of construction services by subcentral government entities, most of the parties set the threshold level at 5 million SDRs (approximately $7 million). For other entities’ coverage schedules, most of the parties set the threshold level for goods and services at 400,000 SDRs (approximately $559,000) and for construction services at 5 million SDRs (approximately $7 million). For the United States, for example, the 2016-17 threshold value is set at $191,000 for procurement of goods and services and $7.4 million for procurement of construction services for covered central government entities in the revised GPA. We also found that, for the most part, threshold levels for these countries did not change between the 1994 GPA and the revised GPA. For example, only Japan lowered its threshold level for the procurement of goods and certain services by central government entities, changing the threshold level from 130,000 SDRs (approximately $182,000) to 100,000 SDRs (approximately $140,000). Additionally, South Korea lowered its threshold level for the procurement of goods by entities listed in its other entities’ coverage schedule from 450,000 SDRs (approximately $628,000) to 400,000 SDRs (approximately $559,000). For the United States, current threshold levels set in the FTAs we reviewed differ from those set in the GPA. Under NAFTA, the Australia-FTA, and the Colombia-FTA, the United States sets a lower threshold of approximately $78,000 for the procurement of goods and services by central government entities. Under NAFTA, the U.S.
threshold level for the procurement of construction services by entities listed in the federal government entities’ coverage schedule and the government enterprises’ coverage schedule is set above $10 million, a higher threshold than the one set for construction services in the GPA. We provided a draft of this report to the USTR and the Department of Commerce for comment. Both agencies provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the USTR, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. To review government procurement commitments, we determined which characteristics these agreements share as well as how they differ. We focused on the five largest procurement markets of parties to the World Trade Organization’s (WTO) Agreement on Government Procurement (GPA) and the five largest procurement markets among active U.S. free trade agreement (FTA) partners since 1994. This scope encompasses six agreements: the 1994 GPA, the revised GPA, and four U.S. FTAs (the North American Free Trade Agreement, the South Korea-FTA, the Colombia-FTA, and the Australia-FTA). We reviewed agreements that entered into force from 1994 on, beginning with NAFTA. We included the GPAs in our scope because they are multilateral international procurement agreements that cover the most parties and the largest markets. In addition, the FTAs we reviewed correspond to the five largest procurement markets among active U.S. FTAs in our scope.
We reviewed 4 of the 14 active U.S. FTAs. Because we reviewed the commitments of the five largest procurement markets covered by the GPA and the five largest procurement markets among active U.S. FTA trade partners since 1994, our findings may not be applicable to all U.S. FTAs. The WTO’s GPA and the government procurement chapters of FTAs are very long, complex documents, making direct textual comparisons challenging without an analytical framework. To compare the text, we created a framework to isolate sections of the text to make direct comparison possible and reliable. First, we had four analysts independently read the text of two of the six agreements to identify overarching themes (broad categories such as “transparency”) and subthemes (specific provisions such as “statistical reporting”) found in these agreements. The analysts were instructed to reference the WTO literature, including the WTO’s statement of general principles, to help reflect the negotiating parties’ main priorities as presented by the WTO. When all four analysts had finished compiling their lists independently, they met and compared them. The four lists shared broad similarities across themes and subthemes, and through discussion they were combined into one draft list. This draft list was shared with officials from the Office of the United States Trade Representative (USTR) and the Department of Commerce (Commerce), as well as with a former trade official and internally with GAO general counsel staff and a methodologist to ensure consistency. After their feedback was incorporated, the final list contained nine themes and 31 subthemes. Using this final list as a framework, two analysts independently read through the text of all six agreements and sorted pieces of text that corresponded to each subtheme. The pieces of text could be as short as one sentence or as long as multiple pages and were sorted according to the content of the text rather than chapter or section headers.
The two analysts independently filled out a spreadsheet, using separate copies of the agreements for referencing to ensure that the integrity of the blind review was not compromised. When they had completed their review, they compared their results and combined them into a master spreadsheet, resolving differences through discussion. For the few instances when agreement could not be reached, the other two analysts who had participated in compiling the draft list of themes were consulted to make a decision. The master spreadsheet was then reviewed by GAO general counsel staff. The master spreadsheet served as a guide and a framework to describe and compare the text and identify similarities and differences across the agreements. This master spreadsheet also assisted us in finding the textual examples contained in the report. In addition to reviewing the text of the agreements, we also analyzed and compared the market access-related commitments (lists of covered entities and excluded procurements) made in the annexes of these agreements. We identified similarities and differences in coverage and exclusions across these agreements. Specifically, we compared the U.S.’s market access commitments in the 1994 GPA to the revised GPA and to the FTAs in our scope. We also identified market access commitments of selected U.S. trading partners in these agreements. To verify our findings, we reviewed WTO and USTR documentation. In addition, we interviewed USTR and Commerce officials to discuss these similarities and differences in coverage and exclusions. We conducted this performance audit from August 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Kimberly M. Gianopoulos, (202) 512-8612, or GianopoulosK@gao.gov. In addition to the contact named above, Adam R. Cowles, Assistant Director; Marisela Perez, Analyst-in-Charge; and Chris Cronin and Grace P. Lui made major contributions to this report. Also, Martin de Alteriis, Gergana T. Danailova-Trainor, Karen Deans, and Neil Doherty provided assistance. The United States and other countries have made commitments under the WTO’s GPA and FTAs that open their government procurement to foreign suppliers. Under these commitments, parties agree to a procedural framework for government procurement, with provisions on issues such as transparency and nondiscrimination. These commitments have potentially opened an estimated $4.4 trillion government procurement market to international firms, providing numerous new opportunities for American businesses in foreign markets and for foreign businesses to compete for U.S. government contracts. As part of your larger request for information on U.S. participation in international procurement agreements, GAO reviewed commitments made by the United States and trading partners in selected international procurement agreements. This report provides information on (1) the provisions and (2) the market access schedules of the selected international procurement agreements. GAO reviewed the provisions and market access schedules across six agreements involving the largest government procurement markets to identify common features and variations. The agreements are the 1994 GPA and the 2014 revised GPA, NAFTA, the South Korea-FTA, the Colombia-FTA, and the Australia-FTA. GAO analyzed WTO and U.S. documents pertaining to the GPA and U.S. FTAs and interviewed USTR and Department of Commerce officials in Washington, D.C.
GAO found that the World Trade Organization’s (WTO) Agreement on Government Procurement (GPA) and the selected U.S. free trade agreements’ (FTA) government procurement chapters that GAO reviewed generally have similarities in text and commitments, potentially because parties negotiated multiple agreements concurrently (see fig.). Each of the agreements outlines the general method for conducting government procurement, including provisions relating to transparency, procurement procedures, and criteria for procurement decisions. However, differences exist, partially because later agreements reflect new technology. The 2014 revised GPA generally provides more comprehensive market access than the selected FTAs GAO reviewed. Partners define the degree to which they will open their procurement markets to suppliers from other countries, known as their market access commitments. These commitments outline the entities covered by the agreements, for example, at the central and subcentral government levels (for the United States, these include agencies of the federal government and states), and for what some agreements term “other entities” (which, for the United States, includes utilities). The United States covers 85 central government entities under the revised GPA, but only 53 entities under the North American Free Trade Agreement (NAFTA). Similarly, the United States covers 37 states under its GPA commitments and from no states to 30 in the FTAs GAO reviewed. While all the top five GPA parties GAO reviewed cover some subcentral government entities, Canada, Mexico, and South Korea do not have a subcentral government entity coverage schedule in their FTA commitments. According to the Office of the United States Trade Representative (USTR), parties have certain procurements that they deem sensitive and do not want to open to foreign suppliers, including for social or policy reasons.
In the agreements GAO reviewed, the United States' trading partners often exclude agriculture, military support, and motor vehicles from their market access commitments. GAO is not making recommendations in this report.
Over the last few decades, various political and economic factors have contributed to the advancement of commercial relations between the United States and China. Trade expanded rapidly after the United States and China signed a bilateral trade agreement in 1979. Total U.S.-China trade increased from about $8 billion in 1985 to $20 billion in 1990 and to $57 billion in 1995, according to the Census Bureau. The economic reforms and open investment policies that China initiated around the same time led to a surge in demand for foreign goods and services to modernize the economy, from infrastructure to industries. China’s rapid economic growth, with its real gross domestic product growing at an average annual rate of about 9.5 percent from 1980 to 2000, generated demand for raw materials and basic commodities, such as steel, iron, and cotton. Economic growth also enhanced the purchasing power of Chinese citizens, especially those living in urban areas. This created a relatively large middle class with the ability to buy foreign consumer goods and services. In addition, China joined the WTO in December 2001, making it subject to the multilateral organization’s trade liberalizing requirements. While U.S. exports to China rose rapidly after 1980, U.S. imports from China grew at an even faster pace, creating a large bilateral trade deficit that continues to increase today. The deficit in goods ballooned from $6 million in 1985 to $34 billion in 1995 and to $162 billion in 2004, according to the Census Bureau. This U.S. trade deficit with China in 2004 accounted for almost 25 percent of the overall U.S. trade deficit. The U.S.-China trade imbalance has gained much political and media attention in recent years and has become a source of trade friction between the two countries. Some policy makers, industry leaders, and labor groups believe that this trade imbalance is costing U.S. jobs in industries trying to compete with imports from China.
Further, some policy makers believe that this trade imbalance may be due to China’s unfair trade practices or its failure to meet all of its WTO obligations. Some policy makers also believe that China’s currency is undervalued relative to the U.S. dollar, thereby inhibiting greater Chinese imports from the United States. The causes of the U.S. trade deficit with China, or with the rest of the world, while complex, are rooted in part in macroeconomic factors at home and abroad, such as national savings and investment decisions, growth levels, monetary and fiscal policies, changes in domestic and foreign prices, and exchange rates. For example, when a country’s budget deficit or domestic spending grows without increases in domestic savings, foreign capital inflows can rise, affecting exchange rates, leading to an increase in imports and deterioration in the trade balance. China, Japan, and Europe have been the largest foreign sources of these capital inflows to the United States in recent years. Some experts remain less concerned about trade deficits because the inflow of foreign capital allows a higher level of investment, benefiting the economy as a whole. China’s accession to the WTO in 2001, a complex process that took 15 years, resulted in commitments to further open and liberalize its economy and offer a more predictable environment for trade and foreign investment in accordance with WTO rules. In particular, China committed to gradually eliminate or lower tariffs and nontariff barriers on a broad range of goods and services. A subsequent survey of U.S. companies showed that they expected China’s membership in the WTO to have a positive impact on their business operations. While U.S. firms are progressively gaining greater market access in China, many issues remain to be solved, from opaque rules and regulations to intellectual property violations. Although still relatively small, China’s market for U.S. goods exports has become increasingly important.
Export growth to China was widespread across 10 major GAO product categories and accelerated in recent years. In particular, raw materials and intermediate inputs for manufacturing, such as cotton for textiles and apparel, experienced the highest growth in recent years. China’s economic development and market liberalization have been major contributors to the growth in U.S. exports. Although still small relative to some other U.S. trading partners, China’s market for U.S. goods has increased in importance. China progressed from the 9th-largest market for U.S. goods exports in 1995 to the 5th-largest in 2004, after Canada, the EU, Mexico, and Japan (see fig. 1). In addition, U.S. goods exports to China exceeded those to any individual EU country. However, exports to China are still significantly smaller than exports to the United States’ largest trading partners, Canada and Mexico. In 2004, $33 billion, or 4 percent of total U.S. goods exports, went to China, compared with $164 billion and $93 billion to Canada and Mexico, respectively. On the other hand, China is a significant market, and in some cases even the largest market, for certain U.S. products—mainly raw materials for manufacturing, building, and agriculture. For example, more than 20 percent of U.S. exports of oil seeds, zinc, cotton products, and raw hides and skins and almost 13 percent of U.S. iron and steel exports were destined for China in 2004. Moreover, despite declines in recent years, China is still the largest market for U.S. fertilizer exports, according to an industry expert, and trade statistics show that fertilizer exports to China accounted for more than 35 percent of total U.S. fertilizer exports in some years. However, there are major exported products for which China remains a small U.S. market, including pharmaceutical products, with exports to China of less than 0.5 percent of total U.S. pharmaceutical exports, and vehicles, with less than 1 percent of U.S. vehicle exports in 2004. 
(See app. II, tables 8 and 9, for details.) Over the past decade, U.S. goods exports to China have grown much faster than overall U.S. goods exports. U.S. exports to China tripled in value and grew at an annual rate of 13 percent versus 2 percent annually for overall U.S. exports, from 1995 to 2004, adjusted for inflation (see fig. 2). In addition, over the same period, U.S. exports to China grew faster than U.S. exports to any other major U.S. trading partner. Growth rates for U.S. exports over the decade were 3 percent for Canada, 7 percent for Mexico, and negative 3 percent for Japan. As a result of the faster growth, China’s share of total U.S. exports more than doubled from 2 percent to over 4 percent from 1995 to 2004. While U.S. goods exports to China grew overall during the past decade, growth was also consistent at the more detailed product category and subcategory level. Over a 10-year period, all but 2 of GAO’s 10 categories of goods (Aircraft, Vehicles, and Other Transportation; and Miscellaneous and Special Provision Goods) had double-digit annual growth rates from 1995 to 2004, as shown in table 1. The category with the highest U.S. export value, $12.8 billion in 2004, was Machinery, Electronics, and High-Tech Apparatus, which grew at an annual rate of 15 percent over the period. Aircraft, Vehicles, and Other Transportation had one of the lowest annual growth rates, at 5 percent. Much of the growth in U.S. goods exports to China occurred in recent years. As shown in table 1, the overall annual growth rate during the second 5 years of the decade, 2000 to 2004, was 19 percent, as compared with 6 percent from 1995 to 1999. In fact, 7 of the 10 goods categories had higher growth rates for the most recent 5-year period than for the previous 5-year period.
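The annual rates above are compound annual growth rates, the constant rate that carries the start-year value to the end-year value. As a rough consistency check, a 13 percent rate compounded over the 9 annual steps from 1995 to 2004 does roughly triple the starting value. A minimal sketch of the calculation, using illustrative numbers rather than the report's underlying data:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over the given number of annual steps."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# A value that triples over the 9 steps from 1995 to 2004 implies a rate
# of about 13 percent per year; conversely, compounding 13 percent for
# 9 years gives a growth factor of about 3.0.
implied_rate = cagr(100.0, 300.0, 9)   # about 0.13
growth_factor = 1.13 ** 9              # about 3.0
```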
For example, Textiles and Apparel, Leather and Footwear, the bulk of which consists of raw cotton (including yarn and woven fabric), had the highest growth rate during the recent 5-year period, at 45 percent annually from 2000 to 2004, as compared with a negative 20 percent growth rate in the previous 5 years. The goods category with the highest U.S. exports to China by value, Machinery, Electronics, and High-Tech Apparatus, grew at a rate of 19 percent during the most recent 5-year period versus 9 percent during the previous 5 years. U.S. export growth to China was also widespread at the more detailed subcategory level. We found that out of 99 subcategories, exports of 88 subcategories increased in value over the past decade. Comparing the average gain in export value between the first and second parts of the decade (1995 to 1999 and 2000 to 2004), we found that most of the value increase, 59 percent, was driven by the following 5 subcategories (see app. II, table 10, for details):
- Electric machinery, sound equipment, and television equipment increased an average of $2 billion per year, 18 percent of the total increase.
- Nuclear reactors, boilers, and machinery increased an average of $1.8 billion per year, 15 percent of the total increase.
- Oil seeds, grain, seed, fruit, and plant products increased an average of $1.6 billion per year, 14 percent of the total increase.
- Iron and steel increased an average of $800 million per year, 7 percent of the total increase.
- Optical, photographic, and medical or surgical instruments increased an average of $700 million per year, 6 percent of the total increase.

Far fewer subcategories, 9 out of 99, experienced declines in U.S. goods exports to China. Comparing the average loss in export value between the first and second parts of the decade (1995 to 1999 and 2000 to 2004), we found that 88 percent of the export decline was driven by the following 3 subcategories (see app.
II, table 11, for details):
- Fertilizer declined an average of $560 million per year, 53 percent of the total decline.
- Cereals declined an average of $185 million per year, 18 percent of the total decline.
- Animal or vegetable fats, oils, and waxes declined an average of $176 million per year, 17 percent of the total decline.

Both rapid economic growth in China and the removal and reduction of Chinese trade barriers have fueled demand for many types of goods in China, particularly raw materials and intermediate inputs for China’s booming manufacturing sector. First, U.S. exports have benefited from this increasing demand in China. For example, a Department of Agriculture report noted that U.S. cotton exports surged primarily because of the rapid growth of China’s textile and apparel industry, especially in 2005 after the expiration of U.S. and other WTO members’ import quotas under the WTO Agreement on Textiles and Clothing. Another contributing factor was a cotton shortage following a poor Chinese harvest in 2003. In addition, U.S. exports of integrated circuits—a key component of consumer electronics, such as digital video disk players, cell phones, and global positioning system devices—have increased because China has become a major manufacturer of consumer electronics and China’s domestic production of integrated circuits can meet only around 10 percent to 15 percent of China’s demand, according to a Commerce trade expert. Exports of electronic integrated circuits and microassemblies have grown at an annual rate of 47 percent since 1995, reaching over $2 billion in 2004. Economic growth in China also has fueled demand for energy-related equipment. For example, U.S. exports of gas turbine parts to China increased dramatically in recent years, reaching $266 million in 2004. Second, market liberalization has been a factor in the growth of U.S. goods exports to China.
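The contribution shares quoted above (for example, fertilizer's 53 percent of the total decline) come from comparing each subcategory's average annual export value across the two halves of the decade. A minimal Python sketch of that arithmetic; the per-year decline figures match those cited above, but the period averages themselves are hypothetical, and the report's denominator also includes six smaller declining subcategories:

```python
# Hypothetical average annual export values, in millions of dollars, for
# 1995-1999 and 2000-2004. Only the differences (560, 185, 176) come from
# the report; the averages are made up for illustration.
avg_1995_1999 = {"fertilizer": 700, "cereals": 250, "fats_oils_waxes": 220}
avg_2000_2004 = {"fertilizer": 140, "cereals": 65, "fats_oils_waxes": 44}

declines = {k: avg_1995_1999[k] - avg_2000_2004[k] for k in avg_1995_1999}
total_decline = sum(declines.values())
shares = {k: round(100 * d / total_decline) for k, d in declines.items()}
```

With all 9 declining subcategories in the denominator rather than just these 3, the same calculation yields the 53, 18, and 17 percent shares reported.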
Reduction or removal of trade barriers has created opportunities for foreign exporters to access China’s market. The United States concluded bilateral negotiations with China on WTO accession in 1999. Then, as part of its 2001 WTO accession agreement, China committed to reducing or eliminating a variety of market access barriers to foreign products. In its agreement, China made specific commitments on the tariff rates for more than 7,000 products covering all imports as well as commitments on trade-distorting practices, such as state trading and quotas, affecting more than 900 products. By 2010, the end of the WTO commitment phase-in period, China’s overall average tariff is scheduled to be less than 10 percent. China has also committed to removing certain nontariff barriers, such as quotas and licensing, by 2005. China is a small but growing market for U.S. services. From 1995 to 2004, China moved from the 12th- to the 7th-largest recipient of services exports, and U.S. services exports to China increased both in dollar value and as a share of U.S. world exports. However, U.S. services exports are relatively small, $7 billion in 2004, compared with goods exports of $33 billion in 2004. Like goods, U.S. services exports to China grew faster than U.S. services exports to the rest of the world. The annual growth rates for exports varied widely among the 5 main services categories and their subcategories. The services categories that grew the fastest over the decade were Royalties and License Fees and Passenger Fares. Among the faster-growing subcategories were telecommunications and financial services, while other subcategories, such as business, professional, and technical services, experienced much slower growth. Also like goods, overall increases in U.S. services exports are likely due to the sharp growth of the Chinese economy and the resulting increased demand for services. 
Slower export growth in some categories may be due to relatively slower trade liberalization in China for services. For example, many WTO services commitments have only recently come into effect or are not yet scheduled to be phased in. Although still small, China is a growing market for U.S. services exports, which grew faster than U.S. services exports to the rest of the world from 1995 to 2004. In 2004, China received 2 percent of total U.S. services exports and was the 7th-largest market for U.S. services, behind the EU at 35 percent, Japan at 11 percent, Canada at 9 percent, Mexico at 6 percent, and South Korea and Switzerland each at 3 percent (see fig. 3). In 1995, China was the 12th-largest. Since then, China has surpassed Australia, Hong Kong, Taiwan, Brazil, and Singapore, which were among the top 12 markets for services exports in 1995. U.S. exports of services to China have grown significantly, with growth moderating somewhat in recent years. The annual growth rate for services exports to China was 10 percent from 1995 to 2004, which was slower than the growth rate for goods (about 13 percent) and faster than the overall growth rate for U.S. services exports to the world (3 percent). As shown in figure 4, from 1995 to 2004, U.S. services exports to China more than doubled in value from $3 billion to $7 billion, and China’s share of total U.S. services exports to the world also grew from 1 percent to over 2 percent. U.S. services exports grew faster in the first part of the period—services exports grew about 11 percent from 1995 to 1999 versus 6 percent from 2000 to 2004. While services exports grew overall over the past decade, growth rates varied among major categories and subcategories. From 1995 to 2004, U.S. services exports to China grew in each of the 5 main services categories: Travel, Passenger Fares, Other Transportation, Royalties and License Fees, and Other Private Services. (See app. 
III for a full description of these categories and their subcategories.) As shown in table 2, Royalties and License Fees grew the fastest over time, with an annual growth rate of 22 percent from 1995 to 2004. While all services categories grew over time, growth rates varied from the first half of the 10-year period to the second half of the period. For example, services in the Other Transportation category had an annual growth rate of 11 percent for the whole period: they declined by 4 percent annually from 1995 to 1999, but grew quickly at a rate of 26 percent from 2000 to 2004. The U.S. government also collects some data on sales of services to China at the subcategory level. As shown in table 3, all but 1 subcategory under the main category of Other Private Services grew overall from 1995 to 2004. Growth ranged from 24 percent for insurance services to 8 percent for the business, professional, and technical services category, and there was a 1 percent decline for the remaining “other” subcategory. U.S. export growth rates under some of these subcategories also varied between the two periods. For example, exports of financial services grew 12 percent in the first period, from 1995 to 1999, and 27 percent in the second period, from 2000 to 2004. One large subcategory for services sales was education, which comprised about half the value of Other Private Services exports, $1.3 billion in 2004, and consists of tuition and living expenses of foreign students enrolled in U.S. colleges and universities. The U.S. government collects further data on the subcategory of business, professional, and technical services, which encompasses a number of key areas, including legal, advertising, and computer and information services. As shown in table 4, from 1995 to 2004, growth rates in exports in these subcategories varied from a 2 percent annual decline for construction, architectural, and engineering services, to 21 percent annual growth for research and development and testing services.
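Where the tables report one rate for the full decade and different rates for each half, the full-period figure is a geometric combination of the two sub-period growth factors, not their simple average. A small sketch of that relationship, using the Other Transportation figures above; the split into 4 and 5 annual steps is an assumption, since conventions for dividing the decade vary:

```python
def chained_cagr(rate_a, years_a, rate_b, years_b):
    """Full-period annual rate implied by two consecutive sub-period
    annual rates (a geometric, not arithmetic, combination)."""
    factor = (1 + rate_a) ** years_a * (1 + rate_b) ** years_b
    return factor ** (1.0 / (years_a + years_b)) - 1.0

# Other Transportation: a 4 percent annual decline in the first half
# followed by 26 percent annual growth in the second works out to roughly
# 12 percent a year overall, in the neighborhood of the 11 percent
# reported (exact agreement depends on year conventions and rounding).
overall = chained_cagr(-0.04, 4, 0.26, 5)
```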
Comparing the first and second half of the decade in table 4, exports in these services appear to be somewhat volatile, since some industries experienced declines and then growth, or vice versa, during the two periods. However, because exports for subcategories under business, professional, and technical services are relatively low (less than $100 million in most cases), changes from one year to the next can drastically affect overall growth rates. Like goods, the growth in U.S. services exports to China over the past decade was driven in part by China’s sharp economic expansion. However, in contrast to goods, the overall value of services exports is smaller, and individual categories and subcategories experienced more variable growth. The smaller value of U.S. services exports to China is consistent with U.S. trade patterns worldwide. Trade in services tends to be smaller than trade in goods in part because many services require some type of direct contact or local presence and thus are harder to provide across international borders. In addition, China is following a common trend among developing countries, which usually have relatively small services sectors, according to the World Bank. However, the smaller overall value of services exports and variable growth rates among services categories may also be due to the fact that China’s services sector has yet to be fully liberalized. For example, most market access commitments for services are to be phased in by 2007. Finally, China’s export-oriented growth has depended on the industrial sector, rather than services. While China’s WTO commitments for services are to provide foreign services providers with increased access to a number of sectors, including business services, communications services, financial services, and tourism- and travel-related services, many of these commitments have only recently come into effect or are yet to be phased in.
For example, under financial services, insurance was scheduled to be fully liberalized over a 5-year period, thereby relaxing many restrictions, such as selective licensing processes and geographic limitations. Also in the financial services area, U.S. banks will be able to provide local (Chinese) currency services without geographic and client limitations by 2006. Telecommunications is likewise scheduled to be liberalized by 2007, when foreign providers will be allowed to offer a broad array of telecommunications services with no geographic restrictions, although only through joint ventures with Chinese partners. International Trade Commission officials contend that joint venture requirements may, in part, explain the smaller value for U.S. services exports compared with goods exports to China. Office of the United States Trade Representative (USTR) officials noted that implementation problems have inhibited market access for services. However, a portion of U.S.-China services trade does not depend upon market access. For example, some U.S. services exports are delivered to Chinese residents by virtue of those residents traveling to the United States to consume the services. Thus, although exports in the subcategory of education are among the largest in terms of dollar value, a good portion of those exports is tuition and living expenses of foreign students enrolled in U.S. colleges. Similarly, a portion of telecommunications exports is international calls from China to the United States. Finally, a good portion of services trade depends not only upon exports, but also upon foreign direct investment and establishing commercial presence, or selling services through local U.S. affiliates. This is the case in China as well.
A Commerce trade expert told us that cross-border trade in services is small relative to the Chinese domestic market, since many services must be provided by firms located in China, and the real potential for growth in services trade is in affiliate sales. For example, while exports in telecommunications are growing, the real potential is in providing telecommunications services to China’s domestic market, such as improving telephone service, which requires setting up local operations. In another example, according to an official from the Coalition of Service Industries, business, professional, and technical services are especially important to coalition members as well as to the Chinese government because the Chinese are anxious to form joint ventures to increase their expertise. The official said that it took a smaller investment to locate services in China compared with a manufacturing plant, especially for financial services, where coalition members preferred to do business through joint ventures and branches. Although U.S. goods exports to China have grown rapidly, other countries, particularly a few Asian countries, have experienced even higher export growth to China, resulting in a drop in the U.S. share of total world exports to China. Similarly, other large suppliers to China, such as the EU and Japan, have experienced declines in their shares, while certain Asian countries, such as South Korea and Taiwan, have increased their shares of overall exports to China. The U.S. decline covers a wide range of products, including autos and parts, plastics and organic chemicals, and optical machinery. However, the United States did increase export share in other products, particularly agricultural goods, such as prepared meats and fish, and preserved fruits and vegetables. The U.S. share of exports to China has declined partly due to the rise of existing smaller exporters and partly due to the growing number of new countries exporting to China.
In particular, as local Asian production processes became more integrated, other East Asian and Southeast Asian countries became larger suppliers of China’s growing manufacturing and assembly operations. Another reason for the declining market share was rapidly increasing Chinese imports of natural resources, such as petroleum, for which the United States is not a major supplier to China. Finally, other macroeconomic and industry-specific factors may have played a role in the declining U.S. market share. Although the value of U.S. goods exports to China has increased every year for the past 10 years, the U.S. share of total exports from all countries to China has declined. The U.S. share of world goods exports to China fell from about 12 percent in 1995 to about 9 percent in 2004, according to Chinese trade statistics. Figure 5 shows the values of world and U.S. goods exports to China as well as the U.S. percentage of world exports to China. As figure 5 shows, overall world exports to China have grown significantly, more than quadrupling from about $130 billion in 1995 to about $522 billion in 2004 (unadjusted for inflation), according to Chinese trade statistics. At the same time, other large suppliers to China also experienced declines in their share of world goods exports to China. Japan and the EU lost overall export share, although, like the United States, they had growing exports to China during the past decade. For example, Japan’s share fell from 22 percent to 18 percent and the EU’s share fell from 16 percent to 13 percent from 1995 to 2004. In contrast, other Asian countries, such as Indonesia, South Korea, Malaysia, Singapore, and Taiwan, increased their export share and, in some cases, surpassed the United States. Regionally, Indonesia, Malaysia, and Singapore are all members of the Association of Southeast Asian Nations (ASEAN), whose members have increasingly integrated their production and trade with China.
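A market share here is simply a country's exports divided by world exports to China, so a falling share is compatible with rising exports whenever world exports grow faster. A quick check with the approximate figures just cited:

```python
# Approximate world exports to China and U.S. shares, per the Chinese
# trade statistics cited above (values in U.S. dollars).
world_1995, world_2004 = 130e9, 522e9
us_share_1995, us_share_2004 = 0.12, 0.09

us_1995 = us_share_1995 * world_1995  # roughly $15.6 billion
us_2004 = us_share_2004 * world_2004  # roughly $47 billion

# U.S. exports roughly tripled even as the U.S. share fell three
# percentage points, because world exports to China quadrupled.
us_exports_grew = us_2004 > us_1995
```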
The United States was the 3rd-largest exporter to China in 1995, but fell to the 6th-largest in 2004, as shown in figure 6. South Korea, Taiwan, and the ASEAN countries surpassed the United States and drew close to overtaking the EU by 2004. The remaining countries as a group increased their share of China’s market as well, from about 22 percent in 1995 to 24 percent in 2004. Russia was the 7th-largest exporter to China in 2004 (at about 2 percent), followed by Hong Kong and Australia. Japan and the EU have, since at least 1995, been larger suppliers to the Chinese market than the United States. Despite overall growth in exports to China, the United States lost share of world goods exports to China in 7 out of the 10 goods categories between the first and second half of the 10-year period (average annual share 1995 to 1999 versus 2000 to 2004), as highlighted in table 5. For example:
- Aircraft, Autos, and Other Transportation: U.S. share dropped 7 percentage points, from 28 percent to 21 percent, even though U.S. exports grew annually on average by 5 percent between 1995 and 2004.
- Chemicals, Plastics, and Minerals: U.S. share fell 4 percentage points, although U.S. exports grew annually on average by 10 percent between 1995 and 2004.
- Machinery, Electronics, and High-Tech Apparatus: U.S. share lost 3 percentage points, despite the fact that U.S. exports grew annually on average by 15 percent between 1995 and 2004; this was the largest category by value of U.S. exports, at $13 billion in 2004.

At a more detailed subcategory level, changes in U.S. export share were mixed. Within the 10 goods categories, we found that a slight majority of individual subcategories (55 out of 99 subcategories) experienced a decline in the U.S. share of world exports to China. Some subcategories for which U.S. market share declined had high export values and were major contributors to export increases (see app. II, table 12, for the top 20 subcategories in terms of market share decline).
For example:
- Oil seeds, with $2.4 billion in exports in 2004, lost 12 percentage points in share between the first and second half of the decade, although the subcategory contributed 14 percent of the total export increase over the period.
- Optical, photographic, and medical instruments, with $2 billion in exports in 2004, lost 6 percentage points in share between the first and second half of the decade, although the subcategory contributed 6 percent of the total export increase over the period.

The two subcategories that contributed the most to export increases between the first and second half of the decade also lost market share, but to a lesser extent:
- Electrical machinery, with $5 billion in exports in 2004, lost 2 percentage points of market share, although it contributed 18 percent of the total export increase over the period. This subcategory includes high-tech products, such as telecommunications equipment, fiber optics, and computer parts.
- Machinery (other than electrical), with $6 billion in exports in 2004, lost 2 percentage points of market share, although it contributed 15 percent of the total export increase over the period.

Only categories related to agricultural goods have gained or maintained U.S. worldwide market share in China, with the exceptions previously noted. The largest overall increase was in the category of Prepared Food, Beverages, Spirits, and Tobacco, in which the United States increased its share of world exports to China by 3 percentage points, from 13 percent to 16 percent over the decade (see table 5). The U.S. export increase in this category moved the United States from the 4th-largest supplier to China in 1995 to the 2nd-largest supplier of that category in 2004, behind Peru (see fig. 7). According to the Department of Agriculture, exports to China from Peru are primarily fish meal.
For the other 2 agriculture-related categories, the United States increased its share by about 1 percentage point in Animal and Plant Products, while its share remained relatively stable in Textiles, Apparel, Leather, and Footwear at about 7 percent over the decade, on average. (See app. II, table 13, for the top 20 subcategories by increase in the U.S. share of world goods exports to China.) Again, at the more detailed subcategory level within these broad agricultural categories, changes in export share were mixed. Most subcategories under Prepared Food, Beverages, Spirits, and Tobacco have increased their world export share in China. For example:
- Prepared meats and fish increased by 20 percentage points, from 11 percent to 31 percent over the decade.
- Preserved foods, including a wide range of fruits, vegetables, nuts, and juices, grew by 14 percentage points, from 31 percent to 45 percent over the decade.

However, some subcategories under Animal and Plant Products experienced mixed results. For example:
- Lac, gums, and resins grew in share over the decade from 16 percent to 32 percent; edible fruits and nuts increased from 7 percent to 15 percent.
- Edible vegetables and certain roots and tubers declined from 19 percent to 8 percent; animal or vegetable fats, oils, and waxes declined from 13 percent to 3 percent.

Over the past decade, the U.S. share of world exports to China has declined for a variety of reasons. The loss in the U.S. share of worldwide exports to China has occurred as the volume of exports from smaller exporters has grown and the number of countries exporting to China has increased. These trends are due to multiple factors. For example, greater integration of Asian production processes has helped certain other Asian countries become larger suppliers of China’s growing manufacturing and assembly operations.
Also, China’s resource-based imports, including those from oil-producing countries, rose sharply as its rapid economic growth increased demand for raw materials. In addition, macroeconomic factors that influence the overall U.S. trade deficit, including exchange rates with China and other Asian countries, as well as industry-specific circumstances, are likely to have affected the overall U.S. export share. Individual countries, particularly those from Asia, have increased their share of world exports to China at the expense of the United States and other large suppliers. South Korea, Taiwan, and ASEAN countries, including Indonesia, Malaysia, and Singapore, have all increased their export share overall and in many particular products (e.g., autos and parts, and electronics). There are several reasons why these countries may have increased their market share in China. Regional economies in Asia have increasingly integrated their production processes, producing parts for products in one location and shipping them to another for assembly. China has become a key component of this process due to its relatively low labor costs and large-scale production potential. For example, China is a major producer of electronic products, such as computers and video players, which may be sold domestically or exported worldwide. As an indication of this, China’s largest import and export subcategory is electrical machinery, which includes finished products, such as consumer electronics, as well as components used in their production, such as electronic circuits. In addition, this greater integration of production processes can affect trade trends. For example, U.S. components and parts exports first sent to other countries and incorporated into those countries’ exports to China are not captured in U.S.-China bilateral trade statistics. Trade statistics only show direct imports and exports between countries.
For products that are produced by a process that involves a series of locations, trade statistics generally do not capture the value of inputs coming from locations other than the last point of export. Therefore, for some U.S.-made components, such as semiconductors, that are modified and added to in Malaysia, Singapore, and the Philippines before being sent to China for use in cell phones and computers, trade statistics generally count them not as U.S. exports to China but as U.S. exports to Singapore or the Philippines. Furthermore, the number of countries that export to China has expanded. One export category that is particularly relevant here is chemicals, plastics, and minerals, which had the second-highest U.S. export value in 2004 and includes resource-based products, particularly petroleum products, of which the United States exports very little to China. All major suppliers lost overall export share in this category as a wider range of countries supplied exports in this category between 2002 and 2004. Petroleum products are one of the fastest-growing Chinese imports in recent years and account for a substantial share of overall Chinese imports. In fact, crude oil from petroleum and bituminous minerals was the second-largest Chinese import by value in 2004, worth about $34 billion and accounting for over 6 percent of total Chinese imports, according to Chinese statistics. Many oil-producing countries’ exports to China grew faster than the average annual growth rate of 27 percent for all countries from 2000 to 2004. Table 6 lists examples of oil-producing countries with high growth rates: Chad, Congo, Saudi Arabia, Angola, and the United Arab Emirates. Finally, there are also a variety of other broad macroeconomic factors, such as exchange rates, as well as industry-specific circumstances, that may have affected the U.S. share of world exports to China. For example, many economists believe the Chinese renminbi/yuan is undervalued relative to the U.S. 
dollar, and certain Asian currencies that experienced devaluations during the Asian financial crisis may therefore have a competitive advantage in China. A higher-valued dollar would be expected to dampen U.S. exports to China by making U.S. exports more expensive in the Chinese market relative to other countries’ exports. Also, industry-specific circumstances may provide certain countries with export advantages over the United States. For example, in some industries, companies from other countries may have been operating in the Chinese economy longer than U.S. companies and may be gaining export share due to their greater experience. In addition, according to one Commerce specialist, some countries have provided “tied aid” to China, in which they provide financial support for certain types of investments, such as environmental projects, on the condition that their companies, rather than U.S. companies, supply the materials and components for the project. Furthermore, in the agricultural sector, Chinese concerns about animal and plant diseases may restrict, at least temporarily, access for U.S. agricultural products and reduce U.S. export share. Through increased foreign direct investment in China, U.S. companies have increasingly sold their goods and services directly to the Chinese market through their local affiliates, and as a result U.S. affiliate sales have come to exceed U.S. exports to China. Although small, U.S. investment in China has been growing, and investment levels have been similar to those of China’s other trading partners. U.S. companies generally concentrated their investments in China in the manufacturing sector, in industries such as transportation equipment, chemicals, and computers and electronic products. U.S. investment in China funds the creation of U.S. affiliates, which then sell in China and to other countries, including the United States, and U.S. 
affiliate sales of goods and services have become an important avenue for accessing the Chinese market. In fact, the value of U.S. affiliate sales in China has exceeded the value of U.S. exports to China since 2002. Factors such as the growing Chinese market, lower labor costs, and China’s WTO accession have allowed U.S. companies to increase their investment and sales in China, although some challenges remain. U.S. foreign direct investment in China, although relatively small as a share of total U.S. foreign investment, has grown and is concentrated mainly in manufacturing (see fig. 8). In 2004, China ranked 12th as a recipient of U.S. investment, and the cumulative stock of U.S. investment in China was $15 billion. This was a relatively small amount compared with U.S. investment going to other major U.S. trading partners. For example, in 2004, the cumulative stock of U.S. investment in the EU, Canada, and Japan was $952 billion, $217 billion, and $80 billion, respectively. However, U.S. investment in China has grown. For example, the cumulative stock of U.S. investment in China grew from $2 billion in 1995 to $15 billion in 2004. In addition, according to Chinese statistics, U.S. realized annual investment flows into China grew at about 6 percent annually from 1995 to 2003. This growth rate matched that of the EU (also 6 percent) and slightly exceeded that of Japan (4 percent) during the same period. U.S. investment levels in China were similar to those of China’s other trading partners. According to Chinese statistics, in 2003, 8 percent of China’s realized annual investment flows came from the United States, compared with 7 percent from the EU, 8 percent from Korea, 9 percent from Japan, 6 percent from Taiwan, and 5 percent from ASEAN countries. While Chinese statistics show Hong Kong as providing 34 percent of investment in China, this figure may be inflated by mainland Chinese investment going through Hong Kong. 
In 2003, the United States was the 5th-largest investor with $4 billion of realized annual investment flows in China, behind Hong Kong, the British Virgin Islands, Japan, and Korea (see fig. 9). Realized investment represents the actual annual flow of investment. For example, in a 3-year contract to invest $100 million, the realized annual investment might be $25 million the first year, $50 million the second year, and $25 million the third year. Total foreign direct investment in China in 2003 was $53.5 billion. BEA classifies data on U.S. investment abroad by industry, using a classification system that is based on sales information, which companies report in surveys. BEA collects data on foreign affiliates’ production of goods and sales of goods and services. The major industries for which it collects data include, among others, mining, utilities, manufacturing, and wholesale trade. In 2004, U.S. investment in China was concentrated in manufacturing, which accounted for over half of the cumulative stock of U.S. investment in China, or $8.2 billion. Other broad industry groupings in which U.S. companies invested their cumulative stock in China in 2004 included wholesale trade at $1.8 billion and mining at $1.7 billion (see app. IV, table 15). BEA also classifies data on U.S. investment by subcategories in the manufacturing sector. Examples of U.S. investment in manufacturing subcategories included $1.8 billion in the transportation equipment industry, $1.6 billion in the chemicals industry, and $1.3 billion in the computers and electronic products industry (see app. IV, table 16). With their $15 billion investment in Chinese manufacturing and other sectors, U.S. companies have established local affiliate companies that increasingly sold their goods and services to the Chinese market. In 2003, U.S. majority-owned affiliate companies sold about 75 percent of their goods and services, or about $38 billion, to the Chinese market. 
The remaining 25 percent of goods and services were sold to other countries. These sales included goods and services that were exported back to the United States, about 7 percent of the total. This indicates that most U.S. companies’ investment is meant to access the Chinese market, rather than to use China to provide goods and services back to the U.S. market. By 2002, U.S. companies sold more goods and services in China through their local affiliates than they exported from the United States. In 2003, U.S. affiliates in China sold about $38 billion in goods and services to China, while U.S. exports to China that same year were $35 billion. The annual growth rate of affiliate sales was 33 percent between 1995 and 2003, compared with 11 percent for U.S. exports. Figure 10 shows affiliate sales from 1995 to 2003 and exports from 1995 to 2004. For goods alone, U.S. affiliate sales in China have also surpassed U.S. exports since 2001 (see fig. 11). In 2003, U.S. affiliates sold about $34 billion in goods to the Chinese market, while U.S. companies sold about $29 billion through exports that same year. The annual growth rate of affiliate sales of goods was 33 percent between 1995 and 2003, compared with 13 percent for U.S. exports of goods. Unlike goods sales, U.S. companies export more services to China through cross-border trade than they sell through affiliates. U.S. affiliates’ services sales in China were about two-thirds the level of U.S. services exports in 2003, that is, $4 billion in affiliate sales versus $6 billion in exports (see fig. 12). U.S. affiliates’ services sales in China were growing faster: the annual growth rate of affiliate sales of services was 36 percent between 1995 and 2003, compared with 10 percent for U.S. exports of services. However, services are not a major component of sales by U.S. affiliates in China and accounted for about 10 percent of all sales by U.S. affiliates located in China. 
The majority of local sales through U.S. affiliates, 67 percent in 2003, were in the broad industrial grouping of manufacturing, with over half of these sales in the computers and electronic products industry (see app. IV, table 18). The other notable industry was wholesale trade, at about 20 percent of all local sales by affiliates. Other smaller industries included professional, scientific, and technical services and utilities, each accounting for about 1 percent of total sales (see app. IV, table 17). China’s growing domestic market, improving productivity, low labor costs, and improving infrastructure have made it an increasingly attractive investment venue for U.S. companies. The U.S.-China Business Council reported in 2004 that China’s implementation of its WTO commitments has increased foreign investors’ ability to expand their operations. The trend of U.S. affiliate sales surpassing U.S. exports to China is consistent with the pattern of U.S. trade and investment worldwide. U.S. multinational companies typically sell their goods and services directly through their foreign affiliates. However, some experts believe that U.S. investment and affiliate sales in China are particularly important. According to a 2004 Center for Strategic and International Studies report, China is an attractive venue for foreign investment due to its large domestic consumption market, improved productivity, better infrastructure, and higher technology standards and quality control. Morgan Stanley reported in 2002 that sales of foreign affiliates, rather than exports, are rapidly becoming the primary means by which U.S. products are delivered to the Chinese market; therefore, U.S. export figures do not fully capture the true level of U.S. commercial sales in China. The report said that China’s massive consumer and labor markets set it apart from the rest of the world, and many U.S. 
firms have no choice but to “be on the ground there.” A Coalition of Service Industries official predicted that U.S. affiliate sales of services would continue to rise and eventually overtake services exports, especially in financial services, in which coalition members do business through joint ventures and branches. The Center for Strategic and International Studies reported in 2004 that increasing total foreign direct investment in China indicates that investment is targeting the domestic market for the long term, rather than focusing on China as an export platform. According to the International Trade Commission, however, for some countries, particularly in Asia, investing in China as an export platform is still an important part of their foreign direct investment. However, some factors limit or discourage U.S. companies from investing in China. For example, the Chinese government is still undergoing market-oriented reforms, such as phasing in many of its WTO commitments for services; these reforms affect how U.S. companies access the market for services in China, which may explain why sales of services through affiliates lagged behind exports of services. According to USTR’s December 2004 report, China’s opaque regulatory process and burdensome licensing and operating requirements continue to frustrate the efforts of U.S. services providers in a number of industries. For example, the report cited excessive capital requirements for U.S. companies in the insurance, banking, and telecommunications sectors, among others, that might prevent U.S. services companies from operating in China. This analysis reflects a range of issues affecting U.S.-China trade and investment, including broad factors, such as China’s economic development and exchange rate regime, and more narrowly focused sector-specific aspects, such as China’s growing importance as a market for U.S. 
goods and services, the rise in integrated production among China’s regional trading partners, China’s increasing demand for oil, and China’s growing role as an attractive venue for foreign direct investment. Beyond this broad overview, further study may be warranted in particular areas, as follows: The strong, overall growth in U.S. exports over the past decade shows that some U.S. companies are successfully selling their goods and services to China. This growth has occurred despite current trade frictions between the two countries, which include allegations that China has not been meeting all of its WTO commitments and that China is limiting market access for U.S. companies. An in-depth analysis of the relationship between U.S. export growth in China and China’s WTO commitments might provide useful insights about the impact of China’s reforms. U.S. services exports to China, although growing, are still relatively small and undeveloped, in part because most WTO commitments for services either have just recently come into effect or will do so in the near future, and in part because of various Chinese restrictions. Therefore, it may be best to observe the pattern of U.S. services exports once WTO commitments have been phased in and given more time to be implemented. Integrated production, in which China’s neighbors, including Japan and South Korea, use China as an assembly and export platform for their products, has implications for the nature of the U.S. trade deficit with China. Examining integrated production, particularly for China’s largest import and export category, machinery, electronics, and high-tech apparatus, in which the United States lost export market share in China to other countries, may shed light on reasons for the U.S. trade imbalance with China. The rise in U.S. affiliate sales relative to U.S. exports also has implications for U.S. companies’ ability to access the Chinese market. For example, declines in U.S. 
vehicle exports to China appear to be offset by U.S. companies’ shifting production to China and selling vehicles directly to the Chinese market through their affiliates. Exploring the extent to which, and the reasons why, affiliate sales have replaced exports as a means of accessing the Chinese market may be useful. The significance of exchange rates to the U.S.-China trade relationship is likely to remain an issue of continuing policy interest. While China’s exchange rate has changed only slightly since the Chinese government announced a policy modification in July, further changes could, over time, provide additional opportunities to study the relationship between exchange rates and trade patterns. We provided USTR, the Departments of Agriculture and Commerce, and the International Trade Commission with a draft of this report for their review and comment. All four agencies chose to provide technical comments from their staff. Their comments focused on descriptions of data sources, methodologies for computing data, and explanations of trends in the data. We modified the report in response to their suggestions. We will send copies of this report to the appropriate congressional committees, the U.S. Trade Representative, the Departments of Agriculture and Commerce, and the International Trade Commission. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2717 or at yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. A GAO contact and staff acknowledgments are listed in appendix V. 
As part of a long-term body of work that the Chairman and the Ranking Minority Member of the Senate Committee on Finance as well as the Chairman and the Ranking Minority Member of the House Committee on Ways and Means requested, we (1) analyzed U.S. goods exports to China and how they have changed over time, (2) analyzed U.S. services exports to China and how they have changed over time, (3) assessed how U.S. exports to China have fared against other major trading partners’ exports to China, and (4) analyzed U.S. investment and affiliate sales in China. In order to examine U.S. goods exports to China, we collected annual U.S. export statistics from 1995 to 2004 at various levels of detail from the Department of Commerce, U.S. Census Bureau. We used U.S. Bureau of Labor Statistics export price deflators to deflate the U.S. trade data so that all of the values reported are in 2004 constant dollars. We did not deflate the foreign direct investment data since we report U.S. investment abroad on an historical cost basis, which reflects the prices of earlier periods in which the investment was made and represents the accumulated stock of these investments. Although Commerce’s U.S. Bureau of Economic Analysis (BEA) calculates investments at current prices for aggregate investment (which could then be deflated), it does not do so on a bilateral basis. Therefore, we do not have an appropriate way to adjust these values for any potential effects of inflation. We used the U.S. harmonized tariff schedule (HTS) chapter level deflators when available; otherwise, we used the corresponding section deflators. For the few cases where section deflators were missing as well, we used the general export price deflator. To facilitate a broad analysis of U.S. exports to China, we grouped imports and exports into higher-level categories of goods on the basis of the HTS product codes and our discussions with tariff experts. 
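The constant-dollar conversion described above rebases each year's nominal value by the ratio of the 2004 deflator to that year's deflator. A minimal sketch in Python; the index values shown are hypothetical placeholders, not actual BLS deflator data:

```python
# Sketch of rebasing nominal trade values to 2004 constant dollars.
# The index values below are hypothetical, not actual BLS figures.

def to_2004_dollars(nominal_value, year, deflators):
    """Rebase a nominal value to 2004 dollars using a price index keyed by year."""
    return nominal_value * deflators[2004] / deflators[year]

export_price_index = {1995: 96.1, 2000: 98.4, 2004: 104.9}  # hypothetical values
real_1995 = to_2004_dollars(11.7, 1995, export_price_index)  # $ billions, hypothetical
```

A value already expressed in 2004 dollars passes through unchanged, since the ratio of the 2004 deflator to itself is 1.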
We took the 22 section headings of the HTS and grouped them into 10 broader categories. Table 7 shows these groupings in detail. We studied the composition of exports by grouping products into 10 categories. We compared the annual growth rate of U.S. exports to China with U.S. exports to the world from 1995 to 2004 and also from 2001 to 2004 at the overall level and the products level. We identified products with the highest growth and decline in terms of both value and percentage. We (1) analyzed the importance of China as an export market for the United States by looking at the share of exports going to China at various HTS levels and (2) identified products for which exports to China accounted for a significant share. We interviewed a range of government and industry experts to obtain contextual information on selected export products we identified in our data analysis. Throughout this report, we relied on secondary sources and did not independently review Chinese law and regulations to determine their effect on sales and investment. To analyze U.S. services exports to China, we collected annual U.S. export statistics from 1995 to 2004 from BEA. We used BEA price indices for major service categories from the National Income and Product Accounts to convert the trade data so that all values reported are, unless noted otherwise, in 2004 constant dollars. We identified the importance of China as a U.S. services export destination by looking at China’s overall share of U.S. services exports. We calculated the growth rates for services categories and subcategories using a log function, so that we could examine the categories with the highest growth. In some cases, when BEA did not publish values for some categories in some years, we estimated the missing values and based the growth rate on those estimations. We also interviewed several government and industry experts to obtain contextual information about services trade with China. To assess how U.S. 
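The "log function" growth-rate calculation mentioned above is not spelled out in the report; one standard approach, sketched here as an assumption, fits a log-linear trend to the annual values and converts the fitted slope into an average annual growth rate:

```python
import math

def log_growth_rate(values):
    """Least-squares fit of ln(value) = a + g * t over the series; returns
    exp(g) - 1, the implied average annual growth rate. This log-linear
    method is an assumption; the report does not give its exact formula."""
    n = len(values)
    ts = range(n)
    ys = [math.log(v) for v in values]
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) \
        / sum((t - t_mean) ** 2 for t in ts)
    return math.exp(slope) - 1
```

For a series growing exactly 10 percent per year, the fit returns 0.10. Unlike an endpoint-to-endpoint ratio, the regression uses every year's value, so a single anomalous year does not dominate the estimate.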
exports to China fared against other major trading partners’ exports to China, we collected official Chinese trade statistics from Global Trade Information Services, a licensed contractor with the Chinese government. We reviewed these data, including comparing them with comparable United States, European Union (EU), and Japanese trade statistics. These data differ from the U.S. statistics, particularly on the Chinese export side, given that a portion of China’s trade passes through Hong Kong before going to its ultimate destination. However, after reviewing the academic literature on this discrepancy and general reviews of China’s statistics, we found that the Chinese import statistics (which we use in this report) should generally record the country of origin of imports accurately, regardless of whether products first transit through Hong Kong. On the basis of our review of these statistics and the literature, we found that these data were sufficiently reliable for comparing the relative U.S. share of world exports to China over time with that of other major trading partners, such as the EU and Japan. We analyzed the U.S. share of world exports to China from 1995 to 2004 at a broad GAO category level (see table 7) as well as at a more detailed subcategory level. We compared the trends over two 5-year periods (as discussed in this report). In addition, we also compared the trends over 3-year periods to confirm that the results still generally held. We calculated the average share across these time periods by taking the average of the share in each of the years (e.g., the average of the shares for 1995, 1996, 1997, 1998, and 1999 in order to get the average share for 1995 to 1999). However, we also calculated the weighted average share across these time periods (using the value of trade as weights) in order to confirm that our results still generally held. 
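The two share measures described above can be sketched as follows; the simple average treats each year equally, while the trade-weighted average lets high-trade years count for more:

```python
def simple_average_share(us_exports, world_exports):
    """Average of each year's U.S. share of world exports to China."""
    shares = [u / w for u, w in zip(us_exports, world_exports)]
    return sum(shares) / len(shares)

def weighted_average_share(us_exports, world_exports):
    """Trade-value-weighted share: total U.S. exports over total world exports."""
    return sum(us_exports) / sum(world_exports)
```

For example (hypothetical figures), with U.S. exports of 50 and 10 against world exports of 100 and 200 in two years, the simple average share is 27.5 percent while the weighted average is 20 percent, because the weighted measure gives more weight to the larger-trade year, in which the U.S. share was lower.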
When we report the value of Chinese trade flows (as opposed to the share of China’s trade) using Chinese trade statistics, it is in nominal dollars. We were not able to identify an appropriate trade deflator for Chinese statistics in order to remove the potential effect of inflation or deflation on the values. Finally, we also interviewed industry experts from the Departments of Agriculture and Commerce and the International Trade Commission, and reviewed government and academic reports to obtain contextual information about the trends in the U.S. share of world exports to China. To study growth in affiliate sales and foreign direct investment, we collected annual U.S. affiliate sales statistics from 1995 to 2003 and investment statistics from 1995 to 2004 from BEA. We identified China’s importance as a foreign direct investment destination by looking at China’s overall share of U.S. foreign direct investment, and examined the United States’ importance as a foreign direct investor in China by looking at the overall share of U.S. foreign direct investment in China compared with other countries. We calculated the growth rates for investment categories and subcategories using a log function, so that we could examine which categories had the highest growth and decline in terms of both value and percentage. BEA reports data on foreign direct investment on an historical cost basis. We evaluated the growth in affiliate sales of goods and services and compared these values with the growth in cross-border trade in goods and services. We calculated the growth rates for affiliate sales categories and subcategories using a log function. In some cases, when BEA did not publish values for some categories in some years, we estimated the missing values and based the growth rate on those estimations. 
In one case, for foreign direct investment in the finance and insurance category, we had to use an average annual growth rate instead of a log function because BEA reported the value of sales as a negative number. We used Chinese Consumer Price Index data to deflate the affiliate sales to 2004 constant dollars. We also interviewed several government and industry experts to obtain contextual information about U.S. investment in and affiliate sales to China. For each of the data sets that we used, we examined the data and found them sufficiently reliable for the purposes of our report. In addition, although Chinese data on U.S. exports have limitations and differ from U.S. statistics, we found them to be sufficiently reliable to present individual countries’ relative shares of China’s trade. We performed our work from April 2005 to December 2005 in accordance with generally accepted government auditing standards. U.S. exports of goods to China have grown rapidly over the past decade, while the U.S. share of world exports to China has fallen during the same period. These trends are evident at a broad category level and at a more detailed subcategory level. This appendix provides detailed information on these changes as well as selected information on particular products. Specifically, the tables that follow provide additional information on subcategories of products. Table 8 lists subcategories whose exports to China accounted for more than 10 percent of total U.S. exports in that category in 2004; table 9 lists subcategories whose exports to China accounted for less than 1 percent of total U.S. exports in 2004; table 10 lists subcategories with the largest increases in export value over the decade; table 11 lists subcategories with the largest declines in export value; table 12 lists subcategories with the largest declines in the U.S. share of world exports to China; and table 13 lists subcategories with the largest increases in the U.S. share of world exports to China. 
Finally, table 14 provides complete data on U.S. exports to China and the U.S. share of world exports to China for the 10 broad categories of goods and their 99 subcategories. China is a major market for some U.S. goods exports. Table 8 lists the products for which exports to China accounted for more than 10 percent of U.S. exports in that category in 2004. In particular, there were four products for which more than one-quarter of total U.S. exports were destined for China in 2004. China is an insignificant market for some other U.S. goods exports. Table 9 lists products for which less than 1 percent of U.S. exports went to China in 2004. Some of these products are among major U.S. exports, such as vehicles and pharmaceutical products, which had overall exports of $67 billion and almost $19 billion in 2004, respectively. U.S. goods exports to China increased in value over the past decade for the majority of subcategories, 88 out of 99. High-value subcategories, such as electric machinery, and those experiencing the highest growth rates, such as oilseeds, were the main contributors to the overall increase in export value. As shown in table 10, the top 5 subcategories accounted for close to 60 percent of the total export increase for the period. Far fewer, 9 of the 99 goods subcategories, experienced declines in export value in the past decade. Fertilizer experienced the largest decline, with the annual average for the second 5 years dropping $560 million from that of the first 5 years, accounting for more than half of the total export decline. The U.S. share of world goods exports to China declined overall from 1995 to 2004. Table 12 shows the top 20 products that experienced declines in share between 1995 to 1999 and 2000 to 2004, as well as information on the overall value of U.S. exports to China in 2004. 
Carpets and other textile floor coverings experienced the largest decline of any subcategory, falling from 42 percent of the world exports to China between 1995 and 1999, to only 13 percent between 2000 and 2004, on average. U.S. exports to China of carpets and other textile floor coverings in 2004 were $4 million. However, the table shows that a range of products experienced declines, including tobacco, toys, some agricultural products, and fertilizers. Although, as previously discussed, 55 of the 99 subcategories of U.S. exports to China experienced a decline in the share of world exports to China, 42 subcategories experience a rise from the first half of the decade (1995 to 1999) to the second half (2000 to 2004). Table 13 shows the top 20 of these 42 categories that experienced increases in share. Miscellaneous edible preparations—which are types of prepared foodstuffs, including certain products, such as mayonnaise, mixed seasonings, and certain sauces and syrups—experienced the largest increase over the decade, growing from a 14 percent share to a 40 percent share of world exports, on average. Once again, the subcategories that experienced increases covered a wide range of products, including gums and resins, printed books and newspapers, and aircraft. However, many of these, as shown in table 13, were agricultural products. Finally, table 14 provides information across all 10 broad categories and the 99 subcategories that comprise them on the average annual growth of U.S. exports to China from 1995 to 1999, 2000 to 2004, and over the whole decade of 1995 to 2004. The table also shows U.S. exports to China in 2004 and the average U.S. share of world exports to China in 1995 to 1999, 2000 to 2004, and over the whole time period. Finally, the table shows the difference between U.S. share of exports in 1995 to 1999 and 2000 to 2004. Oil Seeds, Etc.; Misc Grain, Seed, Fruit, Plant, Etc. 
[Tables 13 and 14 are not reproduced legibly here. For each HTS subcategory within the 10 broad product groups (for example, prepared food, beverages, spirits, and tobacco; wood and paper products; textiles, apparel, leather, and footwear; glassware, precious metals and stones, and jewelry; machinery, electronics, and high-tech apparatus; aircraft, autos, and other transportation; and miscellaneous manufacturing), they show the average U.S. share of world exports to China in 1995 to 1999 (column A) and in 2000 to 2004 (column B), U.S. exports to China in 2004, average annual growth over 1995 to 2004, and the percentage point change, (B) minus (A).]

HTS is the U.S. Harmonized Tariff Schedule. We adjusted these data for inflation and expressed them in 2004 constant dollars. HTS chapter 77 is reserved for future use. Chapter 99 consists of temporary legislation, temporary modifications proclaimed pursuant to trade agreements legislation, and additional import restrictions proclaimed pursuant to section 22 of the Agricultural Adjustment Act, as amended.
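The share figures in these tables follow a simple calculation: average the yearly shares within each period, then take the difference, column (B) minus column (A). A minimal sketch follows; the yearly shares are hypothetical stand-ins chosen to loosely mirror the carpets subcategory, whose average share fell from 42 to 13 percent.

```python
# Illustrative sketch (not the report's actual computation): average the
# U.S. share of world exports to China over each period, then take the
# percentage-point change, column (B) minus column (A).
# The yearly share figures below are hypothetical stand-ins.

def average_share(shares_by_year):
    """Average annual share over a period, in percent."""
    return sum(shares_by_year) / len(shares_by_year)

# Hypothetical yearly shares (percent) for one HTS subcategory.
shares_1995_1999 = [44, 43, 42, 41, 40]   # column (A) period
shares_2000_2004 = [15, 14, 13, 12, 11]   # column (B) period

col_a = average_share(shares_1995_1999)   # 42.0
col_b = average_share(shares_2000_2004)   # 13.0
change = col_b - col_a                    # -29.0 percentage points

print(f"A={col_a:.0f}%, B={col_b:.0f}%, change={change:+.0f} points")
```

Note that the change is expressed in percentage points, not percent: a drop from 42 to 13 is a 29-point decline, even though it is roughly a 69 percent relative decrease.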
The percentage point change is derived by subtracting the value in column (A) from the value in column (B). Goods and services purchased by U.S. persons traveling abroad and by foreign travelers in the United States. Fares received by U.S. air carriers from foreign residents for travel between the United States and foreign countries and between two foreign points. U.S. international transactions arising from the transportation of goods by ocean, air, land, pipeline, and inland waterway carriers to and from the United States and between two foreign points. Transactions with nonresidents that involve patented and unpatented techniques, processes, and formulas, as well as trademarks, copyrights, franchises, broadcast rights, and other intangible rights, including rights to distribute, use, and reproduce general-use computer software. Expenditures for tuition and living expenses by foreign students enrolled in U.S. colleges and universities and by U.S. students for study abroad. Funds management, credit card services, fees and commissions on transactions in securities, implicit fees paid and received on bond trading, fees on credit-related activities, and other financial services. Investment income of insurance companies on funds that are treated as belonging to policyholders, and auxiliary services such as agents’ commissions, actuarial services, insurance brokering, and agency services. Receipts and payments between U.S. and foreign communications companies for the transmission of messages between the United States and other countries. Expenditures (except employee compensation) by foreign governments in the United States for services such as maintaining their embassies and consulates, as well as expenditures by international organizations headquartered in the United States and of foreign residents employed temporarily in the United States. Computer and data processing services and database and other information services.
Management services, except management of health care facilities; consulting services, except consulting engineering services related to actual or proposed construction projects and computer consulting; and public relations services, except those that are an integral part of an advertising campaign. Commercial and noncommercial research, product development services, and testing services. Rentals for computer and data processing equipment and for transportation equipment (such as ships, aircraft, and railway cars) without crew or operators. Examples include accounting, auditing, and bookkeeping; advertising; construction; and legal services.

From 1995 to 2004, U.S. foreign direct investment in all industries in China grew at about a 21 percent average annual rate. BEA classifies U.S. foreign investment into broad industry groups, such as wholesale trade, manufacturing, mining, finance, and utilities. Growth among industries varied, as shown in tables 15 and 16. In 2003, U.S. affiliates in China sold about $38 billion in goods and services to the Chinese market. BEA collects data on U.S. affiliate sales under generally the same industry groups as foreign direct investment, as shown in tables 17 and 18. These industry groups may include operations that produce both goods and services. For example, although U.S. affiliates in the manufacturing industry generally produce goods, they may also produce some services.

In addition to the individual named above, Adam Cowles, Assistant Director; Ming Chen; Leah DeWolf; Grace Lui; Jamie McDonald; Yesook Merrill; Nina Pfeiffer; Paul Revesz; Tim Wedding; and Seyda Wentworth also provided assistance.

China is important to the global economy and a major U.S. trading partner. By joining the World Trade Organization (WTO) in 2001, China pledged to further liberalize its trade regime and follow global trade rules.
While U.S.-Chinese commercial relations have expanded, controversies have emerged, including the size and growth of the U.S. trade deficit with China, China's lack of intellectual property protection, and China's implementation of its WTO obligations. Despite these challenges, China's vast consumer and labor markets present huge opportunities for U.S. exporters and investors. GAO (1) analyzed U.S. goods and services exports to China, (2) assessed how U.S. exports to China have fared against those of other major trading partners, and (3) analyzed U.S. investment and affiliate sales in China. We provided the Office of the U.S. Trade Representative, the Departments of Agriculture and Commerce, and the International Trade Commission with a draft of this report for their review and comment. These agencies chose to provide technical comments from their staff. We incorporated their suggestions as appropriate. China is a rapidly growing market for U.S. goods and services. Although still small, accounting for only 4 percent of U.S. goods exports in 2004, U.S. goods exports to China tripled, from $11 billion to $33 billion, and increased across virtually all major categories from 1995 to 2004. Over the same period, China went from the ninth-largest to the fifth-largest U.S. market for goods, behind Canada, the European Union, Mexico, and Japan. Although smaller, U.S. services exports grew from $3 billion to $7 billion from 1995 to 2004. Economic growth in China and liberalization of its market, including joining the WTO, are among the factors driving the impressive export growth. Despite rapid growth, U.S. goods exports to China have not kept pace with those of other countries, particularly exports from Asia. The U.S. share of world goods exports to China declined from 12 percent to 9 percent from 1995 to 2004, while South Korea's and Taiwan's shares increased and at times surpassed that of the United States.
The decline is partly due to increased integrated production among China's neighbors; growing resource-based exports, such as oil, from smaller countries; and macroeconomic factors, including exchange rates. Sales to China by U.S. affiliates located in China grew faster than U.S. exports to China and exceeded them in 2003 ($38 billion versus $35 billion), while U.S. foreign direct investment grew from $2 billion to $15 billion from 1995 to 2004. Growth in U.S. investment and affiliate sales, particularly for goods, is due at least in part to China's attraction as a growing economy, including its burgeoning domestic market, high productivity and low labor costs, and developing infrastructure.
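As an illustration of the growth figures above, a compound annual growth rate can be computed from the endpoint values. This is only a rough cross-check: the report's "average annual growth" figures may instead average the year-over-year rates, which generally gives a different number.

```python
# Illustrative CAGR cross-check on the export figures cited above.
# Treat this only as a rough check; the report's own averages may be
# computed differently (e.g., as the mean of yearly growth rates).

def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# U.S. goods exports to China: $11 billion (1995) to $33 billion (2004).
print(f"{cagr(11, 33, 2004 - 1995):.1%}")  # roughly 13% per year
```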
In fiscal year 2010, DOD offered health care to almost 9.7 million eligible beneficiaries through its TRICARE program. TRICARE is organized into three regions, and within these regions, beneficiaries may obtain health care from either providers at military treatment facilities or civilian providers. TRICARE provides three basic options for its non-Medicare-eligible beneficiary population. These options vary according to enrollment requirements, the choices beneficiaries have in selecting civilian and military treatment facility providers, and the amount beneficiaries must contribute toward the cost of their care. (See table 1.) TRICARE also offers other options, including TRICARE Reserve Select, a premium-based health plan that certain Reserve and National Guard servicemembers may purchase. Under TRICARE Reserve Select, beneficiaries may obtain health care from either nonnetwork or network providers, similar to beneficiaries using TRICARE Standard or Extra, respectively, and pay lower cost-shares for using network providers. TRICARE is a regionally structured program that is organized into three main regions—North, South, and West. (See fig. 1 for the location of the three regions.) TMA manages civilian health care in each of these regions through contractors. As of March 2011, the second generation of TRICARE contracts was in operation, and TMA was in the process of awarding the third generation of contracts. The contractors are required to establish and maintain adequate networks of civilian providers within designated locations referred to as Prime Service Areas. In these areas, civilian provider networks are required to be large enough to provide access for all TRICARE beneficiaries, regardless of enrollment status or Medicare eligibility. These civilian provider networks are also required to meet specific access standards for TRICARE Prime beneficiaries, such as standards for travel times or wait times.
However, the access standards do not apply to beneficiaries using options other than TRICARE Prime, such as TRICARE Standard or Extra. The contractors are also responsible for helping TRICARE beneficiaries locate providers and for informing and educating TRICARE beneficiaries and providers on all aspects of the TRICARE program. In addition, they provide customer service to any TRICARE beneficiary who requests assistance, regardless of enrollment status. TMA has a TRICARE Regional Office in each region that helps to manage health care delivery. These offices are responsible for overseeing the contractors, including monitoring network quality and adequacy and customer-satisfaction outcomes. Similar to the contractors’ efforts, these offices provide customer service to all TRICARE beneficiaries who request assistance, regardless of their enrollment status. Civilian providers must be TRICARE-authorized to be reimbursed for care under the program. Such authorization requires a provider to be licensed by his or her state, to be accredited by a national organization if one exists, and to meet other standards of the medical community. There are two types of authorized civilian providers—network and nonnetwork—and both types may accept TRICARE beneficiaries as patients on a case-by-case basis, regardless of enrollment status. Network providers are TRICARE-authorized providers who enter into a contract with the regional contractor to provide care to TRICARE beneficiaries and agree to accept TRICARE reimbursement rates as payment in full. By law, TRICARE reimbursement rates for civilian providers are generally limited to Medicare rates, but network providers may agree to accept lower reimbursements as a condition of network membership. Network providers are not obligated to accept all TRICARE beneficiaries seeking care.
For example, network providers may decline to accept TRICARE beneficiaries as patients because their practices do not have sufficient capacity or for other reasons. Nonnetwork providers are TRICARE-authorized providers who have not entered into a contractual agreement with a contractor to provide care to TRICARE beneficiaries. Nonnetwork providers may accept the TRICARE reimbursement rate as payment in full, or they may charge up to 15 percent above the reimbursement amount. The beneficiary is responsible for paying the extra amount billed in addition to the required cost-shares. Claims data from fiscal years 2006 through 2010 show that overall TRICARE claims paid to civilian providers increased by more than 50 percent, even though the eligible population increased by less than 6 percent. (See table 2.) Between fiscal years 2006 and 2010, TRICARE Standard and Extra beneficiaries’ use of network providers—as measured by the number of claims paid to network providers—increased significantly, while their use of nonnetwork providers—as measured by the number of claims paid to nonnetwork providers—decreased slightly. (See fig. 2.) Specifically, their use of network providers increased by more than 66 percent between fiscal years 2006 and 2010, compared with a decrease of about 10 percent in their use of nonnetwork providers over the same period. Reimbursement rates have been cited as the primary impediment hindering beneficiaries’ access to civilian health care and mental health care providers under TRICARE Standard and Extra. TMA can increase reimbursement rates in certain circumstances when a need has been demonstrated. Although national and local shortages of certain types of providers have also been cited as an impediment to TRICARE Standard and Extra beneficiaries’ access to civilian providers, TMA is limited in its ability to address this impediment because it affects the general population, not just TRICARE beneficiaries.
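The nonnetwork billing rules described above lend themselves to a simple worked example. Only the 15 percent balance-billing cap comes from the text; the allowed charge and cost-share percentage below are hypothetical illustrations, not actual TRICARE rates.

```python
# Sketch of the nonnetwork billing rules described above. The allowed
# charge and cost-share rate are hypothetical examples; only the 15%
# balance-billing cap is taken from the text.

BALANCE_BILL_CAP = 0.15  # nonnetwork providers may bill up to 15% above the rate

def beneficiary_out_of_pocket(allowed_charge, cost_share_rate, billed_charge):
    """What a beneficiary owes for nonnetwork care.

    The beneficiary pays a cost-share on the allowed charge, plus any
    balance billing (capped at 15% above the allowed charge).
    """
    max_billable = round(allowed_charge * (1 + BALANCE_BILL_CAP), 2)
    if billed_charge > max_billable:
        raise ValueError("billed amount exceeds the 15% balance-billing cap")
    balance_bill = max(0.0, billed_charge - allowed_charge)
    return allowed_charge * cost_share_rate + balance_bill

# Example: $100 allowed charge, 25% cost-share, provider bills $115.
print(beneficiary_out_of_pocket(100.0, 0.25, 115.0))  # 25 + 15 = 40.0
```

The point of the example is that balance billing falls entirely on the beneficiary, on top of the normal cost-share, which is why nonnetwork care can cost noticeably more than network care for the same service.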
Additionally, beneficiaries’ access to mental health care is affected by provider shortages and other issues and is of particular concern because the stress of deployment and redeployment has increased the demand for these services. Since TRICARE was implemented in 1995, some civilian providers—both network and nonnetwork—have expressed concerns about TRICARE’s reimbursement rates. For example, in 2006, we reported that both network and nonnetwork civilian providers said that TRICARE’s reimbursement rates tended to be lower than those of other health plans and that, as a result, some of these providers had been unwilling to accept TRICARE Standard and Extra beneficiaries as patients. More recent studies by TMA and others have cited TRICARE’s reimbursement rates as the primary reason civilian providers may be unwilling to accept these beneficiaries as patients. For example: TMA’s first multiyear survey of civilian providers (2005 through 2007) showed that TRICARE’s reimbursement rates were the primary reason cited by providers for not accepting TRICARE Standard and Extra beneficiaries as new patients. Similarly, results from the first 2 years (2008 and 2009) of TMA’s second multiyear provider survey showed that the responding providers cited TRICARE’s reimbursement rates as one of the primary reasons that they would not accept new TRICARE patients even though they would accept new Medicare patients. In a 2008 study on civilian providers’ acceptance of TRICARE Standard and Extra beneficiaries, CNA reported that the medical society officials and physicians they interviewed cited low reimbursement as the primary reason for limiting their acceptance of TRICARE beneficiaries as patients. The providers interviewed as part of this study noted that, while they could accept more TRICARE beneficiaries as patients, there were services for which the reimbursement was so low that doing so hurt rather than helped their practices.
In addition to these studies, officials from each of the TRICARE Regional Offices and two of the contractors, as well as a national provider organization, told us that reimbursement rates were civilian providers’ primary concern about TRICARE. Concerns about TRICARE’s reimbursement rates—which generally mirror the Medicare program’s physician fee schedule—have been heightened by the uncertainty surrounding the annual update to these Medicare fees. All of the contractors expressed concerns about the proposed decreases to Medicare rates and how they would affect providers’ acceptance of TRICARE patients. One contractor told us that providers already were expressing concerns about the Medicare rate decreases and that some providers said they would no longer accept TRICARE beneficiaries as patients if the rates were reduced. Furthermore, as of September 2010, this contractor noted that one provider had stopped accepting TRICARE beneficiaries as patients because of concerns about potential Medicare reimbursement reductions. TMA has the authority to adjust TRICARE reimbursement rates under certain conditions to increase beneficiaries’ access to civilian providers, and it has done so in some instances. In response to various concerns about providers’ willingness to accept TRICARE patients, TMA contracted with a consulting firm to conduct a number of studies about TRICARE reimbursement rates, and some of these studies have resulted in increases to reimbursement amounts for certain procedures. (See app. II for a summary of the studies.) For example, in response to civilian obstetric providers’ concerns about TRICARE reimbursement rates, TMA conducted an analysis of historical TRICARE claims data and made nationwide changes to its physician payment rates for obstetric care in 2006.
These changes included an additional payment for ultrasounds for uncomplicated pregnancies that is likely to result in overall higher payments for civilian physicians who perform one or more ultrasounds during the course of a pregnancy. TMA also has the authority to adjust reimbursement rates through the use of waivers in areas where it determines that the rates have had a negative impact on TRICARE beneficiaries’ access to civilian providers. TMA can issue three types of reimbursement waivers, depending on the type of adjustment that is needed. Locality waivers may be used to increase rates for specific medical services in specific areas where access to civilian providers has been severely impaired; they apply to both network and nonnetwork providers. Network waivers may be used to increase reimbursement rates for network providers up to 15 percent above the TRICARE reimbursement rate in an effort to ensure an adequate number and mix of primary and specialty care network civilian providers in a specific location. Finally, TMA can restore TRICARE reimbursement rates in specific localities to the levels that existed before a reduction was made to align TRICARE reimbursement rates with Medicare rates; these waivers apply to both network and nonnetwork providers. Waivers can be requested by providers, beneficiaries, contractors, military treatment facilities, or TRICARE Regional Office directors, although all requests must be submitted through the TRICARE Regional Office directors. Individuals may apply for waivers by submitting written requests to the TRICARE Regional Offices. These requests must contain specific justifications to support the claim that access problems are related to low reimbursement rates and must include information such as the number of providers and TRICARE-eligible beneficiaries in a location, the availability of military treatment facility providers, geographic characteristics, and the cost-effectiveness of granting the waiver.
Ultimately, the TRICARE Regional Office director reviews and analyzes the requests. If the TRICARE Regional Office director agrees with the request, he or she recommends to the Director of TMA that the waiver request be approved. Each analysis is tailored to the specific concerns outlined in the waiver request. Once implemented, waivers remain in effect indefinitely or until TMA officials determine they are no longer needed. As shown in table 3, the total number of waivers has increased from 15 to 24 since we last reported on TMA’s use of waivers in 2006. (See app. III for more details about the waivers.) Additionally, 13 of the 24 waivers are for locations in Alaska. (See app. IV for more information about access-to-care issues in Alaska.) Other than assessing the effectiveness of a specific rate adjustment in Alaska, TMA has not conducted analyses to determine whether its rate adjustments or its use of waivers have increased beneficiaries’ access to civilian providers. Nonetheless, officials told us that the waivers have proved successful in maintaining the stability of the provider networks and in increasing the size of the networks in some areas. Another main impediment to TRICARE beneficiaries’ access to civilian providers is a shortage of certain provider specialties, both at the national and local levels. However, TMA is limited in its ability to address provider shortages because this impediment affects the entire health care delivery system and is not specific to the TRICARE program. Although the number of civilian providers accepting TRICARE has increased over the years, access to civilian providers remains a concern due to national and local shortages of certain provider specialties. These shortages limit access for the general population, including all TRICARE beneficiaries regardless of enrollment status. Several organizations have reported on national provider workforce shortages in primary care as well as in a number of specialties.
For example, the Association of American Medical Colleges reported national shortages in provider specialties such as anesthesiology, dermatology, and psychiatry. Additionally, the contractors and regional office officials we met with told us that they were particularly concerned about the national shortage of child psychiatrists. In addition to national shortages, TRICARE beneficiaries’ access to civilian providers also may be impeded in certain locations where there are insufficient numbers and types of civilian providers to cover the local demand for health care. According to the contractors, each TRICARE region had areas with civilian provider shortages. For example: In TRICARE’s West region, a Prime Service Area in northern California had shortages in 21 different provider specialties, including allergists and obstetricians as well as psychologists and psychiatrists. According to this region’s contractor, either there were no providers located in the area or the providers located in the area were already contracted as TRICARE network providers. In TRICARE’s South region, the contractor identified locations in Texas, Louisiana, and Florida in which there were limited numbers of specialists and mental health providers. For example, according to this contractor, Del Rio, Texas, has no providers in several specialties, including dermatology, allergy, and psychiatry. Likewise, in TRICARE’s North region, the contractor stated that there are mountainous areas, such as parts of West Virginia, and remote areas, such as western North Carolina, in which there are provider shortages. Consequently, the general population, including TRICARE beneficiaries, has to drive longer distances to obtain certain types of specialty care. TMA has attempted to address civilian provider shortages, but because these shortages are not specific to the TRICARE program, there are limitations in what TMA can do.
One step TMA has taken is the adoption of a bonus payment system that mirrors the one used by Medicare for certain provider shortage areas. Under Medicare, providers who furnish services to beneficiaries located in Health Professional Shortage Areas—geographic areas that the Department of Health and Human Services has identified as having shortages of primary health, dental, or mental health care providers—receive 10 percent bonus payments. Beginning in June 2003, TMA began offering providers a 10 percent bonus payment for services rendered in these same locations. TMA estimated that from fiscal year 2007 through the third quarter of fiscal year 2010, more than 20,000 individual providers received these payments. Currently, civilian providers must include a specific code on every TRICARE claim they submit to obtain the additional payment. However, TMA officials noted that some providers may not be receiving this bonus because they do not include the specific code on their claims. TMA officials noted that the process will become easier once the third generation of managed care support contracts is implemented. Once this occurs, the contractors will rely on the Centers for Medicare & Medicaid Services’ public database of zip codes to determine a provider’s eligibility for these bonus payments instead of requiring the provider to include a code on each claim. TMA officials estimated that this change will result in an additional $150,000 in bonus payments each year for TRICARE claims. Access to mental health care is a concern for all TRICARE beneficiaries, and it has been affected by provider shortages and other issues, including providers’ lack of knowledge about combat-related issues, providers’ concerns about reimbursement rates, and providers’ lack of awareness of TRICARE.
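The bonus-payment mechanics described above (a flat 10 percent add-on for services rendered in a Health Professional Shortage Area, with eligibility eventually determined from the CMS zip-code database rather than a claim code) can be sketched briefly. The zip codes and dollar amounts below are made-up placeholders, not entries from the actual CMS list.

```python
# Sketch of the HPSA bonus logic described above: a flat 10% bonus on the
# allowed amount when the place of service falls in a Health Professional
# Shortage Area zip code. The zip codes below are hypothetical
# placeholders, not the real CMS HPSA list.

HPSA_BONUS_RATE = 0.10

# Under the third-generation contracts, eligibility is to come from the
# CMS public zip-code database rather than a code on each claim.
hpsa_zip_codes = {"99501", "79837"}  # hypothetical entries

def total_payment(allowed_amount, service_zip):
    """Allowed amount plus the 10% HPSA bonus when the zip qualifies."""
    bonus = allowed_amount * HPSA_BONUS_RATE if service_zip in hpsa_zip_codes else 0.0
    return allowed_amount + bonus

print(total_payment(80.0, "99501"))  # in a shortage area: bonus applies
print(total_payment(80.0, "20001"))  # outside: allowed amount only
```

A zip-code lookup of this kind removes the failure mode the text describes, in which providers forfeit the bonus simply by omitting a code from the claim.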
A 2007 report by the American Psychological Association noted that shortages of mental health providers specifically trained in military issues, and the challenge of modifying the military culture so that mental health services are less stigmatized, are impediments to TRICARE beneficiaries’ access to mental health care. Furthermore, the report notes that even where mental health providers are available, it can be difficult to find psychiatrists and other mental health providers with specific familiarity with TRICARE beneficiaries’ mental health conditions, such as post-traumatic stress disorder and deployment issues. This can be frustrating for TRICARE beneficiaries who seek mental health care only to discover that providers cannot relate to their specific concerns. Over the years, Congress has required DOD to report on TRICARE beneficiaries’ access to mental health care providers. Specifically, the NDAA for Fiscal Year 2008 required DOD to report on the adequacy of access to mental health services under the TRICARE program. In 2009, DOD reported that it believed access to mental health care providers for TRICARE beneficiaries was adequate due to a dramatic increase in both inpatient and outpatient mental health care provided in 2008. DOD also cited increases in the numbers of mental health providers from May 2007 to May 2009 in both the direct care system of military treatment facilities (1,952) and the civilian provider network (10,220), while acknowledging that there may still be some areas where access to mental health care providers is inadequate. However, in the same report, DOD noted that TRICARE Standard and Extra beneficiaries reported more problems finding civilian mental health care providers than beneficiaries who use other health care coverage, and that psychiatrists have the lowest acceptance rates of new TRICARE Standard and Extra beneficiaries compared with other providers.
In its 2009 Access to Mental Health Services report, DOD noted that the two reasons most cited by civilian mental health providers, including psychiatrists, for not accepting new TRICARE patients were “not aware of TRICARE” and “reimbursement.” DOD also reported that TMA would increase outreach to mental health providers in selected locations to improve awareness of the program. In addition to the increased outreach, DOD also reported two initiatives designed to enhance beneficiaries’ access to mental health care—the Telemental Health Program and the TRICARE Assistance Program. The Telemental Health Program connects TRICARE beneficiaries in one office to civilian mental health providers in another medical office through an audiovisual link. The TRICARE Assistance Program is a Web-based program that enables certain beneficiaries to contact licensed civilian counselors 24 hours a day for short-term, nonmedical issues. Also, in recognition that mental health is an issue of concern for its beneficiaries, each of the TRICARE Regional Offices and contractors has established staff positions that focus specifically on mental health issues, including access to care. More recently, the NDAA for Fiscal Year 2010 required DOD to report on the appropriate number of personnel to meet the mental health care needs of servicemembers, retired members, and dependents and to develop and implement a plan to significantly increase the number of DOD military and civilian mental health personnel, among other requirements. In response to this requirement, DOD reported in February 2011 that it has identified criteria for the military services to use in determining the appropriate number of mental health personnel needed to meet the needs of their beneficiaries. However, DOD also noted that the military services are still testing and validating these criteria to determine how effective they would be in gauging adequate mental health staffing numbers.
Therefore, although DOD reported increases in the number of mental health providers employed at military treatment facilities or contracted to join TRICARE’s network of providers, it did not specifically estimate the appropriate number of mental health care providers needed. DOD also reported that initiatives are under way to increase the number of mental health providers in military treatment facilities, including increasing the number of Public Health Service providers serving in military treatment facilities as well as recruitment and retention incentives. These initiatives, if successfully implemented, could reduce the demand for civilian mental health providers in those locations. TMA and its contractors have used various feedback mechanisms, such as surveys, to gauge beneficiaries’ access to care under TRICARE Standard and Extra. More recently, TMA officials have taken steps to develop a model to help identify geographic areas where beneficiaries who use TRICARE Standard and Extra may experience access problems. However, because this initiative is still evolving, it is too early to determine its effectiveness. TMA has primarily relied on feedback to gauge beneficiaries’ access to civilian providers under TRICARE Standard and Extra because, historically, access to care has only been routinely monitored for beneficiaries enrolled in TRICARE Prime, the only option with access standards. These feedback mechanisms have included surveys of civilian health care (including mental health care) providers as well as surveys of nonenrolled beneficiaries who are eligible to use the TRICARE Standard and Extra options as well as TRICARE Reserve Select. Additionally, TMA and its contractors use feedback from beneficiaries’ inquiries and complaints to help identify problems with access, among other issues. In fiscal year 2005, TMA implemented its first multiyear survey of civilian providers (network and nonnetwork) as required by the NDAA 2004.
TMA’s survey was intended to assess beneficiaries’ access to civilian providers under the TRICARE Standard and Extra options by determining whether civilian providers would accept these beneficiaries as new patients. In 2006, we reported on TMA’s survey methodology, among other issues, and found that it was sound and statistically valid. TMA’s results for this first multiyear survey of civilian providers, which was fielded through 2007, showed that about 8 of 10 physicians and behavioral health providers accepted TRICARE beneficiaries as new patients, if they accepted any patients at all. However, while these results appear favorable, as we reported in 2006, there is no benchmark with which to compare them. Subsequently, the NDAA 2008 required TMA to conduct two multiyear surveys—one of civilian providers and one of nonenrolled beneficiaries—to determine the adequacy of access to health care and mental health care for these beneficiaries. In March 2010, we reported that the methodology for both of TMA’s surveys was sound and generally addressed the methodological requirements outlined in the law. TMA has completed the first 2 years (2008 and 2009) of these surveys. TMA and its contractors also use feedback collected from beneficiaries’ inquiries and complaints to identify and gauge potential problem areas, including issues with access to care. However, this type of feedback is not representative because not every beneficiary who has a question or complaint will contact TMA or its contractors. TMA uses its Assistance Reporting Tool to collect and analyze information on the beneficiary inquiries that it receives, including inquiries on access to care from beneficiaries who use TRICARE Standard and Extra. During fiscal years 2008 through 2010, data from the Assistance Reporting Tool showed that only about 5 percent of closed cases on all TRICARE-related beneficiary inquiries and complaints were from TRICARE Standard and Extra beneficiaries.
Further, of the total inquiries and complaints received from these beneficiaries, TMA reported that 313 cases (2 percent) were access-to-care related. The contractors separately receive feedback from beneficiaries through some or all of the following methods: (1) telephone, (2) e-mail, (3) in person at a TRICARE Service Center, or (4) in writing. Each contractor collects and reports information on its beneficiary feedback differently. In reviewing contractors’ data on beneficiary inquiries or complaints received, we found the following: During fiscal year 2009, TMA’s contractor in the North region reported receiving 11,176 access-to-care inquiries (less than 1 percent) out of a total of more than 5 million inquiries. This contractor does not categorize its inquiries by TRICARE option, but does collect and categorize inquiries specific to access-to-care concerns. In fiscal year 2010, the contractor received 3,642 access-to-care inquiries (less than 1 percent) out of a total of more than 5 million inquiries. TMA’s contractor in the South region reported that during calendar year 2009, it received a total of 7,785 complaints. Of these, 175 (2 percent) were submitted by TRICARE Standard and Extra beneficiaries. While access to care was not a top reason for these complaints in 2009, this contractor reported that 15 of the complaints received were related to beneficiary appointment and wait times. This contractor also reported that it received a total of 7,927 complaints in calendar year 2010. Of these, 134 (about 2 percent) were submitted by TRICARE Standard and Extra beneficiaries, and only 14 of the 134 complaints were specific to beneficiary appointment and wait times. Finally, data submitted to us by TMA’s contractor in the West region showed that it received a total of 809 grievances from TRICARE beneficiaries between January 2008 and December 2010.
Of these, TRICARE Standard and Extra beneficiaries submitted 83 grievances (about 10 percent), and about 2 percent of the 83 were specific to provider appointment wait times. TMA has recently initiated steps to establish an approach to routinely monitor beneficiaries’ access to both network and nonnetwork providers under the TRICARE Standard and Extra options. (The new approach will also apply to beneficiaries using the TRICARE Reserve Select option.) In recognition that the military health system had no established measures for determining the adequacy of network and nonnetwork providers for these beneficiaries, in February 2010, TMA’s Office of Policy and Operations directed the TRICARE Regional Offices to develop a model to identify geographic areas where these beneficiaries may experience access problems as well as areas of provider shortages for the general population. The model is intended to help the TRICARE Regional Offices and their contractors identify geographic areas where additional efforts to increase access to civilian providers may be warranted. To implement this approach, TMA recommended that each regional office adapt and standardize the model that had originally been developed by its West regional office in 2008. This model applies a specific provider-to-beneficiary ratio, based on the Graduate Medical Education National Advisory Committee’s recommended standards for health care services, to different provider specialties to determine whether there are sufficient numbers and types of providers for the nonenrolled beneficiary population in certain locations. To identify locations for analysis, West regional office officials used zip codes to identify locations with populations of 500 or more nonenrolled beneficiaries.
According to officials in the West regional office, they then identified the network and nonnetwork providers who practiced and had previously accepted a TRICARE patient in these same locations and applied a specific provider-to-beneficiary ratio against each provider specialty included in the model for the locations assessed. Each regional office has developed a model that generally follows the same methodology and includes similar data as the West regional office’s model, although variations exist. For example, while one regional office includes provider data representing 15 provider specialties, another regional office includes 40 provider specialties in its model. Officials at one regional office told us they have plans to update their model to reflect changes in the beneficiary population, and an official at another regional office said that staff were already in the process of updating their model, which may include additional provider demographic factors. TMA directed each TRICARE Regional Office to apply the model at least semiannually beginning on May 1, 2010. According to officials in TMA’s South region, they plan to apply the model semiannually as directed, while TMA’s regional offices in the North and West apply the model as needed. More specifically, since TMA’s office in the North region implemented the model, it has assessed 20 locations, and now applies the model as needed in response to specific concerns. Meanwhile, officials from TMA’s office in the West region told us that they initially applied the model to over 50 locations and that they now apply the model as needed, such as in response to a specific inquiry about access to care in a particular location. Officials in the North regional office noted that their model’s data are used in conjunction with other indicators to assess whether further analysis of civilian provider availability is needed.
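The adequacy model described above reduces to a simple comparison: for each location large enough to assess, compare the count of TRICARE-experienced providers in each specialty against a target derived from a provider-to-beneficiary ratio. A minimal sketch of that logic follows; the ratios and the example location are hypothetical illustrations, not the Graduate Medical Education National Advisory Committee's actual standards:

```python
# Hypothetical sketch of the regional offices' provider-adequacy model.
# Ratio values are illustrative placeholders, not the actual GMENAC standards.

MIN_BENEFICIARIES = 500  # only assess locations with >= 500 nonenrolled beneficiaries

# Providers needed per 1,000 nonenrolled beneficiaries, by specialty (hypothetical).
RATIOS_PER_1000 = {"primary care": 0.8, "psychiatry": 0.2}

def shortage_specialties(beneficiaries: int, providers: dict) -> list:
    """Return the specialties in a location where the number of
    TRICARE-experienced providers falls below the target ratio."""
    if beneficiaries < MIN_BENEFICIARIES:
        return []  # location too small to assess under the model
    short = []
    for specialty, per_1000 in RATIOS_PER_1000.items():
        needed = per_1000 * beneficiaries / 1000  # target provider count
        if providers.get(specialty, 0) < needed:
            short.append(specialty)
    return short

# A hypothetical zip-code area with 2,000 nonenrolled beneficiaries and
# one primary care provider who has previously accepted a TRICARE patient:
print(shortage_specialties(2000, {"primary care": 1}))
```

Applied semiannually across a region's qualifying zip-code areas, output of this kind would flag where additional network-building or outreach efforts may be warranted.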
Officials in the West region said that they plan to reach out to providers in the community or use the contractor to help recruit additional providers to the TRICARE network if the model identifies an area that is short of the targeted number of providers in a given specialty. Based on our review of each regional office’s initial approach, we found this methodology to be reasonable. However, because the regional models were recently developed, it is too early to determine their effectiveness. In addition, while the regional offices provided us with examples of their models, they did not provide documentation of how they applied a provider-to-beneficiary ratio as the criterion for determining the adequacy of access in these locations, or any documentation of their results, although they told us that they did not identify any access problems. TMA’s contractors educate civilian providers about TRICARE program requirements, policies, and procedures. Contractors also conduct outreach to increase providers’ awareness of TRICARE, and TMA’s provider survey results indicate providers are generally aware of the program. However, providers’ awareness of TRICARE does not necessarily signify that they have an accurate understanding of it. Under the second generation of TRICARE contracts, TMA’s contractors are required to conduct activities to help ensure that providers—both network and nonnetwork—are aware of TRICARE program requirements, policies, and procedures in their respective regions. To accomplish this, the contractors are required to have active provider education programs. In addition, each contractor must submit an annual marketing and education plan to TMA’s Communications and Customer Service office that outlines its methods for educating providers based on contractual requirements.
All contractors include details in these plans about their efforts to satisfy requirements to distribute regular bulletins and newsletters as well as educate new network providers, such as through orientation sessions or with a Welcome Tool Kit. The contractors’ marketing and education plans also identify provider education efforts that vary across the regions. These efforts vary because contractors have some flexibility in how they achieve outcomes and because they may include additional performance standards in their contracts. Under the second generation of contracts, contractors have added performance standards related to provider education. For example, one contractor must visit high-volume network and nonnetwork providers in its region annually, while another contractor must conduct annual seminars for the network and nonnetwork providers in its Prime Service Areas. TMA reported that each contractor had fulfilled its provider education requirements as of December 2010. All of the contractors also make TRICARE education resources available to providers. Many of these resources are available on the contractors’ Web sites and include the TRICARE Provider Handbook as well as quick reference charts that include information on provider resources and TRICARE covered benefits and services, among other topics. One contractor hosts electronic seminars on its Web site that allow providers to learn about the TRICARE program at their convenience. Another contractor has developed a reference chart that details the Prime, Standard, and Extra benefit options and has mailed it to both network and nonnetwork providers in its region who have accepted TRICARE beneficiaries as patients. In addition, all of the contractors have conducted outreach activities to promote or increase providers’ awareness of TRICARE.
This has included participating in provider events with local, state, or national groups, including physician associations, medical societies, military treatment facilities, and military associations. Contractors told us that while at these events, they answer providers’ questions about the program, distribute TRICARE materials, and encourage providers to join the regional TRICARE network. All of the contractors have also participated in events specific to behavioral health care. Contractors said that these events give them the opportunity to discuss with providers behavioral health issues that may particularly affect military servicemembers and their families, such as suicide and post-traumatic stress disorder. The contractors also use social media to highlight TRICARE information for providers, including resources and program news and changes. For example, one contractor used its Twitter account to provide a link to information on how to become a network or TRICARE-authorized provider in its region. Additionally, two of the TRICARE Regional Offices as well as two contractors have specifically conducted outreach related to either encouraging network and nonnetwork providers to accept TRICARE beneficiaries as patients or thanking them for doing so. For example, in January 2011, one contractor mailed letters to nonnetwork providers, encouraging them to support TRICARE beneficiaries by joining the network. Although TMA’s provider surveys indicate a general awareness of the program, these results may not signify an accurate understanding of TRICARE. Survey results from TMA’s first multiyear survey (2005 through 2007) of civilian providers (network and nonnetwork) indicated that 87 percent of providers on average were aware of TRICARE. TMA’s second multiyear survey of civilian providers (network and nonnetwork), which has completed 2 years (2008 and 2009) of its 4-year cycle, similarly asked whether providers were aware of the TRICARE program.
Although the results of this survey are not generalizable, TMA’s results show that, of those providers who responded, 87 percent on average were aware of the program. Although TMA’s survey results indicate that providers were generally aware of TRICARE, this does not necessarily mean that providers had an accurate understanding of the program’s options and its requirements. For example, representatives of an association representing current and former servicemembers told us that providers do not always understand the differences between the TRICARE Standard and TRICARE Prime options. Similarly, in a November 2008 report, CNA stated that the providers they interviewed were often confused about the differences between TRICARE Standard and TRICARE Prime. One provider, a former president of a local medical society, said that many providers are under the misconception that TRICARE Standard is the same as TRICARE Prime, and that when providers have had bad experiences with TRICARE Prime, which generally pays network providers less than Medicare, they end up refusing to accept any TRICARE patients because they “don’t want to deal with” a health maintenance organization. This lack of understanding is not always easy to remedy. According to the contractors, because many providers have relatively low volumes of TRICARE patients, it can be challenging to encourage them to take advantage of the available TRICARE education resources or to remain current on updates and changes to the program. In 2009, TRICARE patients (under any option) made up an average of 5.14 percent of the patient population for civilian providers in Prime Service Areas and 3.42 percent for civilian providers outside Prime Service Areas. Under the second generation of TRICARE contracts, TMA’s contractors have beneficiary education programs that contain information on all of the TRICARE options; contractors also maintain directories of network providers.
Under its third generation of contracts, TMA will also require contractors to include information on nonnetwork providers in their directories. Under the second generation of TRICARE contracts, TMA’s contractors have established beneficiary education programs that contain information on all of the TRICARE options, including Standard and Extra. To meet its beneficiary education requirements, each contractor must submit an annual marketing and education plan to TMA’s Communications and Customer Service office that outlines the contractor’s methods for educating beneficiaries based on its contractual requirements. For example, the contractor may include details in its marketing and education plan about intentions to distribute required beneficiary newsletters and handbooks, which include information on TRICARE’s options and covered services. These plans also specify how the contractors are to provide required weekly one-hour TRICARE briefings to audiences designated by the commanders of their regional military treatment facilities. TMA reported that each of the contractors had fulfilled its beneficiary education requirements as of December 2010. TMA has only one beneficiary education requirement targeted to TRICARE Standard and Extra beneficiaries: contractors must provide these beneficiaries with the annual TRICARE Standard Health Matters newsletter. The 2010 TRICARE Standard Health Matters newsletter included articles on topics such as waiving cost-sharing for certain preventive services under TRICARE Standard and Extra. In 2010, the contractors mailed this newsletter to approximately 1.1 million TRICARE Standard and Extra households and made it available electronically through e-mail and their Web sites. Additionally, for the first time, in summer 2010 TMA developed a second TRICARE Standard Health Matters newsletter for TRICARE Standard and Extra beneficiaries in an electronic format as an additional resource to fill possible information gaps for beneficiaries.
The contractors then e-mailed the electronic newsletter to beneficiaries and posted it to their Web sites. This electronic newsletter included articles on topics such as how beneficiaries may save money by using TRICARE Extra and how they can stay informed about TRICARE. Two of the contractors told us that it is difficult to communicate with TRICARE Standard and Extra beneficiaries because they do not necessarily have ready access to the beneficiaries’ residential or e-mail addresses, as these beneficiaries are not required to enroll. As a result, TRICARE Standard and Extra beneficiaries may not receive all the available information on their TRICARE benefit. A TMA official noted that TMA is not considering making the additional electronic newsletter a requirement of the third generation of TRICARE contracts, although the contractors may use it to communicate with beneficiaries. All of the contractors also make additional TRICARE education resources available to beneficiaries. Many of these resources are available on their Web sites, and may include the TRICARE Standard Handbook and brochures that explain the different TRICARE options and costs to beneficiaries, among other topics. For example, one contractor makes games available on its Web site that enable beneficiaries to learn about the TRICARE program interactively. Another contractor posts its own monthly newsletter to its Web site, through which beneficiaries receive information about TRICARE, including its different options, and activities specific to its region. Meanwhile, the third contractor has developed several different fact sheets for beneficiaries that summarize key TRICARE program elements in short, easy-to-read formats. Each of the three contractors also conducts outreach to enhance beneficiaries’ awareness of TRICARE.
For example, each of the contractors has attended events hosted by organizations such as the Military Officers Association of America, the Enlisted Association of the National Guard of the United States, the National Military Family Association, the Military Health System, and the Adjutants General Association of the United States. Contractors stated that while at these events they can share TRICARE information with attendees. One contractor also noted that while at these events it addresses beneficiaries’ concerns and directs them to further resources. Contractors also use social media to communicate with beneficiaries and provide information on different TRICARE topics, including (1) benefits, (2) resources, and (3) health campaigns. For instance, one contractor used its Facebook page to clarify whether TRICARE Standard beneficiaries needed primary care managers to coordinate their referrals. Another contractor included information on Facebook about how beneficiaries could access information about their TRICARE benefit. To facilitate beneficiaries’ access to care, TMA requires its contractors to maintain directories of TRICARE-authorized network providers. These directories are to include current information (updated within 30 days) about each network provider, including specialty, address, and telephone number. The contractors are required to make their directories readily accessible to all beneficiaries, and as a result, all of the contractors’ Web sites have online provider directories. Under the second generation of TRICARE contracts, TMA does not require its contractors to provide similar information on nonnetwork providers. However, beneficiaries may contact the TRICARE Regional Offices or the contractors for assistance in locating a network or nonnetwork provider. 
Two of the contractors said they currently collect information on nonnetwork providers who have accepted TRICARE beneficiaries and can use this information to assist beneficiaries in locating a nonnetwork provider. Beneficiaries can also use TMA’s TRICARE Web site, which refers beneficiaries to the American Medical Association’s provider directory and the Yellow Pages, to find a nonnetwork provider. However, these online resources do not indicate whether a provider is TRICARE-authorized or has accepted TRICARE patients in the past. TMA recognized that its Web site asked beneficiaries to “start from square one” to identify a TRICARE-authorized nonnetwork provider. Although it is not a routine practice for insurance companies to identify nonnetwork providers in their online directories, in February 2010, TMA’s Deputy Chief of TRICARE Policy and Operations recommended (through a memo) that TMA establish an online search tool on its Web site to enable beneficiaries to identify both network and nonnetwork providers no later than May 1, 2010. However, TMA noted that it did not have sufficient data to develop this online search tool. Instead, TMA officials decided that under the third generation of TRICARE contracts, each contractor would be responsible for creating an online provider directory for its region that would include information for beneficiaries on TRICARE-authorized providers, both network and nonnetwork. We received comments on a draft of this report from DOD. (See app. VI.) DOD concurred with our overall findings and provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Beginning in fiscal year 1991, in an effort to control escalating costs, Congress instructed the Department of Defense (DOD) to gradually lower its reimbursement rates for individual civilian providers to mirror those paid by Medicare. Congress specified that reductions were not to exceed 15 percent in a given year. As of March 2011, there were seven nonmaternity procedures or services for which reimbursement remained higher under TRICARE than under Medicare. (See table 4.) Additionally, beginning in 1998, the TRICARE Management Activity (TMA) established a policy that its reimbursement rates for some maternity services and procedures must be set at the higher of the current Medicare fee or the 1997 Medicare fee. As a result, the TRICARE reimbursement rates for 36 maternity services and procedures are higher than Medicare. (See table 5.) TMA contracted with a health-policy research and consulting firm to conduct a number of studies about specific TRICARE reimbursement rates. Some of these studies resulted in changes to the TRICARE reimbursement rates for certain procedures. A brief description of these studies is provided below.

Studies of Reimbursement Rates for Specific Maternity/Delivery Procedures, 2006 through 2011

Starting in 2006, TMA’s consultant has conducted annual comparisons of TRICARE’s reimbursement rates for certain maternity/delivery procedures with Medicaid reimbursement rates on a state-by-state basis. Any reimbursement rates that were found to be below the Medicaid level of payment have been increased. For 2006, TMA found that for at least one procedure, the Medicaid rates in 12 states were higher than TRICARE reimbursement rates. For 2007, TMA found that for at least one procedure, the Medicaid rates in 11 states were higher than TRICARE reimbursement rates.
For 2008, TMA found that for at least one procedure, the Medicaid rates in 18 states were higher than TRICARE reimbursement rates. For 2009, TMA found that for at least one procedure, the Medicaid rates in 19 states were higher than TRICARE reimbursement rates. For 2010, TMA found that for at least one procedure, the Medicaid rates in the same 19 states were higher than TRICARE reimbursement rates. For 2011, TMA found that 3 of the 19 states from 2010 no longer met the criteria of having at least one maternity/delivery procedure with TRICARE reimbursement rates lower than Medicaid. As a result, for at least one procedure, the Medicaid rates in 16 states were higher than TRICARE reimbursement rates.

Comparison of Commercial, Medicaid, and TRICARE Reimbursement Rates for Selected Medical Specialties, April 2009

TMA’s consultant compared specific TRICARE reimbursement rates with reimbursement rates from Medicaid and commercial insurers. For the comparison with Medicaid rates, it identified commonly used procedures for 13 medical specialties and compared TRICARE’s reimbursement rates for these procedures with Medicaid’s fee-for-service rates in 49 states. Overall, the median value of the 2009 Medicaid rates in the 49 states was about 18 percent lower than TRICARE’s reimbursement rates. In 24 states, the TRICARE reimbursement rates exceeded the state Medicaid program rates for the 13 medical specialties reviewed. Conversely, the study found that in 3 states—New Mexico, Arizona, and Wyoming—Medicaid rates, on average, exceeded the TRICARE reimbursement rates for these 13 specialties. For the comparison with commercial rates, TMA’s consultant analyzed reimbursement amounts for 12 medical specialties in 15 geographic market areas and found that commercial rates were higher than TRICARE reimbursement rates for these 12 specialties in almost all of the geographic market areas analyzed.
Review of TRICARE Reimbursement Rates for Pediatric Vaccines and Immunizations, January 2009

TMA’s consultant studied TRICARE’s reimbursement rates for selected pediatric immunizations and vaccines to determine whether TRICARE’s reimbursement amounts were below the cost that pediatricians must pay to acquire these vaccines. It analyzed 15 vaccine codes (which often have more than one type of vaccine product associated with them) and found that for each of the vaccine codes, TRICARE’s reimbursement rates exceeded the average acquisition cost paid by pediatric providers for at least one of the vaccine products. Overall, in 2007 TRICARE’s reimbursement rates exceeded the average acquisition cost for the 15 vaccine codes by 30 percent (when weighted by volume). The study also noted that some pediatricians may pay more than the average acquisition price, and some network pediatricians may receive TRICARE reimbursement rates below the average acquisition cost if they have agreed to reimbursement discounts as a condition of belonging to the TRICARE provider network. The study also compared TRICARE’s reimbursement rates to those of Medicare and Medicaid. The study noted that TRICARE uses the same vaccine prices and administration prices as Medicare for vaccine codes for which Medicare sets a price (mostly at 106 percent of the vaccine’s average sales price as of 2005, as determined by the Centers for Medicare & Medicaid Services). For those vaccines for which Medicare does not have a set price, TRICARE reimbursement rates are set at 95 percent of the average wholesale price—essentially a “list price” set by the manufacturer. When compared to Medicaid’s rates, TRICARE’s reimbursement rate for the administration of a vaccine or immunization was higher than Medicaid’s in every state in 2008.
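The vaccine pricing rule the study describes is a two-way branch: use the Medicare-set price (generally 106 percent of the average sales price) when one exists, and otherwise fall back to 95 percent of the average wholesale price. A minimal sketch of that rule; the dollar amounts are hypothetical, not actual vaccine prices:

```python
from typing import Optional

def tricare_vaccine_rate(asp: Optional[float], awp: float) -> float:
    """Sketch of the vaccine pricing rule described above: when Medicare
    sets a price (106% of the average sales price, ASP), TRICARE uses it;
    otherwise TRICARE pays 95% of the average wholesale price (AWP).
    Input dollar amounts are hypothetical illustrations."""
    if asp is not None:
        return asp * 106 / 100  # Medicare-set price applies
    return awp * 95 / 100       # no Medicare price: discount off list-price AWP

# Hypothetical vaccine with a $50.00 ASP (Medicare sets a price):
print(tricare_vaccine_rate(50.0, 60.0))   # 53.0
# Hypothetical vaccine with no Medicare price and a $40.00 AWP:
print(tricare_vaccine_rate(None, 40.0))   # 38.0
```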
Analysis of TRICARE Payment Rates for Maternity/Delivery Services, Evaluation and Management Services, and Pediatric Immunizations, March 2006

TMA’s consultant compared TRICARE’s reimbursement rates for 14 specific maternity/delivery services and a pediatrician office visit with Medicaid and commercial payment rates. It found the following: For these specific maternity/delivery services, TRICARE’s reimbursement rates were higher than Medicaid rates in 35 of the 45 states reviewed. Additionally, in 27 of the 35 states, the Medicaid payment rate for deliveries was less than 90 percent of TRICARE’s reimbursement rates. TRICARE’s reimbursement rates for deliveries were less than the median commercial rates in all but one of the 50 markets studied (they were equivalent in the remaining market). Overall, the median commercial rates for deliveries were 24 percent higher than TRICARE’s reimbursement rates in 2005. For pediatric care, TRICARE’s reimbursement rate for a mid-level office visit for an established patient (the most commonly billed code by pediatricians) was higher than the state Medicaid reimbursement rate in 41 of the 45 states in 2005. However, the median commercial reimbursement rates were 10 percent higher than TRICARE’s reimbursement rates in the 50 TRICARE markets examined. TRICARE’s reimbursement for pediatric vaccines and injectable drugs generally appeared to be reasonable when derived from Medicare pricing, based on an analysis of private sector costs, average wholesale prices, and average sales prices for top-volume CPT codes. However, TRICARE’s reimbursement rate for the pediatric and adolescent dose of the hepatitis A vaccine was found to be 22 percent lower than estimated private sector costs to obtain the vaccine in 2005. Specifically, the TRICARE reimbursement rate for this vaccine dose was $22.64, while pediatricians were paying between $27.41 and $30.37 for the vaccine.
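The 22 percent shortfall cited for the hepatitis A vaccine can be roughly reproduced from the study's dollar figures; comparing the TRICARE rate against the midpoint of the reported cost range is our assumption about how the estimate was derived:

```python
# Reproducing the hepatitis A shortfall from the study's reported figures.
tricare_rate = 22.64                 # TRICARE reimbursement per dose
cost_low, cost_high = 27.41, 30.37   # reported private sector cost range per dose

# Using the midpoint of the cost range is our assumption, not the study's stated method.
midpoint = (cost_low + cost_high) / 2
shortfall_pct = 100 * (midpoint - tricare_rate) / midpoint

# Midpoint of about $28.89 gives a shortfall of roughly 22 percent.
print(round(midpoint, 2), round(shortfall_pct, 1))
```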
Based on the results of this study, TMA used its general authority to deviate from Medicare rates (upon which TRICARE rates are based), and starting May 1, 2006, TMA instructed the contractors to reimburse pediatric hepatitis A vaccines nationally at a new reimbursement rate of $30.40. TMA has the authority to increase TRICARE reimbursement rates for network and nonnetwork civilian providers to ensure that all beneficiaries, including TRICARE Standard and Extra beneficiaries, have adequate access to civilian providers. TMA’s authorities include: (1) issuing locality waivers that increase rates for specific procedures in specific localities, (2) issuing network waivers that increase some network civilian providers’ reimbursements, and (3) restoring TRICARE reimbursement rates in specific localities to the levels that existed before a reduction was made to align TRICARE reimbursement rates with Medicare rates for both network and nonnetwork providers. Locality waivers may be used to increase rates for specific medical services in specific areas where access to civilian providers has been severely impaired. The resulting rate increase would be applied to both network and nonnetwork civilian providers for the medical services identified in the areas where access is impaired. A total of 17 applications for locality waivers were submitted to TMA between January 2002 and January 2011. TMA approved 16 of these waivers. (See table 6.) Network waivers are used to increase reimbursement rates for network providers up to 15 percent above the TRICARE reimbursement rate in an effort to ensure an adequate number and mix of primary and specialty care network civilian providers in a specific location. Between January 2002 and January 2011, 13 applications for network waivers were submitted to TMA. Of these, TMA approved eight and denied five. (See table 7.)
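The network waiver authority described above amounts to a ceiling of 15 percent over the standard TRICARE reimbursement rate. A minimal sketch of that cap; the dollar rates used are hypothetical:

```python
def network_waiver_rate(base_rate: float, requested_rate: float) -> float:
    """Cap a requested network-waiver rate at 15 percent above the
    standard TRICARE reimbursement rate, per the waiver authority
    described above. Rates are hypothetical illustrations."""
    ceiling = base_rate * 115 / 100  # 15 percent above the standard rate
    return min(requested_rate, ceiling)

# A hypothetical request for $120 against a $100 standard TRICARE rate
# is capped at $115; a $110 request fits under the ceiling unchanged:
print(network_waiver_rate(100.0, 120.0))  # 115.0
print(network_waiver_rate(100.0, 110.0))  # 110.0
```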
TMA can also use its authority to restore TRICARE reimbursement rates in specific localities to the levels that existed before a reduction was made to align TRICARE rates with Medicare rates. TMA has previously used this authority twice in Alaska to encourage both network and nonnetwork civilian providers to accept TRICARE beneficiaries as patients in an effort to ensure adequate access to care. In 2000, TMA used this waiver authority to uniformly increase reimbursement rates for network and nonnetwork civilian providers in rural Alaska, and in 2002, TMA implemented this same waiver for network and nonnetwork civilian providers in Anchorage. However, in 2007 TMA implemented a demonstration project in Alaska that increased reimbursement rates to match those of the Department of Veterans Affairs. As a result, the waivers implemented under this authority were ended. As of January 2011, TMA did not have any waivers of reimbursement rate reductions in place. Access to health care in Alaska is hindered by unique impediments due to the state’s geographically remote location and small population base, which have resulted in some of the highest costs for providing services in the country. To identify and examine the unique access concerns for Alaska, we reviewed the Interagency Access to Health Care in Alaska Task Force Report to Congress. We also spoke with TMA officials and a representative of the Alaska State Medical Association to obtain their views on the unique access challenges in this state. Federal health programs are the leading payer of health care services to Alaska citizens, constituting approximately 31 percent of total health care expenditures in the state in 2006. In 2010, the Department of Health and Human Services reported that about 14 percent of the population in Alaska had received health care from either DOD’s TRICARE program or the Veterans Health Administration.
According to a 2009 study by the Alaska Center for Rural Health, Alaska has a shortage of providers that has been further impacted by its remoteness, harsh climate, and scarce training resources. Workforce shortages in urban areas range from a complete lack of certain specialists in Fairbanks and other towns, to a relative shortage of primary care providers and many specialists in Anchorage. Moreover, rural areas have far more difficulty attracting qualified candidates than more heavily populated areas, such as Anchorage or Fairbanks. TRICARE officials have identified this overall shortage of providers and providers’ reluctance to accept TRICARE reimbursement rates as the main impediments to TRICARE beneficiaries’ access to civilian providers in Alaska—regardless of which option they use. Alaska is part of TRICARE’s West region, and until recently, Alaska was the only state for which TMA administered and managed TRICARE directly as well as being the only state that did not have Prime Service Areas with networks of civilian providers. In a November 2010 Federal Register notice, DOD announced that the responsibility for administering and managing TRICARE in Alaska would transfer from TMA to the contractor for the West region. Additionally, the notice required the contractor to develop networks of civilian providers in two Prime Service Areas to be established around the military treatment facilities located at Fort Wainwright and Eielson Air Force Base, near Fairbanks, Alaska. This transition of responsibility took place in January 2011, and TMA expects these Prime Service Areas to be developed by July 2011. Additionally, the West region contractor noted that it expects to receive authorization to develop a third Prime Service Area around Elmendorf Air Force Base in Anchorage in late summer 2011. 
TMA has taken actions to address TRICARE beneficiaries’ access to civilian providers in Alaska by (1) increasing TRICARE’s reimbursement rates through the use of waivers and a demonstration project and (2) participating in a federal task force on the delivery of health care in Alaska. Specifically, in areas where access is impaired, TMA has increased reimbursement rates to encourage civilian providers to accept TRICARE beneficiaries through TMA’s reimbursement waivers. Of the 24 waivers in place as of January 2011, 13 are for locations in Alaska. In addition, TMA began a demonstration project in Alaska in February 2007—originally expected to end in December 2009—that raised reimbursement rates for physicians and other noninstitutional professional providers so that on average, they matched those of the Department of Veterans Affairs. Specifically, TRICARE’s 2007 reimbursement rates were increased approximately 35 percent. In July 2009, TMA conducted a preliminary assessment of the demonstration project and found mixed results. Specifically, TMA’s analysis determined that three of seven measures of access to care indicated that access had improved since the beginning of the project, while the other four measures did not show an improvement in access. Despite this inconclusive assessment, TMA officials in the West region said that the demonstration project and the use of waivers have increased access to care, as the number of providers accepting TRICARE’s reimbursement rates increased. According to these officials, the number of providers that have accepted TRICARE’s reimbursement rate went from under 300 before the demonstration project to almost 800, as of July 2010. Although DOD has recognized that there have been mixed results on the effectiveness of the demonstration project, it extended the demonstration project through December 31, 2012. 
Finally, in recognition that Alaska has unique health care challenges, Congress established the Interagency Access to Care in Alaska Task Force to review how federal agencies with responsibility for health care services in Alaska are meeting the needs of Alaskans. The Task Force consisted of members from the following: DOD (including TMA), the Department of Veterans Affairs and its Veterans Health Administration, the Department of Health and Human Services and its Centers for Medicare & Medicaid Services and Indian Health Service, and the U.S. Coast Guard. In September 2010, the Task Force issued its report recommending that, among other things, federal agencies providing health care reimbursement in Alaska should support current projects to develop a budget-neutral, uniform provider reimbursement rate for similar services for Medicare, TRICARE, and the Veterans Health Administration. According to TMA officials, TMA is currently reviewing the Task Force’s recommendations to develop options within the framework of current law and regulations. However, the full implementation of the recommendations will be under the direction of the Secretary of Health and Human Services. Under the second generation of contracts, TMA’s contractors have been required to develop and maintain adequate networks of providers, which are to meet the needs of all TRICARE beneficiaries within Prime Service Areas. In doing so, each contractor uses a different methodology for determining the number of providers needed. Contractors are also required to develop their own systems to continuously monitor and evaluate network adequacy and to submit routine reports to TMA on the status of their provider networks in accordance with contract requirements. Specifically, TMA requires its contractors to submit monthly and quarterly reports on network inadequacy and network adequacy, respectively, and to submit corrective action plans for each instance of network inadequacy. 
The monthly report on network inadequacy must include information on each instance in which a beneficiary enrolled in TRICARE Prime is being referred to: (1) a provider outside of TMA’s time or distance standards or (2) a nonnetwork provider. According to TMA officials, network inadequacies may occur because of provider shortages; in such instances, contractors are not held accountable for not meeting access standards. However, other network inadequacies, particularly referrals to nonnetwork providers, may also be due to other factors, such as network providers not accepting new patients, or beneficiaries choosing not to wait when network providers cannot offer an appointment within TMA’s access standards. According to a TMA official, none of the contractors have been cited for not meeting TMA’s time and distance standards or for referrals to nonnetwork providers under the second generation of TRICARE contracts. Contractors’ quarterly reports include: (1) the total number of network providers by specialty, (2) the number of additions and deletions to the network by specialty, and (3) actions to contract with additional providers in areas lacking networks to meet access standards, among other things.

In addition to the contact named above, Bonnie Anderson, Assistant Director; Jennie F. Apter; Kaitlin Coffey; Jeff Mayhew; Lisa Motley; C. Jenna Sondhelm; and Suzanne Worth made major contributions to this report.

Defense Health Care: 2008 Access to Care Surveys Indicate Some Problems, but Beneficiary Satisfaction Is Similar to Other Health Plans. GAO-10-402. Washington, D.C.: March 31, 2010.
TRICARE: Changes to Access Policies and Payment Rates for Services Provided by Civilian Obstetricians. GAO-07-941R. Washington, D.C.: July 31, 2007.
Defense Health Care: Access to Care for Beneficiaries Who Have Not Enrolled in TRICARE’s Managed Care Option. GAO-07-48. Washington, D.C.: December 22, 2006.
Defense Health Care: Oversight of the TRICARE Civilian Provider Network Should Be Improved. GAO-03-928. Washington, D.C.: July 31, 2003.
Defense Health Care: Oversight of the Adequacy of TRICARE’s Civilian Provider Network Has Weaknesses. GAO-03-592T. Washington, D.C.: March 27, 2003.
Defense Health Care: Across-the-Board Physician Rate Increase Would be Costly and Unnecessary. GAO-01-620. Washington, D.C.: May 24, 2001.

The Department of Defense (DOD) provides health care through its TRICARE program, which is managed by the TRICARE Management Activity (TMA). TRICARE offers three basic options. Beneficiaries who choose TRICARE Prime, an option that uses civilian provider networks, must enroll. TRICARE beneficiaries who do not enroll in this option may obtain care from nonnetwork providers under TRICARE Standard or from network providers under TRICARE Extra. The National Defense Authorization Act for Fiscal Year 2008 directed GAO to evaluate various aspects of beneficiaries’ access to care under the TRICARE Standard and Extra options. This report examines (1) impediments to TRICARE Standard and Extra beneficiaries’ access to civilian health care and mental health care providers and TMA’s actions to address the impediments; (2) TMA’s efforts to monitor access to civilian providers for TRICARE Standard and Extra beneficiaries; (3) how TMA informs network and nonnetwork civilian providers about TRICARE Standard and Extra; and (4) how TMA informs TRICARE Standard and Extra beneficiaries about their options. To address these objectives, GAO reviewed and analyzed TMA and TRICARE contractor data and documents. GAO also interviewed TMA officials, including those in its regional offices, as well as its contractors. Reimbursement rates and provider shortages have been cited as the main impediments that hinder TRICARE Standard and Extra beneficiaries’ access to civilian health care and mental health care providers.
Providers' concern about TRICARE's reimbursement rates--which are generally set at Medicare rates--has been a long-standing issue and has more recently been cited as the primary reason civilian providers will not accept TRICARE Standard and Extra beneficiaries as patients, according to TMA's surveys of civilian providers. TMA can increase reimbursement rates in certain instances, such as when it determines that access to care is being affected by the level of reimbursement. Shortages of certain provider specialties, such as mental health care providers, at the national and local levels may also impede access, but these shortages are not specific to the TRICARE program and also affect the general population. As a result, there are limitations as to what TMA can do to address them. TMA has primarily used feedback mechanisms, including surveys of beneficiaries and civilian providers, to gauge TRICARE Standard and Extra beneficiaries' access to civilian providers. More recently, in February 2010, in recognition that TRICARE has had no established measures for monitoring the availability of civilian network and nonnetwork providers for these beneficiaries, TMA directed the TRICARE Regional Offices to develop a model to help identify geographic areas where they may experience access problems. GAO's review of the initial models found their methodology to be reasonable. However, because the regional models were recently developed, it is too early to determine their effectiveness. TMA's contractors educate civilian providers about TRICARE program requirements, policies, and procedures. Contractors also conduct outreach to increase providers' awareness of the program, and while TMA's provider survey results indicate that civilian providers are generally aware of the program, this does not necessarily signify that providers have an accurate understanding of the TRICARE program and its options. 
Similarly, TMA's contractors educate beneficiaries on all of the TRICARE options and maintain directories of network providers to facilitate beneficiaries' access to care. When the new TRICARE contracts are implemented, TMA will also require its contractors to include information on nonnetwork providers in their provider directories. In commenting on a draft of this report, DOD concurred with GAO's overall findings. |
A working capital fund relies on sales revenue rather than direct appropriations to finance its continuing operations. A working capital fund is intended to (1) generate sufficient resources to cover the full costs of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers use appropriated funds, primarily operations and maintenance appropriations, to finance orders placed with the working capital fund. According to the Army’s fiscal year 2008/2009 budget, the Army Working Capital Fund will earn about $15.3 billion in revenue during fiscal year 2008. The Army Working Capital Fund includes an industrial operations activity group that provides the Army with the in-house industrial capability to conduct depot-level maintenance, repair, and upgrade; produce quality munitions and large-caliber weapons; and store, maintain, and demilitarize material for all branches of DOD. For example, the Anniston Army Depot (Anniston) repairs tanks for the Marine Corps. The industrial operations activity group consists of 13 activities—five maintenance depots, three arsenals, two munitions production facilities, and three storage sites. The preponderance of the industrial operations workload and budget estimates relates to depot-level maintenance work. Information on the five Army depots follows. Anniston performs maintenance on both heavy- and light-tracked combat vehicles and their components, such as the M1 Abrams tank. Corpus Christi Army Depot (Corpus Christi) overhauls, repairs, modifies, tests, and modernizes helicopters, engines, and components for all services and foreign military customers. Letterkenny Army Depot (Letterkenny) has tactical missile repair capabilities supporting a variety of DOD missile systems including the Patriot and its ground support and radar equipment.
In response to the Global War on Terrorism, Letterkenny is rebuilding High Mobility Multi-Purpose Wheeled Vehicles (HMMWV) returning from theater to a configuration that will support add-on armor. Red River Army Depot (Red River) performs maintenance, certification, and related support services on ground combat systems, air defense systems, and tactical wheeled vehicles. Systems supported include the Bradley Infantry Fighting Vehicle, Multiple Launch Rocket System, Small Emplacement Excavator, 5-ton dump truck, and HMMWVs. Tobyhanna Army Depot (Tobyhanna) uses advanced technologies to ensure the readiness of U.S. armed forces and is a full-service repair, overhaul, and fabrication facility for communications-electronics systems and equipment and select missile guidance systems. Carryover is the reported dollar value of work that has been ordered and funded (obligated) by customers but not completed by working capital fund activities at the end of the fiscal year. Carryover consists of both the unfinished portion of work started but not completed and requested work that has not yet begun. Some carryover is necessary at the end of the fiscal year if working capital funds are to operate efficiently and effectively. For example, if customers do not receive new appropriations at the beginning of the fiscal year, carryover is necessary to ensure that the working capital fund activities have enough work to ensure a smooth transition between fiscal years. Too little carryover could result in some personnel not having work to perform at the beginning of the fiscal year. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year or subsequent years.
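The mechanics described above reduce to a simple balance identity: carryover at year end equals beginning carryover plus new funded orders minus the work performed (revenue) during the year. A minimal sketch of that identity, using hypothetical figures rather than any reported amounts:

```python
# Carryover balance identity for a working capital fund activity.
# End-of-year carryover = beginning carryover + new funded orders - work performed.
# All dollar figures below are hypothetical illustrations (in millions).

def end_of_year_carryover(beginning, new_orders, revenue):
    """Work ordered and funded by customers but not yet completed at year end."""
    return beginning + new_orders - revenue

# Carryover grows when new orders outpace the work performed:
print(end_of_year_carryover(beginning=200.0, new_orders=1_000.0, revenue=900.0))

# ...and shrinks when the activity performs more work than it receives in orders:
print(end_of_year_carryover(beginning=300.0, new_orders=800.0, revenue=900.0))
```

The same identity explains why a surge of late-year orders shows up almost entirely as carryover: little of the newly ordered work can be performed before the fiscal year closes.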
By optimizing the amount of carryover, DOD can use its resources in the most effective manner and minimize the “banking” of funds for work and programs to be performed in subsequent years. In 1996, DOD established a 3-month carryover standard for working capital fund activities. In May 2001, we reported that DOD did not have a basis for its carryover standard and recommended that DOD determine the appropriate carryover standard for depot maintenance, ordnance, and research and development activity groups. DOD included its revised carryover policy in DOD Financial Management Regulation 7000.14-R, volume 2B, chapter 9. Under the new policy, the allowable amount of carryover is based on the outlay rate of the customers’ appropriations financing the work. According to the DOD regulation, this carryover metric allows for an analytical-based approach that holds working capital fund activities to the same standard as general fund execution and allows for meaningful budget execution analysis. In accordance with DOD policy, (1) nonfederal orders, (2) non-DOD orders, (3) foreign military sales, and (4) work related to base realignment and closure are excluded from the carryover calculation. Further, the Army has requested and OUSD (Comptroller) has approved an exemption of crash and battle damaged aircraft from the carryover ceilings during wartime operations for the past few years. This has resulted in tens of millions of dollars of orders and carryover being excluded from the carryover calculation. The reported actual carryover (net of exclusions) is then compared to the amount of allowable carryover using the above-described outlay rate method to determine if the reported actual amount is over or under the allowable carryover amount. In 2005, we reported that the Army depot maintenance activities consistently exceeded the carryover ceiling from fiscal years 1996 through 2003.
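One way to read the outlay-rate method described above is that work financed by slower-outlaying appropriations is allowed more carryover. The sketch below assumes the allowable amount is new orders multiplied by the share of each financing appropriation not expected to outlay in the first year; the specific formula, appropriation names, outlay rates, and order amounts are illustrative assumptions, not figures taken from the DOD regulation:

```python
# Hedged sketch of an outlay-rate-based carryover ceiling.
# Assumption: allowable carryover = sum over financing appropriations of
#   (new orders from that appropriation) x (1 - first-year outlay rate).
# All rates and dollar amounts below are hypothetical (in millions).

orders_by_appropriation = {
    # appropriation: (new orders financed, assumed first-year outlay rate)
    "Operation and Maintenance": (3000.0, 0.575),
    "Procurement":               (1200.0, 0.30),
}

allowable = sum(orders * (1.0 - rate)
                for orders, rate in orders_by_appropriation.values())
print(f"Allowable carryover: ${allowable:,.1f} million")

# Reported actual carryover, net of the exclusions listed above, would then
# be compared against this allowable amount to determine over/under status.
```

Under this reading, procurement-financed orders (which outlay slowly) justify proportionally more carryover than operation and maintenance orders.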
Tables 1 and 2 show that the Army depot maintenance activities’ actual reported carryover (1) consistently exceeded DOD’s 3-month carryover standard from fiscal year 1996 through fiscal year 2001 and (2) continued to exceed the allowable amount of carryover as calculated under DOD’s revised carryover policy for fiscal years 2002 and 2003. Decision makers, including OUSD (Comptroller) and congressional defense committees, use reported carryover information to make decisions concerning whether working capital fund activities, such as the Army depots, have too much carryover. If the Army depots have too much carryover, the decision makers may reduce the customers’ budgets and use these resources for other purposes. For example, Congress has reduced the services’ budgets because of excessive carryover, including a reduction in the Army’s fiscal years 2003 and 2006 operation and maintenance appropriations by $48 million and $94.7 million, respectively. The Army depots’ total carryover significantly increased from $1.1 billion in fiscal year 2004 to $2.7 billion in fiscal year 2007—a $1.6 billion increase. In order to reduce the fiscal year 2007 carryover, the Army developed a plan to perform $5.5 billion of work in fiscal year 2008—$1.4 billion more than the Army depots performed in fiscal year 2007. Our analysis of the plan and first quarter fiscal years 2007 and 2008 execution data show that the depots performed significantly more work than they performed during the same period in the prior year but the depots missed their goal by $173 million at the end of December 2007. Further, while the Army depot maintenance carryover amount had more than doubled over the past 4 years, this increase has not been specifically identified in the Army Working Capital Fund budgets to Congress because the Army consolidated the depot maintenance and ordnance activity groups under a single activity group called the Industrial Operations activity group in fiscal year 2005. 
From fiscal years 2004 through 2007, the Army depots’ total carryover significantly increased from $1.1 billion to $2.7 billion. The dollar amount of new orders received in fiscal years 2006 and 2007 (about $9.5 billion) by the depots significantly exceeded the dollar amount of work performed (about $7.8 billion) by the depots during those same years. The depots carried over about 7.6 months of work into fiscal year 2008. Figure 1 illustrates how changes in fiscal years 2004 through 2007 new orders and work performed (revenue) have affected depot carryover. As shown in figure 1, the new orders and work performed (revenue) increased from fiscal year 2004 through fiscal year 2007. However, the dollar amount of new orders increased at a greater pace than the dollar amount of work performed (revenue). New orders increased from about $2.6 billion to about $5.2 billion (about 100 percent increase) while the amount of revenue earned increased from $2.7 billion to about $4.2 billion (56 percent increase). In the first quarter of fiscal year 2008, the Army developed a plan to reduce the level of carryover at the Army depots. According to the plan, the Army depots would perform $5.5 billion of work in fiscal year 2008—$1.4 billion more than the Army depots performed in fiscal year 2007. In order to meet the revenue increases, the depots plan to take a number of actions, including hiring additional maintenance personnel and requiring maintenance personnel to work overtime. Our analysis of the five Army depots’ revenue for the first quarter of fiscal years 2007 and 2008 showed that the depots increased their revenue by about $293 million in the first quarter of fiscal year 2008 (about $1.1 billion) compared to the same quarter the prior year ($817 million). Even though the depots increased their revenue, the depots missed their fiscal year 2008 first quarter revenue targets by about $173 million ($1.282 billion target less $1.109 billion actual revenue). 
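Both figures in the paragraph above follow from the rounded totals: carryover growth over fiscal years 2006 and 2007 is roughly new orders minus work performed, and the months-of-work figure is year-end carryover divided by average monthly revenue. A quick check (the report's 7.6-month figure presumably reflects unrounded data):

```python
# Checks using the report's rounded figures (billions of dollars).

# FY2006-2007 combined: new orders outpaced work performed (revenue).
new_orders_fy06_07 = 9.5
revenue_fy06_07 = 7.8
growth = new_orders_fy06_07 - revenue_fy06_07
print(f"Carryover growth, FY2006-07: about ${growth:.1f} billion")

# Months of work carried into FY2008 = year-end carryover / average monthly revenue.
carryover_fy07 = 2.7
revenue_fy07 = 4.2
months = carryover_fy07 / (revenue_fy07 / 12)
print(f"Months of carryover: about {months:.1f}")  # ~7.7 with rounded inputs
```

The roughly $1.7 billion two-year gap between orders and revenue is consistent with the reported $1.6 billion rise in carryover, allowing for rounding.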
By missing the first quarter target, the Army is at risk of not meeting the carryover reduction plan goals for fiscal year 2008. In January and February 2008, we met with officials at the five Army depots to determine why some of the depots missed their revenue targets for the first quarter of fiscal year 2008. For the depots that missed their revenue targets, the officials stated that (1) the depots performed a different mix of workload than originally planned, generating less revenue; (2) unserviceable assets did not arrive as planned and the depots could not perform the planned workload; and (3) spare parts were not available to perform the planned workload. Even though several of the depots missed their first quarter revenue targets, officials at all but one of the depots—Anniston—stated that they expected to meet their end of fiscal year 2008 revenue targets. Anniston officials stated that they believed they would miss their revenue target by about $200 million, but they were attempting to identify additional work they could perform to increase revenue in fiscal year 2008. While officials at four of the five depots believed that they would meet their revenue targets and thus reduce carryover by the end of fiscal year 2008, the reduction of the carryover amount will largely depend on the amount of new orders accepted by the depots in fiscal year 2008 and the ability of the depots to perform their fiscal year 2008 workloads as planned. Although the Army depot maintenance carryover amount had more than doubled over the past 4 years, this increase in Army depot maintenance activities’ carryover amount has not been specifically identified in the Army’s Working Capital Fund budgets to Congress because the Army consolidated the depot maintenance and ordnance activity groups under a single activity group called the Industrial Operations activity group in fiscal year 2005.
Prior to the consolidation, the Army Working Capital Fund budgets provided carryover information, such as the dollar amount of carryover and the carryover ceiling for the depot maintenance activities. Without detailed data on the Army depot maintenance activity groups’ carryover, Congress cannot make informed decisions about the appropriate size of the Army depot maintenance budget and whether the depots are making significant progress in reducing their carryover amounts. In light of the significant increase in new orders and carryover at the Army depots because of ongoing wartime operations, it is even more important for the Army to report carryover information to Congress separately to provide visibility of the Army depot maintenance activities. Reported Army depot maintenance activities’ carryover was reduced by tens of millions of dollars by (1) funds being deobligated at the end of fiscal year 2006 and then reobligated in the beginning of fiscal year 2007 and (2) amounts that were exempted from carryover calculations in fiscal year 2007. The deobligations of funds at the end of fiscal year 2006 and the fiscal year 2007 exemptions affected the amount of reported carryover as well as the amount of carryover that was over/under the carryover ceiling for fiscal years 2006 and 2007. In fiscal year 2006, the Army depot maintenance activities reported that carryover work and related funding was under the ceiling by $67 million. In order to reduce the Army’s Industrial Operations fiscal year 2006 carryover, the Army Materiel Command directed Army activities to deobligate selected procurement-funded orders totaling $83 million. Specifically, Tobyhanna was directed to deobligate $30 million, and an Army ordnance activity (Pine Bluff Arsenal) was directed to deobligate $53 million by September 29, 2006, for work that they still planned to perform. The guidance stated that the orders would be reobligated on October 2, 2006.
Further, the guidance stated that (1) the Industrial Operations carryover estimate increased by $388 million since the summer budget submission to OUSD (Comptroller) and (2) the Army did not want to exceed its carryover ceiling and give OUSD (Comptroller) “an excuse to doubt our ability to execute the fiscal year 2007 or fiscal year 2008 supplemental funding.” Our review of Tobyhanna records showed that customers deobligated $30 million against six orders on September 28 and September 29, 2006. The funds were then reobligated within the next 2 weeks. The action directed by the Army Materiel Command artificially lowered the reported carryover balances for Army’s Industrial Operations and more specifically the Army depot maintenance activities in fiscal year 2006. As discussed previously, congressional decision makers receive an aggregated report on carryover balances that covers the Army’s Industrial Operations activities. We have previously reported on a similar year-end deobligation problem related to Navy research and development activities. In response to our recommendation on this issue, OUSD (Comptroller) issued guidance on July 28, 2003, to the military services and DOD components prohibiting the manipulation of customer order balances in an attempt to reduce reported carryover. The guidance directed components to conduct internal reviews of accounting procedures currently in use, to include year-end adjustments, to ensure that this type of manipulation of carryover levels is not occurring. For fiscal year 2007, OUSD (Comptroller) approved about $299.7 million in additional exemptions from the carryover calculations that were not excluded in previous years. Without the exemptions, the depots would have exceeded the carryover ceiling by $251.2 million. However, with the exemptions, the depots exceeded the carryover ceiling by $96.8 million. 
These exemptions were for (1) a public-private partnership involving Anniston ($194.2 million); (2) fourth quarter orders received by Anniston, Corpus Christi, and Tobyhanna from other services ($77.4 million); and (3) long lead time material at Anniston ($28.1 million). In discussing the exemptions with OUSD (Comptroller) officials, the officials stated that they approved all carryover exemptions requested by the depots for orders received from other services in the fourth quarter of fiscal year 2007 and the public-private partnership arrangement involving Anniston. The officials stated that they denied some of the depots’ carryover exemption requests for long lead time material. Further, the officials stated that the exemption requests that were granted for fiscal year 2007 carryover and their associated new orders resulted from the large increase in supplemental funding provided to the depots in support of ongoing wartime operations. The officials stated that the Army would have to request the exemptions next year if similar circumstances exist. Based on our review of the Army’s exemption request and our findings in prior reports, as well as discussions with OUSD (Comptroller) and Army officials, we found that these exemptions do not provide the right incentives to the depots, customers, Defense Logistics Agency (DLA), and Army Supply to correct long-standing problems with receiving orders from other services late in the fiscal year and program delays caused by long lead time material. Because these issues are exempted, they are not subject to the level of scrutiny and possible corrective actions that would be provided if these problem areas were reflected in higher reported carryover balances. 
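The fiscal year 2007 figures above imply that the $299.7 million in exemptions lowered the over-ceiling amount by only $154.4 million ($251.2 million minus $96.8 million). One plausible explanation, consistent with the earlier description of excluded orders being removed from the carryover calculation entirely, is that an exemption also lowers the allowable ceiling derived from new orders; the ceiling-side effect computed below is inferred from the reported figures, not stated in the report:

```python
# How exemptions affected the FY2007 over-ceiling amount (figures in $ millions).
# Assumption (inferred, not stated): exempting an order removes it from both
# reported carryover and the order base used to compute the allowable ceiling,
# so the net effect on over-ceiling status is smaller than the gross exemption.

over_without_exemptions = 251.2   # over ceiling if no exemptions granted
over_with_exemptions = 96.8       # over ceiling with exemptions granted
exemptions = 299.7                # carryover exempted from the calculation

net_effect = over_without_exemptions - over_with_exemptions
implied_ceiling_reduction = exemptions - net_effect

print(f"Net reduction in over-ceiling amount: ${net_effect:.1f} million")
print(f"Implied offsetting ceiling reduction: ${implied_ceiling_reduction:.1f} million")
```

If this reading is right, exemptions buy less headroom than their face value, which may partly explain why the depots still exceeded the ceiling by $96.8 million.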
We reported in May 2001 and again in June 2005 that Army depots exceeded their carryover ceiling because some depots received and accepted work late in the fiscal year, and some depots could not obtain the material needed in a timely manner so that less work was performed than planned. As discussed in the next section, our current review found similar problems with late year orders and the lack of spare parts available for repair. Our analysis of depot reports and discussions with Army officials identified four primary reasons for the growth in carryover. First, during fiscal years 2006 and 2007, the Army depot maintenance budget significantly underestimated the amount of new orders actually received from customers. While the depots performed more work than budgeted, they could not keep pace with the increases in new orders. Second, we found that the depots accepted orders late in the fiscal year that reasonably could not be completed, and in some cases could not even be started, prior to the end of the fiscal year. Third, we found that parts shortages prevented work from being performed. Fourth, unserviceable assets (assets that need to be repaired) scheduled for repair did not arrive at the depots as planned. While some of these reasons are under the control of other DOD activities, such as customers not sending assets needing repair to the depots as planned, other reasons are within the depots’ control. For fiscal years 2006 and 2007, the Army depot maintenance budget significantly underestimated the amount of new orders actually received from customers by about $1.7 billion and $1.5 billion, respectively. For example, while the budget shows that the depots expected to receive about $3.7 billion in new orders and perform about $3.8 billion of work (revenue) in fiscal year 2007, the depots actually received about $5.2 billion in new orders and performed $4.2 billion of work. 
To perform more work during fiscal year 2007, the depots increased their workforce by 630 employees and their direct labor hours by about 2.8 million over fiscal year 2006 totals. However, while the work performed by the depots (revenue) increased from fiscal year 2006 to fiscal year 2007, it did not increase at the pace of the orders received from customers, resulting in the large growth of carryover. Our analysis of the Army budget guidance for fiscal year 2006 showed that the Army assumed that the fiscal year 2006 new orders would amount to approximately 50 percent of the fiscal year 2005 Operation and Maintenance, Army supplemental workload. For fiscal year 2007, the Army assumed that the fiscal year 2007 orders would be approximately 25 percent less than the fiscal year 2006 program. These budget assumptions resulted in the reported actual orders significantly exceeding budgeted orders for fiscal years 2006 and 2007. For example, at Anniston, our analysis showed that the depot originally budgeted to receive about $1.1 billion of new orders for fiscal year 2007. During the midyear review in March 2007, Anniston revised its estimate to about $1.4 billion. However, the depot actually received about $1.5 billion of new orders for fiscal year 2007—a difference of about $400 million or 36 percent from the original amount budgeted. In discussing this matter with Army headquarters officials, they told us that budgeting for new orders was affected by the continuing Global War on Terrorism and the anticipated supplemental appropriations to finance the war.
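The Anniston figures above are a straightforward budget variance calculation against the original estimate:

```python
# New-order budget variance for Anniston, FY2007 (report's rounded figures, $ billions).
budgeted = 1.1   # original budget estimate for new orders
actual = 1.5     # new orders actually received

variance = actual - budgeted
pct_of_budget = variance / budgeted * 100

print(f"Actual exceeded budget by ${variance:.1f} billion ({pct_of_budget:.0f} percent)")
```

The same calculation applied depot by depot would show where the budget assumptions diverged most from actual wartime demand.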
Army headquarters officials said that the Army underestimated the amount of new orders received by the depots because (1) the Army did not have historical information on the amount of funds the depots would receive in supplemental appropriations for depot maintenance work and (2) the amount of funds the Army would receive in supplemental appropriations for this work was uncertain. Without reliable budget estimates, the Army depots cannot make the necessary adjustments to their manpower and material to ensure that the depots can meet the Army’s maintenance requirements. In June 2006, we reported that carryover is greatly affected by orders accepted late in the fiscal year that generally cannot be completed, and in some cases cannot even be started, prior to the end of the fiscal year. As a result, almost all orders accepted late in the fiscal year increase the amount of carryover. DOD Financial Management Regulation 7000.14-R, volume 11A, chapters 2 and 3, prescribes regulations governing the use of orders placed with working capital fund activities. The DOD regulation identifies a number of requirements that must be met before a working capital fund activity accepts an order. For example, work to be performed under the order shall be expected to begin within a reasonable amount of time after the order is accepted by the performing DOD activity. As a minimum requirement, it should be documented that when an order is accepted, the work is expected to (1) begin without delay (usually within 90 days) and (2) be completed within the normal production period for the specific work ordered. Our analysis of fiscal years 2006 and 2007 orders showed that orders received in the fourth quarter continued to be a problem. For example, two of the five depots accepted more than 20 percent of their new fiscal year 2006 orders in the last 3 months of the fiscal year.
The following examples illustrate orders that were accepted by Army depot maintenance activities late in fiscal year 2006. In September 2006, Tobyhanna accepted an order from Tinker Air Force Base totaling approximately $3.3 million financed with operation and maintenance funds that would expire on September 30, 2006. The order was for the overhaul of an Air Force landing control radar located at Ramstein Air Base, Germany. According to an Air Force official and documentation, the Air Force identified the maintenance requirement in March 2006; however, funds were not made available until the end of fiscal year 2006, when additional funds were identified from other programs. As a result, the depot carried over the entire $3.3 million into fiscal year 2007. In addition, depot officials stated that the depot experienced several delays in performing the work on the radar because of the initial unavailability of the asset (a 2-month delay), reconfiguration and resheltering of the asset, and the unavailability of long-lead-time parts. Because of these problems, the depot carried over approximately $1.8 million from fiscal year 2007 into fiscal year 2008 and expects to complete the overhaul of the landing control radar on January 30, 2009. In August 2006, Letterkenny accepted an order totaling about $8.4 million that was financed with operation and maintenance funds for the repair of 15 Patriot launching stations. According to the production controller, the initial inspection and teardown work on the Patriot launching stations began when the order was accepted. Since repair work on the Patriot launching stations did not begin until August 2006, about $7.1 million of funded workload was carried over into fiscal year 2007. According to the production controller, had the repair work for the Patriot launching stations been funded earlier in the fiscal year, the carryover amount would have been much lower.
All of the repair work for the 15 Patriot launching stations was completed by February 2007. Our analysis of depot data and interviews with depot officials found that the depots experienced shortages of parts needed to perform their repair work in fiscal years 2006 and 2007. Our analysis of data in the critical maintenance repair parts reactive system at four depots showed that in 733 and 605 instances, repair parts shortages resulted in work stoppages in fiscal years 2006 and 2007, respectively. DLA and, to a lesser extent, Army Supply were the sources of supply for most of the repair parts. DLA officials told us that a major difficulty DLA faces as a supplier is forecasting the amount of repair parts needed when the depots’ types and numbers of repairs keep changing. Having a firm requirement (quantity of items to be repaired) early in the process is critical if DLA is to provide the spare parts to the depots when they need them. However, this has not always been the case. For example, as discussed later in this report, in November 2006, Red River accepted an order to overhaul 200 HMMWVs. Over the next 4 months, the order was amended first to decrease the quantity to 106 and then to increase it to 344. According to DLA officials, changing requirements, similar to this example, make it extremely difficult to forecast the spare parts needed for repairs. If DLA waits to buy the parts until the depot has a firm requirement, the parts might not be available when the depot needs them. On the other hand, if DLA buys the parts before the requirement is firm, DLA is at risk of holding excess inventory of parts when requirements for parts are significantly reduced.
To perform the required repair work and help minimize the impact of parts shortages on depot operations, the depots have taken a number of actions to obtain parts when they were not available, including using parts from other assets, commonly referred to as robbing parts; fabricating the parts; and obtaining parts through the use of their local procurement authority, including the government purchase card. The following are examples of actions taken by the depots. In October 2006, Anniston accepted a $5.6 million order financed with fiscal year 2007 operation and maintenance appropriated funds to overhaul 1,200 M2 machine guns. The work was originally scheduled to begin in March 2007. Because of the lack of parts, the work did not begin until July 2007, which resulted in more carryover than originally planned. About $5.4 million of the $5.6 million carried over from fiscal year 2007 into fiscal year 2008. Because of the Global War on Terrorism and the surge in production of the M2s, Anniston had experienced problems obtaining parts to overhaul the machine guns since 2004. Because the depot could not get the needed parts from DLA or Army Supply, it used parts from other M2 machine guns. Some of these parts included the barrels, buffer body assembly, bolt, barrel extensions, breech locks, and receivers. Since the depot had used parts from these 1,200 machine guns to repair other machine guns in previous years, the 1,200 machine guns were missing parts. By the time the depot overhauled the 1,200 M2 machine guns, about half of them had been totally stripped of their parts. An Army official stated that the machine guns going through overhaul were the “worst of the worst.” To perform the work, the depot had to buy new parts and have the Picatinny Arsenal fabricate barrel extensions in order to obtain the parts needed to complete the overhaul. This extra work increased the cost to about $10.4 million, and the work was completed in December 2007.
In November 2005, Tobyhanna accepted an order totaling about $18.4 million to produce 3,954 light sets for the Army Communications-Electronics Life Cycle Management Command. The light sets are used to illuminate temporary facilities, such as tents and buildings. To produce the 3,954 light sets, the depot had to assemble almost 1 million new parts. According to depot officials and documentation, the order was originally expected to be completed by September 30, 2006, but the completion date was delayed by approximately 13 months because of problems obtaining parts from DLA. To meet the parts requirement, DLA ordered the parts from its suppliers with delivery lead times of approximately 2 years. Since the expected delivery dates did not meet the customer’s delivery requirements, the depot canceled its order with DLA and ordered the parts directly from vendors to meet its production schedule. However, the vendor that produces approximately 80 percent of the parts could only provide enough parts for the production of 300 light sets a month. As a result, the depot carried over approximately $16 million from fiscal year 2006 into fiscal year 2007 and $1 million from fiscal year 2007 into fiscal year 2008. In October 2006, the depot accepted another order from the Army Communications-Electronics Life Cycle Management Command totaling about $5.9 million for an additional 1,069 light sets. Because a sufficient quantity of parts was not available from the vendor to satisfy the fiscal years 2006 and 2007 orders, the depot could not begin work on the October 2006 order until August 2007—approximately 10 months after the order was accepted. As noted previously, the DOD Financial Management Regulation includes requirements for accepting an order, including limiting acceptances to those orders that are expected to begin without delay (usually within 90 days). The depot carried over $5.4 million from fiscal year 2007 into fiscal year 2008.
As of February 2008, the depot expected to complete the order by March 2008. In November 2006, Red River accepted an order from the Army TACOM Life Cycle Management Command totaling approximately $24.8 million to overhaul 200 M1114 up-armored HMMWVs. The order was financed with fiscal year 2007 operation and maintenance appropriated funds and was modified twice. In January 2007, the order was reduced to the overhaul of 106 HMMWVs for about $13.1 million. Two months later, in March 2007, the order was increased to 344 HMMWVs for about $56.1 million. In performing this work, the depot encountered two problems. First, the HMMWVs were not always available, resulting in changes to the schedule for performing the work. Second, the depot encountered problems in obtaining the material it needed to perform the repairs. For example, in May 2007, there was a shortage or potential shortage of 45 different parts needed to perform this work. To obtain the parts needed to perform the work, depot officials stated that they used parts from other vehicles at the depot or purchased parts via local procurement, including using government purchase cards. In August 2007, there was a shortage or potential shortage of 30 different parts. Since most of the work was not completed in fiscal year 2007, about $37.5 million carried over into fiscal year 2008. As of December 2007, documents showed that the depot anticipated completing work on this order in April 2008. When we discussed the M1114 up-armored HMMWV work with Red River officials, they told us that the problems encountered in performing the fiscal year 2007 work also occurred in the previous fiscal year. First, the quantity to be repaired kept changing. Specifically, in January 2006 the depot accepted an order to repair 37 HMMWVs. In March 2006, the order was amended to 108 HMMWVs. Then, in July 2006, the order was amended to reduce the quantity to 58 HMMWVs. Finally, in August 2006, the order was amended back to 108 HMMWVs.
Second, the depot also encountered problems obtaining parts to perform the work. According to depot officials, because the last amendment, increasing the order to 108 HMMWVs, occurred in August 2006 and the HMMWVs to be repaired were in poor condition, the carryover amount was high. The amount of work that carried over from fiscal year 2006 into fiscal year 2007 was $8.6 million of this $18.9 million order. Army and DLA officials stated that the Army and DLA are taking a number of actions to improve parts availability and reduce parts shortages. First, the Army depots and DLA are using a new tool that allows them to forecast spare parts requirements earlier in the process. Thus, they can better predict spare parts shortages and resolve them before the spare parts problems result in costly work-arounds or work stoppages at the depots. Second, DLA is establishing a greater presence at the depots to give the depots and DLA greater visibility of spare parts requirements and to improve overall support to the depots. For example, DLA has added or is in the process of adding between two and eight personnel at each of the five depots to improve the forecasting of spare parts requirements and to expedite procurement of DLA-managed parts needed to meet the depots’ immediate production requirements. Finally, DLA is working with its suppliers to identify alternative procurement sources and expedite parts delivery to avoid parts shortages at the depots. While these are good first steps toward resolving the spare parts problems, it is too early to determine whether they will succeed. Furthermore, the Army does not have quantifiable measures, such as comparing information in the critical maintenance repair parts reactive system from one period to another, to determine the effectiveness of its actions to reduce the depots’ critical spare parts problems.
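One quantifiable measure of the kind described above could be a simple period-over-period comparison of work-stoppage instances recorded in the reactive system. The following is a hypothetical sketch, not an Army metric, using the fiscal year 2006 and 2007 counts cited earlier in this report (733 and 605 instances):

```python
# Hypothetical period-over-period measure of parts-shortage work stoppages,
# using the reactive-system counts cited earlier in this report.
stoppages_fy2006 = 733
stoppages_fy2007 = 605

change = stoppages_fy2007 - stoppages_fy2006
pct_change = change / stoppages_fy2006 * 100

print(f"Work stoppages changed by {change} instances ({pct_change:.1f} percent)")
```

Tracking such a figure at each fiscal year-end would give the Army a baseline for judging whether its corrective actions are reducing shortages.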
One of the reasons cited in depot reports and by depot officials for carryover is that unserviceable assets (assets that need to be repaired) scheduled for repair did not arrive at the depots as planned. Our review of 53 depot reports issued in fiscal years 2006 and 2007 found that over two-thirds of the reports from the five depots cited deficiencies related to the lack of unserviceable assets for repair. For example, a Letterkenny report cited 115 fiscal year 2007 projects that were either delayed or canceled because of the lack of unserviceable assets for repair. In some cases, the lack of unserviceable assets either stopped or delayed depot production operations, resulting in increased carryover. The scope of our work did not include researching the customers’ reasons for not sending the assets for repair as planned. However, Army officials informed us that in some cases the assets remained in theater (for example, in Iraq) for longer periods than planned. While the depots have taken a number of actions to minimize production delays and carryover associated with the lack of unserviceable assets, the depots continue to report a lack of these assets. The following examples illustrate the impact on carryover when work was not performed because assets did not arrive at the depots as scheduled. In November 2005, Anniston accepted an order to overhaul 7 M1 tanks totaling about $6.4 million, which was financed with fiscal year 2006 Marine Corps Operations and Maintenance appropriated funds. The order was amended nine times, increasing the quantity to 88 M1 tanks and increasing the amount of the order to about $86.6 million. During fiscal year 2006, the depot ordered about $8.8 million of material for this order, placing its first material order in April 2006. However, the first tank was not available for induction into the depot until December 2006, or 3 months into fiscal year 2007.
Our analysis of production documents on this order showed that the production schedule for performing the tank work continuously changed. Specifically, during fiscal years 2006 and 2007, depot production documents show that the production schedule changed 10 times because of changing customer requirements or because the tanks did not arrive at the depot as scheduled. Because the tanks were not available until fiscal year 2007, about $77.8 million of work (the amount of the order—$86.6 million—less the amount of material—$8.8 million) was carried over into fiscal year 2007. Although all the work was originally scheduled to be completed during fiscal year 2007, 17 tanks were not available for the depot to begin work on until fiscal year 2008, which resulted in almost $6.9 million being carried over into fiscal year 2008. The problem of changing production schedules that Anniston experienced in performing the tank work on the fiscal year 2006 order continued on a fiscal year 2007 order. In November 2006, the depot accepted another order totaling about $39 million, which was financed with fiscal year 2007 Marine Corps Operations and Maintenance appropriated funds, to overhaul 36 M1 tanks. The order was amended five times during fiscal year 2007, increasing the quantity to 75 M1 tanks and increasing the amount of the order to about $81.4 million. To perform work on this order, the depot began ordering material during fiscal year 2007, placing its first material order in January 2007. However, the first tank was not available to be inducted into the depot until September 2007—the last month of the fiscal year. Our analysis of production documents on this tank order showed that the production schedule changed five times because of changing customer requirements or because the tanks did not arrive at the depot as scheduled.
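The carryover figure in the fiscal year 2006 tank example above follows from subtracting the work performed during the year (here, only the material ordered) from the order amount. A minimal sketch with the rounded figures from the text:

```python
# Carryover on Anniston's fiscal year 2006 M1 tank order: the order amount
# less the work performed that year (only material was ordered), in millions.
order_amount = 86.6      # final amended order value
material_ordered = 8.8   # material ordered during fiscal year 2006

carryover = order_amount - material_ordered
print(f"Carried over into fiscal year 2007: ${carryover:.1f} million")
```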
Because work on the tanks did not begin until the end of fiscal year 2007, about $71.3 million of work was carried over into fiscal year 2008. As of January 2008, the work on this fiscal year 2007 order received in November 2006 was scheduled to be completed in May 2008. In March 2006, Letterkenny accepted an order totaling about $12.3 million that was financed with fiscal year 2006 Army procurement aircraft funds for the repair of 100 aviation ground power units. Initially, the Aviation and Missile Life Cycle Management Command programmed the repair of the 100 aviation ground power units in fiscal year 2006. However, in March, April, and May 2006, the depot received only 5 of the 100 unserviceable assets. The production controller stated that the power units were shipped to the depot in small quantities from many locations all over the world, which delayed the receipt of all 100 units. Thus, the depot production department revised its repair schedule to complete 10 power units a month through March 2007. According to the production controller, many of the power units were not repaired in accordance with the revised schedule because (1) not all of the power units were received in time to meet the revised production schedule and (2) there was a lack of power units in inventory to exchange with the deploying units. As a result, about $6.3 million and $1.1 million carried over into fiscal years 2007 and 2008, respectively. In February 2004, Anniston accepted three orders totaling about $296,000 that were financed with fiscal year 2004 Procurement of Weapons and Tracked Combat Vehicles appropriated funds to overhaul 39 hydraulic cylinders on each order. Although the work was originally scheduled to be completed in June 2004, the work was not completed because some of the unserviceable assets did not arrive at the depot. As of August 2007, about 3.5 years later, 36 hydraulic cylinders had not arrived at the depot for repair.
Consequently, about $83,000 of the $296,000 carried over into fiscal year 2008 on work that was originally planned to be completed in fiscal year 2004. In discussing this matter with the customer, the TACOM Life Cycle Management Command, we asked why the command had not canceled the order, since the depot had not received some of the hydraulic cylinders and the appropriation financing the order had expired. Officials said that they did not want to cancel the order because they would lose the funds. After our discussion, the depot received 25 cylinder heads for two of the orders in December 2007 and January 2008 and completed the work on those assets during that same period. As of January 2008, the depot had still not received 11 cylinder heads on this fiscal year 2004 order. In October 2006, Corpus Christi accepted an order from the Army Aviation and Missile Life Cycle Management Command to repair 150 T700 engine cold section modules totaling about $15.6 million. A depot official stated that the depot planned to complete the order by the end of October 2007. The depot initially expected to carry over 15 T700 engine cold section modules from fiscal year 2007 into fiscal year 2008 at an estimated value of approximately $1.6 million. However, primarily because of the lack of unserviceable assets to repair, the depot carried over 46 T700 engine cold section modules at an estimated value of $4.7 million—an increase in the depot’s carryover of approximately $3.1 million. The depot completed the order in December 2007. To manage unserviceable assets and minimize carryover, the depots took a number of actions on a daily, weekly, monthly, and quarterly basis. For example, on a daily basis, (1) programs were reviewed for asset availability and (2) if a shortage of assets was identified, the item manager was notified. On a weekly basis, schedules were adjusted based on requirements and asset availability.
On a quarterly basis, in-process reviews were held with the depots and the life cycle management commands, and issues affecting production were discussed. Continuing problems in the Army depot maintenance group’s ability to control the growth of carryover have resulted in excess carryover amounts that tie up customer appropriations for long periods of time. Further, we noted a lack of transparency in the level of detail of carryover data reported to Congress for oversight purposes. Without increased management attention, Army depot maintenance carryover amounts will continue to escalate, as illustrated by the significant growth in carryover in fiscal years 2006 and 2007. Much of the growth in carryover results from the growth in new orders brought on by increased federal expenditures related to the war effort in Iraq and Afghanistan. Nonetheless, some of the factors that led to increased carryover are, in part, within DOD’s and, more specifically, the Army depots’ control. Most notably, the Army depots have not started orders within a few months of acceptance and completed them in a timely manner. While the Army’s initial actions in fiscal year 2008 to reduce carryover at the Army depots resulted in some improvement, these actions have not yet fully met the goals included in its carryover reduction plan. In order to (1) improve the reliability and level of detail of carryover amounts reported to Congress and DOD decision makers and (2) reduce carryover associated with the Army depot maintenance working capital fund activities, we are making eight recommendations to the Secretary of Defense.
We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take the following actions:

Establish a mechanism to monitor whether activities are following the existing July 2003 policy that prohibits the deobligating and reobligating of funds at year-end for the sole purpose of reducing carryover balances, and take appropriate actions, such as reducing future funding designated for these activities, if they do not follow the policy.

Establish procedures requiring evaluations of future exemption requests on carryover to consider the impact these requests have on the actual carryover balances reported to Congress and whether granting such exemptions substantially reduces the visibility of, and the financial incentive to resolve, long-standing issues such as spare parts problems.

We recommend that the Secretary of Defense direct the Secretary of the Army to take the following actions:

Direct the Army headquarters budget office to compare the amounts contained in the Army’s carryover reduction plan to reported actual execution data on a monthly basis to determine (1) whether the depots met established targets and (2) whether the overall plan’s execution has the desired effect of reducing fiscal year 2008 year-end carryover, and to work with the Army Materiel Command and the Army depots to identify ways to further reduce fiscal year 2008 carryover if monthly revenue goals are not met.

Establish procedures for separately identifying the allowable and reported actual amounts of carryover for the Army depot maintenance activities in the Army’s annual budget to Congress (as was done prior to fiscal year 2005).

Issue guidance, in accordance with existing DOD-wide guidance, that prohibits the Army Industrial Operations activity group from deobligating reimbursable customer orders at the end of the fiscal year and reobligating them in the next fiscal year for the sole purpose of reducing carryover balances that are ultimately reported to Congress.
Develop a mechanism to monitor the Army depot maintenance activities’ compliance with the requirements in DOD Financial Management Regulation 7000.14-R governing acceptance of orders, particularly when work is not expected to (1) begin without delay (usually within 90 days of acceptance) and (2) be completed within the normal production period for the specific work ordered.

Establish procedures requiring Army headquarters and the Army Materiel Command to compare budgeted orders to the actual orders that the depots received from customers and to consider these trends in developing the following year’s budget estimates of new orders to be received from customers.

Develop quantifiable measures to determine the effectiveness of actions taken by the Army and DLA to resolve spare parts shortages, such as analyzing the information on customer orders with insufficient spare parts in the critical maintenance repair parts reactive system at the end of fiscal year 2008 and comparing the results to those of prior fiscal years.

DOD provided written comments on a draft of this report. DOD concurred with six recommendations and partially concurred with two, taking issue with specific aspects of the recommended actions. However, in its response, DOD cited actions under way, or planned, related to all eight recommendations, including establishing an Army program to monitor carryover information throughout the fiscal year; providing separate carryover rates for depot maintenance and ordnance in upcoming budgets of the President; issuing an Army memorandum emphasizing the department’s policy prohibiting the deobligating of funds late in the fiscal year and then reobligating the same funds in the following fiscal year in order to reduce carryover amounts; and developing a method that will identify the amount of carryover resulting from spare parts shortages.
DOD partially concurred with two of our recommendations, which concerned whether (1) it can establish a mechanism to detect manipulation of carryover balances and (2) additional procedures are required to ensure that evaluations of future exemption requests consider the impact that granting such requests will have on congressional reporting. DOD agreed that the Army must comply with the departmental financial policy that prohibits deobligating and reobligating funds at year-end to reduce carryover balances, but stated that there is no cost-effective method to detect noncompliance. DOD stated that it plans to reiterate its existing policy and re-instruct the components to verify compliance with this policy as part of their internal control reviews. It stated that the Office of the Under Secretary of Defense (Comptroller) will require the Army to certify compliance with DOD regulations and will evaluate and take appropriate actions on any future violations of the regulations. These additional planned DOD actions are consistent with the intent of our recommendation to establish an oversight mechanism. On DOD’s partial concurrence with our recommendation to establish procedures requiring evaluations of future exemption requests on carryover to consider the impact these requests have on actual carryover balances, DOD stated that it partially concurred because it already has procedures in place. It stated that exemptions are given on a case-by-case basis and only for limited periods. In addition, DOD stated that it plans to monitor and take appropriate actions on the Army’s efforts to reduce carryover caused by parts shortages. However, as discussed in our draft report, the exemptions do not provide the right incentives to correct long-standing problems associated with receiving orders from other services late in the fiscal year and program delays caused by long-lead-time material.
Consequently, we continue to believe that DOD should direct the Under Secretary of Defense (Comptroller) to establish procedures requiring that carryover-reporting exemption-request evaluations consider the impact that granting such requests will have on the carryover amounts reported to the Congress. Finally, exceeding the annual carryover ceilings has been a long-standing problem at DOD. The department and the services have policies, procedures, and regulations that, in our view, adequately establish carryover ceilings and how to stay within those limits. Effective service implementation and timely DOD monitoring of service actions shortly before, immediately after, and throughout each fiscal year are key to achieving compliance with established carryover policies and procedures. Unless DOD implements effective controls to monitor the services’ actions, the Congress cannot be assured that the department is truly committed to reducing the growth of excessive carryover. We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; the Subcommittee on Readiness, House Committee on Armed Services; the House Committee on Appropriations; and the Subcommittee on Defense, House Committee on Appropriations. We are also sending copies to the Secretary of Defense, the Secretary of the Army, and other interested parties. Copies will be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact Paula M. Rascona at (202) 512-9095 or rasconap@gao.gov or William M. Solis at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix III. To determine the growth in reported total carryover from fiscal year 2004 through fiscal year 2007 and the actions the Army is taking to reduce the carryover, we obtained and analyzed Army depot maintenance reports that contained information on new order, revenue, and carryover data for the 4-year period. We also met with Army officials to discuss its plans for reducing carryover in fiscal year 2008 and obtained and analyzed the Army’s plans for reducing carryover. Further, we analyzed the Army’s plan and first quarter fiscal years 2007 and 2008 execution data to determine if the depots met their first quarter fiscal year 2008 targets. Finally, we met with officials at the five Army depots to determine (1) what specific actions the depots took to reduce carryover in the first quarter of fiscal year 2008; (2) if the depots did not meet the planned targets for the first quarter of fiscal year 2008, the reasons for missing the targets; and (3) whether the depots’ officials believe that they will meet production targets for the fiscal year. To determine whether reported carryover amounts exceeded carryover ceilings for fiscal years 2006 and 2007 and adjustments made to reduce those amounts, we obtained and analyzed the allowable amount of carryover and reported actual year-end carryover for those years. We focused on fiscal years 2006 and 2007 because this is the time period when the carryover significantly increased. We also identified and analyzed the amount of carryover the Army exempted from its carryover calculation that was approved by the Office of the Under Secretary of Defense (Comptroller) for fiscal years 2006 and 2007. When the reported actual carryover exceeded the carryover ceiling, we met with responsible officials at the Army depots, the Army Materiel Command, and Army headquarters to ascertain why the depots exceeded the ceiling. 
We reviewed our prior reports (GAO-01-559, GAO-05-441, and GAO-06-530) on carryover, which provided information on the allowable amount of carryover as well as reported actual year-end carryover data. Finally, we identified year-end transactions that reduced the dollar amount of reported actual carryover in September 2006, with the funds reobligated at the beginning of October. To determine the primary reasons for the increased carryover at the five Army depots, we met with Army headquarters budget officials and responsible budgeting, accounting, or production officials at the Army depots. Based on those discussions, we obtained information on factors that affected carryover. First, we analyzed budgeted and reported actual new orders to determine if the Army underestimated the depots' fiscal years 2006 and 2007 workloads. When large differences occurred between budgeted and reported actual new orders, we met with Army headquarters officials to determine the reasons for these differences. Second, we identified orders received by the depots late in the fiscal year to determine if these orders were contributing to the carryover. Third, we analyzed reports and data files that provide information on the status of production work at the depots to determine if there were parts shortages resulting in carryover. In performing this work, we met with Defense Logistics Agency officials at the depots to discuss problems with the Defense Logistics Agency providing spare parts to the depots. Fourth, we analyzed reports that provide information on the status of production work at the depots to determine if a lack of unserviceable assets to be repaired at the depots contributed to carryover.
We performed our work at the headquarters of the Office of the Under Secretary of Defense (Comptroller) and the Office of the Secretary of the Army, Washington, D.C.; Army Materiel Command, Fort Belvoir, Virginia; the Tobyhanna Army Depot, Tobyhanna, Pennsylvania; the Letterkenny Army Depot, Chambersburg, Pennsylvania; the Corpus Christi Army Depot, Corpus Christi, Texas; the Anniston Army Depot, Anniston, Alabama; and the Red River Army Depot, Texarkana, Texas. We conducted this performance audit from July 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Most of the financial information in this report was obtained from official Army budget documents and accounting reports. To assess the reliability of the data, we (1) reviewed and analyzed the factors used in calculating carryover, (2) interviewed Army officials knowledgeable about the carryover data, (3) reviewed GAO reports on Army depot maintenance activities, and (4) reviewed orders customers submitted to the depots to determine if they were adequately supported by documentation. We requested comments on a draft of this report from the Secretary of Defense or his designee. The Under Secretary of Defense (Deputy Comptroller) provided written comments, which are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix II. In addition to the contacts named above, Greg Pugnetti, Assistant Director; Richard Cambosos; Francine DelVecchio; Steve Donahue; Keith McDaniel; and Hal Santarelli made key contributions to this report. 
The five Army depot maintenance activities support combat readiness by providing services to keep Army units operating worldwide. From fiscal years 2004 through 2007, the amount of new orders received to perform work increased 100 percent, from $2.6 billion to $5.2 billion. The number of new orders is a factor in the amount of work the depots carry over from one fiscal year to the next. While past congressional defense committees have recognized the need for carryover, the committees have raised concerns that carryover may be more than needed. GAO was asked to determine (1) the growth in reported total carryover from fiscal years 2004 through 2007 and the actions the Army is taking to reduce the carryover, (2) whether reported carryover amounts exceeded carryover ceilings for fiscal years 2006 and 2007 and adjustments made to reduce those amounts, and (3) the primary reasons for the increased carryover at the five Army depots. GAO analyzed reported carryover and related data at the five depots. From fiscal years 2004 through 2007, the Army depots' total carryover significantly increased from $1.1 billion to $2.7 billion--about 7.6 months of work. The amount of carryover increased because new orders received (about $9.5 billion) by the depots significantly outpaced the work performed (about $7.8 billion) in fiscal years 2006 and 2007. GAO analysis of the Army's plan to reduce carryover showed that the depots performed $293 million more work in the first 3 months of fiscal year 2008 than they performed during the same period a year earlier, but the depots missed their planned goal by $173 million. The Army depots reported that they were under the carryover ceiling by $67 million in fiscal year 2006 but over the ceiling by $96.8 million in fiscal year 2007. GAO identified two factors that affected reported carryover amounts.
First, the Army Materiel Command directed the Tobyhanna Army Depot to deobligate $30 million at the end of fiscal year 2006 and reobligate the same amount at the beginning of the next fiscal year, which artificially lowered reported carryover and was not in accordance with existing DOD policy. Second, the Army excluded about $299.7 million in fiscal year 2007 orders from the carryover calculations. The exemptions for fourth quarter orders from other services and long lead time material did not provide the right incentives for DOD to resolve long-standing problems. GAO analysis of reports and discussions with Army officials identified four primary reasons for growth in carryover: (1) the Army depot maintenance budget underestimated the amount of new orders during fiscal years 2006 and 2007 by about $1.7 billion and $1.5 billion, respectively; (2) the depots accepted orders late in the fiscal year that generally could not be completed by the end of the fiscal year; (3) the depots experienced parts shortages; and (4) the depots did not receive assets that had been scheduled for repair.
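The carryover figures above are internally consistent, and the relationship can be checked with a short calculation. In the sketch below, the roll-forward uses only dollar amounts stated in the report; the annual revenue figure (about $4.26 billion) is not stated in the report and is an assumption back-derived from the reported "7.6 months of work."

```python
# Carryover roll-forward: ending balance = beginning balance + new orders - work performed.
# All dollar figures are in billions. The fiscal year 2007 annual revenue used in
# months_of_work (~$4.26 billion) is an assumption implied by the reported 7.6-month
# figure, not a number stated in the report.

def ending_carryover(beginning, new_orders, work_performed):
    return beginning + new_orders - work_performed

def months_of_work(carryover, annual_revenue):
    return carryover / (annual_revenue / 12)

balance = ending_carryover(1.1, 9.5, 7.8)   # FY2006-2007 combined activity
print(round(balance, 1))                    # 2.8 -- consistent with the reported $2.7 billion after rounding
print(round(months_of_work(2.7, 4.26), 1))  # 7.6 months
```

The small gap between the computed $2.8 billion and the reported $2.7 billion reflects rounding in the report's summary figures.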
With the passage of ATSA in November 2001, TSA assumed from the Federal Aviation Administration (FAA) responsibility for securing the nation’s civil aviation system. In accordance with ATSA, TSA is responsible for the procurement, installation, and maintenance of explosive detection systems, including EDS and ETD, used to screen checked baggage for explosives (see figs. 1 and 2) at TSA-regulated airports. EDS machines identify suspicious items or anomalies in checked baggage that could indicate the presence of explosives or detonation devices. At airports with EDS, EDS machines are generally employed for primary screening of checked baggage while ETD machines are used for secondary screening to help resolve questions raised by EDS screening. At airports without EDS, ETD machines are used as the primary method for screening checked baggage. TSA deploys EDS machines in stand-alone and in-line configurations. In a stand-alone configuration, checked baggage is manually loaded and unloaded by screeners (see fig. 1). In contrast, an in-line configuration integrates EDS machines with a baggage handling system—a conveyor system that transports and sorts baggage from the ticket counter through the baggage screening system. (See fig. 3, which shows an in-line configuration with three EDS machines.) Generally, an in-line checked baggage inspection system employs three levels of screening (see fig. 4). EDS machines perform automated (Level 1) screening. If the EDS machine is unable to clear a bag, it sends an alarm to a screener who performs a secondary (Level 2) inspection known as On-Screen Resolution by reviewing an image of the contents of the bag via computer monitor. If the screener cannot resolve the alarm using on-screen resolution tools, the bag goes to the Checked Baggage Resolution Area (Level 3) where another screener will perform manual inspection of the bag assisted by an ETD machine. 
TSA officials stated that deployment of an integrated, centralized in-line system of EDS machines can enhance security, increase screening efficiencies, and lower screening costs by, among other things, reducing the number of screeners needed to conduct baggage screening and reducing work-related injuries caused by lifting heavy bags. Installing an in-line system can require modification of an airport terminal, including removal of the existing system, installation of a new baggage handling system and EDS machines, and the use of an interim solution to screen checked baggage while the in-line system is built. TSA estimates that depending on the size and complexity of an in-line project, installing an in-line system can take one to four years at larger (category X and I) airports. In 2005, we reported that although TSA made substantial progress in installing EDS machines, the agency had not conducted a systematic, prospective analysis to determine which airports could achieve long-term savings and improve efficiencies and security by installing in-line systems or, where in-line systems may not be economically justifiable, by making greater use of stand-alone EDS rather than relying on the labor-intensive and less efficient ETD screening processes. We recommended that TSA systematically evaluate baggage screening needs at airports, including identifying and prioritizing the airports where the benefits—such as cost savings of screening operations and improved security—of replacing stand-alone baggage screening machines with in-line systems are likely to exceed the costs of the systems. TSA concurred and in response released its Strategic Planning Framework in February 2006, which identified and prioritized airports based on an analysis of several factors, including security risk and the amount of estimated cost savings.
In March 2011, we reported (GAO-11-318SP) that by continuing to replace or modify older baggage screening systems with more efficient solutions, including in-line screening systems, TSA could continue to eliminate baggage screener positions. TSA agreed that the deployment of more efficient systems offers potential personnel cost savings to the federal government. We also recommended that TSA develop a plan to ensure that optimal systems, once deployed at airports, will be operated at the levels in established requirements, and we recommended that TSA develop a reliable schedule for the EBSP. DHS concurred with these recommendations and has initiated actions to implement them. In February 2012, we reported that we continue to believe that TSA might achieve savings in screening personnel costs by continuing to replace or modify older checked baggage screening systems with more efficient solutions, including in-line screening systems, to the extent possible. TSA reported that since the issuance of GAO's 2011 report, it had replaced 60 stand-alone checked baggage screening machines with more efficient in-line screening systems. Since fiscal year 2006, about $6.8 billion has been made available to TSA for activities related to checked baggage screening (see table 1), making the EBSP one of DHS's largest acquisition programs. Following the terrorist attacks of September 11, 2001, and the enactment of ATSA, airports relied upon various sources of funding to support security-related capital improvement projects. For example, as enacted, ATSA authorized the use of Airport Improvement Program (AIP) funds for projects to support the installation of explosive detection systems. Subsequently enacted statutes, such as the Vision 100—Century of Aviation Reauthorization Act (Vision 100), however, either limited or precluded the use of AIP to fund projects related to the installation of in-line systems.
TSA, which is solely responsible for procuring and deploying equipment to screen checked baggage for explosives, also provides funding in support of related facility modifications. The Consolidated Appropriations Resolution, 2003, first authorized the use of LOIs by TSA for airport facility modification projects related to the installation of in-line baggage screening systems. Although not a binding commitment of federal funding, LOIs are agreements providing that TSA will reimburse airports or airlines for a specified percentage of an eligible project’s cost, subject to the availability of appropriations. This in turn enables an airport to proceed with a project because the airport and any investors are aware that the agreed-upon percentage of allowable costs will likely be reimbursed. The airport or airline is responsible for its share of the total funding needed to complete the project and generally must be capable of funding the project in its entirety. From fiscal years 2003 through 2007, TSA entered into 8 LOI agreements covering 9 airports. Pursuant to the law then in effect, these LOI agreements provided for a 75 percent federal cost share of allowable project costs. Beginning in fiscal year 2008 and in accordance with Vision 100, any LOI entered into by TSA was to reflect a 90 percent federal cost share. Since 2008, TSA has entered into 4 more LOIs at a 90 percent federal cost share. As of December 13, 2011, TSA reported that its net cumulative obligations for all 12 LOIs were $1.46 billion. TSA also uses OTAs to support airport facility modification projects related to the installation of checked baggage screening equipment. TSA describes OTAs, which have become the primary administrative vehicles through which TSA financially supports such projects, generally as single-year reimbursable agreements (in contrast to the multiyear LOI agreements). 
According to TSA, OTAs take many forms and are generally not required to comply with federal laws and regulations that apply to contracts or cooperative agreements, such as the Federal Acquisition Regulation, thus enabling the parties to negotiate provisions that are mutually agreeable. According to TSA, the federal cost share applied to OTAs may be negotiated, but since fiscal year 2008 TSA has generally followed the federal cost share applicable to LOIs. As of the end of fiscal year 2011, TSA had used at least 150 OTAs to support airport facility modifications related to the installation of in-line systems. According to TSA, these OTAs have reflected federal cost shares ranging from 50 to 100 percent. As of December 13, 2011, TSA reported that its net cumulative obligations for all OTAs were $1.74 billion. TSA reports that 337 of 446 airports (76 percent) the agency regulates for security have optimal baggage screening systems. The remaining 109 of 446 airports (24 percent) do not yet have optimal baggage screening systems in all screening areas. To be considered an airport with an optimal baggage screening system, as TSA considers it and as we define it for purposes of this report, an airport must have completed installation and activation of the in-line or stand-alone systems that best fit the airport's screening needs without relying on temporary stand-alone systems. Thus, an airport with an optimal baggage screening system may have a mix of explosive detection systems (EDS or ETD) and configurations (in-line or stand-alone systems), depending on the airport's needs. TSA officials told us that they plan to deploy equipment for an additional 201 in-line systems in the future, which will include the purchase of an estimated 685 new EDS machines for installation at airports that are not screening with optimal configurations.
TSA aims to complete its efforts to deploy optimal screening systems by using EDS machines as the primary means for screening checked baggage at all category X, I, II, and III airports while continuing to use ETD machines as the primary means at category IV airports. Additionally, TSA plans to deploy EDS machines in in-line configurations at all category X and I airports and in-line or stand-alone configurations at category II and III airports. At each of the 10 airports we visited, we observed distinct checked baggage screening needs, based on an airport’s terminal configuration and the number of passenger boardings. We discussed an airport’s willingness and financial ability to pay for facility modifications required to install in-line systems. For each of these airports, officials and engineers provided documentation (for example, drawings and blueprints) on distinct facility modification projects to accommodate baggage screening system upgrades. According to airport officials, TSA and airport officials work together to determine the most appropriate baggage screening configuration based on an airport’s needs. Of the 337 airports where baggage screening systems are optimal and no longer using temporary solutions, 55 airports use EDS in-line systems exclusively for their primary checked baggage screening needs, while 92 airports use EDS stand-alone machines only, and 167 use ETD systems exclusively. The remaining 23 airports have a mix of systems. See figure 5 for additional details on airport screening configurations. Also, see appendix II for more details on the status of efforts to optimize checked baggage screening systems. Of the 337 airports with optimal baggage screening systems, larger airports were less likely to have completed optimal solutions than smaller airports. 
Specifically, 36 percent (10 of 28) of the category X airports and 49 percent (28 of 57) of the category I airports were considered to have optimal solutions, whereas 60 percent (46 of 77) of the category II airports, 76 percent (96 of 127) of the category III airports, and all 157 of the category IV airports were considered to be optimally configured, as shown in figure 6. According to TSA and airport officials, this is because the larger airports generally need to install more complex in-line systems, which are more time and resource intensive to install and often require a significant amount of airport infrastructure modification and construction, while the smaller airports, particularly the category IV airports that rely on the smaller ETD machines, require far less time and resources to install these systems. Moreover, TSA officials stated that the in-line systems that best meet the screening needs of larger airports take longer to plan and build because of the added complexity and scale of the upgrades and the coordination required among multiple stakeholders. TSA anticipates that in the next 5 years about 60 percent (1,153 of 1,933) of its EDS machines will reach the end of their useful life of about 10 years and will need to be replaced, as shown in figure 7. As a result, to ensure that 100 percent of checked bags continue to be screened as required by ATSA, TSA revised its focus from replacing stand-alone EDS in airport lobbies with in-line systems to replacing its aging fleet of EDS and ETD machines, a process it calls recapitalization. However, TSA reported that it will continue to collaborate with airports or airlines to install optimal in-line systems if doing so coincides with efforts to recapitalize aging EDS machines or if the existing in-line systems do not meet current TSA standards.
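The percentages cited above follow directly from the counts in the text; a quick check confirms them and shows that the category totals sum to the 337 optimally configured airports.

```python
# Check of the shares cited in the report; all counts are taken from the text.
# For each airport category: (airports with optimal solutions, total airports).
optimal_by_category = {
    "X": (10, 28), "I": (28, 57), "II": (46, 77), "III": (96, 127), "IV": (157, 157),
}
shares = {cat: round(100 * done / total) for cat, (done, total) in optimal_by_category.items()}
print(shares)  # {'X': 36, 'I': 49, 'II': 60, 'III': 76, 'IV': 100}

# Category totals sum to the 337 airports reported as optimally configured.
print(sum(done for done, _ in optimal_by_category.values()))  # 337

# EDS machines expected to reach the end of their ~10-year useful life within 5 years.
eol_share = round(100 * 1153 / 1933)
print(eol_share)  # 60 percent
```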
In August 2011, TSA issued its EDS and ETD Recapitalization and Optimization Plan, which establishes the method and criteria for prioritizing when and how EBSP recapitalization and optimization will occur. The plan notes that many in-line recapitalization projects will include an optimization component. For example, a number of early in-line screening systems are likely to require optimization, among other things, to improve performance, increase efficiency, and reduce operating costs. At one airport we visited with an early in-line system, we observed a baggage handling system that needed to be replaced because it had sharp curves and steep grades that led to an excessive number of errors and jams. The airport was involved in TSA's recapitalization pilot program, and airport officials anticipated TSA supporting the optimization of this system as part of recapitalization. Consistent with current law, TSA enters into reimbursable agreements through which it generally funds 90 percent of the cost of an eligible airport facility modification project to support the installation of an optimal system, with an airport or airline funding the remaining 10 percent of the project's cost. If the federal cost share for airport facility modification projects is reduced, TSA may be able to use available funding to install a greater number of optimal solutions than currently anticipated. From fiscal year 2003 through fiscal year 2007, in accordance with law then in effect, TSA entered into LOIs at a 75 percent federal cost share. Looking forward, we used TSA's projections of airport modification costs for each fiscal year, 2012 through 2030, as presented in its latest LCCE, and estimated the amount of expenditures for airport modifications that could be shifted from TSA to airports if the federal government's cost share were reduced from the current 90 percent to 75 percent.
As discussed later in this report, we found that TSA's LCCE data are of questionable reliability for a precise estimate. However, the data can serve to provide a rough indication of how much TSA could save if the cost share were adjusted. Thus, we estimate that if the federal cost share for such projects returned to the 75 percent TSA applied to many of the reimbursable agreements it entered into prior to fiscal year 2008, rather than the current federal share of 90 percent, TSA's anticipated expenditures for these modifications would be roughly $300 million less through fiscal year 2030. TSA had previously determined that assigning costs among industry stakeholders and the nation as a whole is difficult because operational improvements to the baggage handling systems and national security benefits are difficult to quantify. This, in turn, makes it difficult to develop a cost share formula that would allow TSA to allocate costs in proportion to benefits. Consistent with the Intelligence Reform and Terrorism Prevention Act of 2004, TSA commissioned the 2006 Baggage Screening Investment Study Working Group to prepare a report for the Aviation Security Advisory Committee, which examined what an appropriate federal government/airport cost share should be for the installation of checked baggage screening equipment. The working group, which consisted of over 60 members representing, among others, TSA, FAA, airports, airlines, designers of baggage handling systems, and financial institutions, was unable to develop a consensus on an appropriate cost share formula, in large part because of the difficulties of measuring benefits, differing views on the federal responsibility for funding capital investments related to baggage screening, and the competing demands on the federal budget. As a result, potential cost share options were not submitted to the Congress as part of DHS's fiscal year 2006 budget submission.
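The arithmetic behind the rough $300 million figure is a straightforward cost-share shift, sketched below. The roughly $2.0 billion in total projected facility modification costs is an assumption implied by the report's savings figure; the report itself does not state the total.

```python
# Expenditures shifted from TSA to airports when the federal cost share drops.
# The ~$2.0 billion total for projected facility modification costs through
# fiscal year 2030 is an assumption back-derived from the report's "roughly
# $300 million" savings estimate, not a figure stated in the report.

def federal_savings(total_cost, old_share, new_share):
    """Reduction in federal expenditures from lowering the federal cost share."""
    return total_cost * (old_share - new_share)

savings = federal_savings(2.0e9, 0.90, 0.75)
print(f"${savings / 1e6:.0f} million")  # $300 million
```

Each percentage point of cost share on that assumed base is worth about $20 million, which is why a 15-point reduction yields savings on the order of $300 million.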
Representatives of all 10 airports we visited told us that they benefit from the installation of integrated, in-line baggage screening systems. Specifically, officials from 9 of the 10 airports cited the reduction of passenger congestion in airport terminals because stand-alone EDS machines were removed from the lobby or ticketing areas; officials from half of the airports noted that in-line systems reduce the number of lost or stolen bags by creating a streamlined process for moving checked baggage directly from where baggage is checked by the passenger and airline to the aircraft; and officials from 3 of 10 airports noted that in-line systems facilitate airport growth. However, for various reasons, officials representing 8 of the 10 airports opposed a reduction in the federal cost share that would increase airports' share of modification costs. Specifically, officials from half of the airports stated the following four concerns: First, assuming a larger share of airport modification costs would pose hardships because of current fiscal or funding constraints. Second, airports incur additional (that is, nonallowable) costs that are necessary to building an in-line system, but which TSA will not reimburse. Examples of the nonallowable costs the airports cited include the costs of designing an in-line system and constructing rooms in which screeners manually screen bags that have not previously been cleared. As a result, officials from 5 of the 10 airports we spoke with told us that after necessary, but nonallowable, costs were included, the airports were already paying for more than 10 percent of the modification costs associated with in-line systems. Third, airports have a backlog of capital projects or would rather fund projects that will produce additional revenue, such as parking garages or larger areas for concessions, than projects that are related to TSA's security responsibilities.
TSA will be the primary beneficiary of in-line baggage screening systems because the integration and consolidation of these systems will enable TSA to reduce the number of baggage screeners and provide TSA with other operational efficiencies. TSA’s August 25, 2011, life cycle cost estimate identified a total program cost for EBSP of $49.2 billion through fiscal year 2030. The $49.2 billion includes $2.65 billion for EBSP program operation and management; $11.03 billion for capital costs, including, among other things, recapitalization and facility modifications for optimization; $14.89 billion for operations and maintenance of equipment; $18.42 billion for screener salaries; and $2.22 billion for research and development and other miscellaneous related costs. Additionally, TSA officials reported that the program is expected to provide life cycle cost savings of $537 million. However, we found that the cost estimates are of questionable reliability for a precise estimate. TSA continues to revise its life cycle cost estimates. For example, its August 2011 EBSP life cycle cost estimate report stated that new requirements, including recapitalization and upgrading the efficiency of early in-line systems, will likely lead to a gap between anticipated program needs and anticipated funding during fiscal years 2012 to 2017, totaling up to $436 million. However, in December 2011 TSA officials told us the DHS Acquisition Review Board had requested that TSA revise the EBSP funding plans and projections to more accurately reflect current budget constraints and reduced funding available for the program. According to TSA officials, they plan to complete the revised EBSP planning estimates and funding projections to help eliminate the potential funding gap before the next Acquisition Review Board meeting in May 2012. 
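The $49.2 billion EBSP total is the sum of the five cost components listed above, which can be verified directly.

```python
# The EBSP life cycle cost components listed in the report, in billions of dollars.
components = {
    "program operation and management": 2.65,
    "capital costs (recapitalization, facility modifications)": 11.03,
    "operations and maintenance of equipment": 14.89,
    "screener salaries": 18.42,
    "research, development, and other related costs": 2.22,
}
total = round(sum(components.values()), 1)
print(total)  # 49.2 -- matching the report's $49.2 billion through fiscal year 2030
```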
EBSP senior program officials explained that TSA will address the potential funding gap by (1) controlling the costs associated with engineering initiatives and improvements in technology performance, (2) delaying funding of some new in-line systems and recapitalization projects, and (3) extending the useful life of equipment beyond 10 years in cases where replacement could be delayed. Although TSA's methods for developing its LCCE reflect features of best practices, its methods do not fully adhere to these practices. As highlighted in our past work, a high-quality, reliable cost estimation process provides a sound basis for making accurate and well-informed decisions about resource investments, budgets, assessments of progress, and accountability for results and thus is critical to the success of a program. According to the Office of Management and Budget (OMB), federal agencies must maintain current and well-documented estimates of program costs, and these estimates must encompass the program's full life cycle. Without such an estimate, agencies are at increased risk of making poorly informed investment decisions, securing insufficient resources to effectively execute defined program plans and schedules, and experiencing program cost and schedule overruns and performance shortfalls. As highlighted in our Cost Estimating and Assessment Guide, a reliable cost estimate has four characteristics—it is comprehensive, well documented, accurate, and credible. We reviewed TSA's cost estimation procedures for the EBSP and assessed the extent to which the agency met the four characteristics, as shown in table 2. Our assessment showed that TSA's EBSP estimates partially met three characteristics and minimally met one characteristic of a reliable cost estimate.
Specifically, TSA’s cost estimate was as follows: Partially comprehensive because the estimate defines the program, reflects the current schedule, is technically reasonable, includes assumptions identified by a team of personnel and engineers, and provides risks related to detection standards. However, the cost estimate is not considered fully comprehensive because it does not incorporate costs associated with all security threats, lacks a detailed product-oriented work breakdown structure that covers the entire scope of work, and lacks a single technical baseline. Without fully accounting for life cycle costs, management may have difficulty successfully planning program resource requirements and making wise decisions. Further, the program lacks a defined end date. A reasonable criterion is that the estimate capture at least 10 years of costs beyond the planned full operational capability date—the date at which optimal systems are fully deployed and operating at all locations. However, we cannot determine whether the time frame is sufficient because we have not received documentation to support the program’s official, planned full operational capability date. According to TSA, the EBSP does not have a defined end date for procurement because maintaining compliance with the 100 percent screening mandate established by ATSA requires TSA to continuously procure and replace equipment as it reaches the end of its useful life. TSA also believes that it is following DHS acquisition guidance outlined under the acquisition decision memorandum dated January 13, 2005, for estimating threshold dates. Nevertheless, the EBSP still lacks a defined, official full operational capability date, without which we can neither determine whether the time frame used in the LCCE is sufficient nor verify that the life cycle cost estimate is fully comprehensive. Partially documented because TSA used relevant data to help develop the estimate. 
For example, TSA’s estimated price for the equipment is based on existing contracts for EBSP equipment purchases, maintenance costs, LOI agreements, and OTAs. TSA also provided narratives, briefings, and documents to describe the program requirements, purpose, technical characteristics, and acquisition strategy, and explained how calculations were performed. However, TSA did not adequately document many assumptions or methodologies underlying its cost model to the extent that would allow someone unfamiliar with the cost estimate, using only the available documentation, to easily re-create the estimate. For example, equipment purchase and hardware upgrade costs were based in part on estimates from engineers and contract specialists rather than historical or analogous data. Unless ground rules and assumptions are clearly documented, the cost estimate will not have a basis for areas of potential risk to be resolved. In addition, TSA also provided little or no evidence that the assumptions and methodologies underlying the cost estimate were approved by management.officials agreed that additional documentation could improve the outside reviewers’ ability to evaluate the estimate. According to TSA, the EBSP plans to follow DHS guidance to implement software dedicated to the estimation, documentation, and reporting of costs during all phases of the EBSP program life cycle to help address documentation concerns. Partially accurate because while the estimate is properly adjusted for inflation, differences between planned and actual costs are not fully documented, explained, or reviewed. In addition, we cannot determine whether the cost estimate is unbiased—that is, neither overly conservative nor overly optimistic—because the program did not perform an uncertainty analysis that meets best practices. 
While TSA agreed that costs should be documented, it could not explain why the differences between planned and actual costs were not being fully documented, explained, or reviewed.

Minimally credible because while TSA identified changes in cost for each scenario against the baseline and developed a limited risk analysis, TSA did not complete other relevant activities to ensure that the estimate accounts for bias and uncertainty. For example, the agency did not (1) document in detail the assumptions and parameters associated with its sensitivity analysis, such as detailed calculations on how each parameter was varied between its minimum and maximum values; (2) conduct a fully objective uncertainty analysis that derives the point estimate percentile rather than assumes it; (3) cross-check major cost elements to see whether results are similar; and (4) use an independent cost estimate to validate the cost estimate. Given the important role that an independent cost estimate provides in developing an objective and unbiased assessment of whether the program estimate can be achieved, developing and using an independent cost estimate would provide decision makers with insight into a program’s potential costs and reduce the risk of underfunding a program. TSA officials told us they did not perform a complete uncertainty analysis because it was too costly and time consuming. TSA officials concurred that an independent cost estimate was not done for the EBSP and agreed that completing an independent estimate would be helpful. Regarding the credibility of TSA’s estimates, our past work has shown that program cost estimates that are independently validated help improve the confidence that the estimate is credible, are needed for making timely and informed budget decisions, and help reduce the likelihood of unanticipated program cost growth.
Our prior reviews of several DHS programs, including the EBSP, have also shown that if cost estimates are not validated in accordance with DHS acquisition management directive requirements at the start of an acquisition program, it is difficult to assess whether a program is being deployed within planned budgets. Further, DHS’s Acquisition Management Directive requires major program cost estimates to be validated early in the decision-making process, before programs can receive authorization for acquisition contracts at the DHS Acquisition Review Board meeting. However, we found that since 2008 the DHS Acquisition Review Board allowed the EBSP to proceed with acquisition contracts before the LCCE was independently validated by DHS, which is inconsistent with DHS policy. In May 2010, the DHS Cost Analysis Division reviewed the EBSP life cycle cost estimate and found that it needed more comprehensive data and that its accuracy could not be determined. As of December 2011, the estimate had not been independently validated by DHS. According to TSA, DHS guidance that required validation of cost estimates was not in place until November 2008. The interim DHS directive for LCCEs and validation was established in November 2008, and the Cost Analysis Division was designated the authority responsible for independent cost estimates on January 24, 2010. However, the DHS Acquisition Review Board did not request a validated LCCE until February 25, 2011. TSA officials also stated that since the program’s budget circumstances changed over time, the cost estimate needed to be revised to reflect a constrained budget for fiscal year 2013 and other program changes. 
For example, TSA officials stated that new requirements for the EBSP, particularly the shift in focus to recapitalization of the aging EDS and ETD fleet and upgrades to in-line baggage screening systems and threat detection levels, contributed to continued program cost growth and delayed efforts to validate the latest LCCEs. As a result, the federal government’s portion of the cost estimates for the EBSP has increased from approximately $20.5 billion in fiscal year 2010 to $25.4 billion in fiscal year 2011—a 24 percent increase. TSA is currently working with DHS on the validation of the cost estimate for the next Acquisition Review Board meeting scheduled for May 2012. According to our 2009 Cost Estimating and Assessment Guide, endorsed by the OMB and DHS, cost estimates are integral to determining and communicating a realistic view of likely cost outcomes that can be used to plan the work necessary to develop, produce, and support a program. Taking steps to ensure that its cost estimates for the EBSP conform to cost estimating best practices will help provide TSA with a sound basis for understanding how the program can be sustained in future years. Another foundation for making informed budget decisions is the acquisition program baseline, which is to document a program’s critical cost elements, including acquisition costs and life cycle costs. According to DHS’s acquisition guidance, the program baseline is the contract between the program and departmental oversight officials and must be established at program start to document the program’s expected cost, deployment schedule, and technical performance. Establishing such a baseline at program start is important for defining the program’s scope, assessing whether all life cycle costs are properly calculated, and measuring how well the program is meeting its goals.
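As a rough arithmetic check on the cost growth cited above, the reported figures (approximately $20.5 billion in fiscal year 2010 rising to $25.4 billion in fiscal year 2011) can be verified to yield the 24 percent increase. The sketch below is illustrative; rounding to the nearest whole percent is our assumption:

```python
# Back-of-the-envelope verification of the reported EBSP cost growth.
# Dollar figures come from the report; the rounding convention is ours.
def percent_increase(old, new):
    """Percentage growth from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

growth = percent_increase(20.5, 25.4)  # billions of dollars, FY2010 -> FY2011
```

The result matches the 24 percent increase the report cites.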
As we have previously reported, establishing realistic original baseline estimates is important for minimizing the risks of poorly defined requirements and achieving better program outcomes. By tracking and measuring actual program performance against this formal baseline, management can be alerted to potential problems, such as cost growth or changing requirements, and has the ability to take early corrective action. However, since the inception of the program more than eight years ago, the EBSP has not had a DHS-approved acquisition program baseline, and DHS did not require TSA to complete an acquisition program baseline until November 2008. An approved baseline will provide DHS with additional assurances that TSA’s approach is appropriate and that the capabilities being pursued are worth the expected costs. DHS officials told us that several reorganizations of DHS offices responsible for approving the baseline and a lack of functional expertise within the agency contributed to further delays in approving the EBSP acquisition program baseline. According to TSA officials, they have twice submitted an acquisition program baseline to DHS for approval. In November 2009 and February 2011, TSA requested approval of a program baseline, but according to DHS officials TSA did not have a fully developed life cycle cost estimate. In November 2011, DHS told TSA that it needed to revise the life cycle cost estimates as well as its procurement and deployment schedules to reflect budget constraints. DHS officials told us that they could not approve the acquisition program baseline as written because TSA’s estimates were significantly over budget. TSA officials stated that TSA is currently working with DHS to amend the draft program baseline for approval. TSA officials stated that they plan to resubmit the revised acquisition program baseline before the next Acquisition Review Board meeting in May 2012.
Establishing and approving a program baseline, as DHS and TSA currently plan to do for the EBSP, could help DHS assess the program’s progress in meeting its goals and achieve better program outcomes. TSA’s EBSP is aimed at increasing airport screening efficiencies and addressing the continuing threat of explosives concealed in checked baggage, at a total estimated cost to the federal government and the private sector of close to $50 billion through fiscal year 2030. Given the size of the federal investment, it is vital that TSA ensure effective stewardship over these resources and convey useful information to the Congress about the scope and cost of the program. However, the limitations we identified in TSA’s EBSP cost estimates raise questions about their reliability. Taking steps to ensure that its cost estimates meet the four characteristics for high-quality and reliable cost estimates would provide TSA with increased assurance about the reliability of the estimated total cost of the program and better position it to account for all resources and associated costs required to develop, implement, and sustain the EBSP. In order to strengthen the credibility, comprehensiveness, and reliability of TSA’s cost estimates and related savings estimates for the EBSP, we recommend that the Administrator of TSA ensure that its life cycle cost estimates conform to cost estimating best practices. On March 30, 2012, we provided a draft of this report to DHS for its review and comment. DHS provided written comments on April 19, 2012, which are reprinted in appendix IV. In its written comments, DHS concurred with our recommendation that TSA ensure that its life cycle cost estimates conform to cost estimating best practices and discussed efforts under way to address it. DHS further acknowledged the importance of producing life cycle cost estimates that are comprehensive, well documented, accurate, and credible so that they can be used to support DHS funding and budget decisions.
DHS also noted that after conducting an internal review, TSA is implementing a management directive that applies DHS guidance and the best practices from the GAO Cost Estimating and Assessment Guide. As part of this effort, TSA is (1) establishing a working group and executive board to review program cost estimates to validate whether the estimates are credible and affordable, (2) requiring all life cycle cost estimates to be approved by DHS to ensure consistency and quality across TSA programs, (3) purchasing and training its employees on specialized cost estimating software, and (4) initiating hiring actions to hire additional cost estimating personnel. TSA believes that this will institutionalize cost estimating best practices within the organization and ultimately allow TSA and the Department to make better-informed investment decisions. These are positive steps; however, additional time will be needed to assess whether they have been fully and consistently implemented in accordance with GAO best practices. We are sending copies of this report to the Secretary of Homeland Security, the Assistant Secretary of the Transportation Security Administration, and appropriate congressional committees. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. We examined the Department of Homeland Security’s (DHS) Transportation Security Administration’s (TSA) operation of the Electronic Baggage Screening Program (EBSP) to assess the program’s current status, alternative cost sharing options, and cost estimates. 
Specifically, we addressed the following questions:

What is the status of TSA’s efforts to install optimal checked baggage screening systems in collaboration with airports?

How would reducing the current federal cost share for eligible airport modification projects from 90 percent to its previous level of 75 percent affect the amount that TSA pays for these modifications, and what benefits, if any, do airports report receiving from in-line baggage screening systems?

To what extent are TSA’s cost estimation procedures consistent with best practices and is TSA’s acquisition baseline consistent with DHS guidance?

To determine the status of TSA’s efforts to install optimal checked baggage screening systems, we obtained data as of December 2011 and January 2012 from TSA, such as the current number of airports with at least one in-line system, and the number of airports with optimal systems. We also obtained data on the number of airports configured exclusively with in-line screening, the number of airports configured with a mix of in-line and stand-alone explosives detection systems (EDS), and the number of airports using only stand-alone EDS for the same time period. We also collected data on the overall number of operational in-line systems and EDS and explosives trace detection (ETD) machines as of December 2011. We reviewed documentation from TSA’s EBSP, including the EBSP strategic plans for fiscal years 2006, 2008, and 2009, and the 2011 EDS and ETD Recapitalization and Optimization Plan. We assessed the reliability of the various data TSA provided about airports, including the number of TSA-regulated airports (by category) and the numbers of airports with the different configurations of baggage screening systems (in-line or stand-alone), and the investment and budget expenditure dollar values in the letters of intent and other transaction agreements by questioning cognizant TSA officials and obtaining extensive documentation about these various data.
We found these data to be sufficiently reliable for the purposes of our report. To determine how reducing the federal cost share from the current 90 percent to the previous federal cost share of 75 percent for eligible airport modification projects may affect the amount that TSA pays for these modifications, we calculated estimates based on TSA’s August 2011 projections of how much airport modifications will cost in the future. These projections represent TSA’s best estimate for how much it will spend on airport modifications for in-line systems each year from fiscal years 2012 through 2030. We also reviewed the reliability of the cost estimate by evaluating how well TSA followed best practices detailed in the GAO Cost Estimating and Assessment Guide (see below). While TSA’s process for estimating costs only partially meets the characteristics of a reliable cost estimate, the data can serve to provide a rough indication of how much could be saved by reducing the federal cost share for optimization. To assess how the installation of in-line baggage handling systems may benefit the airports that receive them, we visited a nonrandom sample of 10 airports. We chose these airports based on the size of airport, type of checked baggage screening systems installed, and status of airport facility modification completion. We discussed cost share with officials at each airport representing the airport authority, tenant airlines, and TSA’s Federal Security Director. In addition, we interviewed officials from the largest industry associations that represent airport executives, airports and airlines (the American Association of Airport Executives, the Airports Council International North America, and the Air Transport Association). We also interviewed an official from the Association for Airline Passenger Rights. In addition, we discussed potential benefits of optimization with aviation security experts.
These results cannot be generalized to the entire industry, but did provide broader perspectives on the issues and costs associated with the EBSP. To assess the extent to which TSA’s methods for estimating costs for EBSP are consistent with best practices and its acquisition program baseline is consistent with DHS guidance, we analyzed TSA’s most recent life cycle cost estimate and recapitalization report finalized in August 2011. Specifically, we used best practices in the GAO Cost Estimating and Assessment Guide to evaluate TSA’s estimating methodologies, assumptions, and results to assess whether the official cost estimates were comprehensive (i.e., includes all costs), accurate, well documented, and credible. Our Cost Estimating and Assessment Guide considers an estimate to be comprehensive if its level of detail ensures that all pertinent costs are included and no costs are double counted; accurate if it is not overly conservative, is based on an assessment of the most likely costs, and is adjusted properly for inflation; well documented if the estimate can be easily repeated or updated and can be traced to original sources through auditing; and credible if the estimate has been cross-checked with an independent cost estimate and a level of uncertainty associated with the estimate has been identified. We also interviewed the TSA EBSP office’s cost estimating team and its consultants to obtain a detailed understanding of their methodology, the cost model, and data. In doing so, we interviewed cognizant program officials, including the Program Manager and cost analysis team, regarding their respective roles, responsibilities, and actions in developing the cost estimate, reviewing it, or both. We examined data reliability of the cost estimate by doing the following:

Obtaining cost estimates and reviewing how each major element was calculated with an emphasis on the basis for the estimate and strength and quality of the supporting documentation.
Verifying that the parameters used to create each estimate were valid and applicable by comparing to available cost estimating references, posing questions to the cost estimators for clarification, and relying on other technical sources for cross-checking.

Verifying that calculations were correct for each major element.

Verifying that escalation was properly applied and elements rolled up accurately to the overall program cost estimate.

We reviewed TSA’s EBSP cost estimates to determine whether the characteristic was (1) not met if the agency provided no evidence that satisfied any portion of the criterion, (2) minimally met if the agency provided evidence that satisfied less than one-half of the criterion, (3) partially met if the agency provided evidence that satisfied about one-half of the criterion, (4) substantially met if the agency provided evidence that satisfied more than one-half of the criterion, and (5) met if the agency provided complete evidence that satisfied the entire criterion. One analyst assigned a value ranging from 1 to 5 indicating the extent to which the agencies met each best practice and averaged the values for the practices that were associated with each characteristic. A second analyst independently verified the results. We also interviewed program officials from TSA and DHS responsible for each cost estimate about the estimate’s derivation. In doing so, we independently assessed the cost estimates for the current EBSP, as provided to us in August 2011, against our best practices.
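The scoring approach described above can be sketched as a small calculation: each best practice under a characteristic receives a score from 1 to 5, the scores are averaged, and the average maps back to a qualitative rating. This is an illustrative reconstruction, not GAO's actual tool, and the rounding of the average to the nearest label is our assumption since the report does not spell that step out:

```python
# Illustrative sketch of the 1-5 scoring and averaging approach.
# Label mapping follows the report's scale; rounding is our assumption.
RATING_LABELS = {
    1: "Not met",
    2: "Minimally met",
    3: "Partially met",
    4: "Substantially met",
    5: "Met",
}

def characteristic_rating(practice_scores):
    """Average the practice scores for one characteristic and map the
    rounded average back to a qualitative rating."""
    avg = sum(practice_scores) / len(practice_scores)
    return avg, RATING_LABELS[round(avg)]

# e.g., three practices scored 3, 4, and 5 average to 4.0
avg, label = characteristic_rating([3, 4, 5])
```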
To understand how TSA is working to make better informed budget decisions and complying with DHS guidance to develop an acquisition program baseline, we reviewed DHS guidance on acquisitions and documents related to TSA’s efforts to coordinate with DHS on developing an acquisition program baseline for EBSP, which DHS considers the contract between the program and departmental oversight officials to document the program’s expected cost, deployment schedule, and technical performance. We also reviewed EBSP Acquisition Review Board decisions and relevant acquisition decision memos during the period 2005 through 2011. Additionally, we interviewed TSA and DHS officials, including officials in the TSA Chief Financial Officer’s office and DHS’s Program Accountability and Risk Management Office to identify what procedures have been put in place to approve the acquisition program baseline. To gain a better understanding of issues across all of our objectives, including the development of optimal systems, status of implementation, funding challenges, impact of a change in the cost share formula at the airport level, and cost estimation, we conducted site visits to California, New York, Massachusetts, Washington, D.C., and Florida to interview local airport officials, regional TSA officials, and airline representatives. To get a range of airports for our site visits, we made our selections based on the size of airport, type of checked baggage screening systems installed, and status of airport facility modification completion. We also considered recommendations from TSA and industry association officials about which airports to visit. Because we selected a nonprobability sample of airports, the information we obtained from these interviews and visits cannot be generalized to all airports. However, we believe that observations obtained from these visits provided us with a greater understanding of the airport officials’ perspectives. 
On these site visits, we interviewed airport, airline, and TSA officials responsible for financing, operating, and installing the checked baggage systems within their respective airports. We conducted this performance audit from October 2010 through April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. TSA, through its EBSP, has deployed EDS and ETD machines in a variety of in-line and stand-alone configurations at airports to streamline airport and TSA operations, reduce screening costs, and enhance security. The following three tables provide information on the status of checked baggage screening systems. Table 3 highlights the different system configurations for airports that have optimal checked baggage systems by airport category. Table 4 shows the number of TSA-regulated airports with at least one in-line system. Table 5 provides the numbers of in-line systems and stand-alone EDS and ETD machines at TSA-regulated airports. In determining that TSA’s processes for developing EBSP cost estimates do not fully comply with best practices, we evaluated TSA’s cost estimation methods against our 2009 Cost Estimating and Assessment Guide. (See table 6.) We applied the following scale across the four categories of best practices:

Not met: TSA provided no evidence that satisfies any portion of the criterion.

Minimally met: TSA provided evidence that satisfies less than one-half of the criterion.

Partially met: TSA provided evidence that satisfies about one-half of the criterion.

Substantially met: TSA provided evidence that satisfies more than one-half of the criterion.
Met: TSA provided complete evidence that satisfies the entire criterion.

In addition to the contact named above, Glenn Davis, Assistant Director, and Daniel Rodriguez, Analyst-in-Charge, managed this assignment. Wendy Dye, Daren Sweeney, and Yee Wong made major contributions to the planning and all other aspects of the work. David Alexander and Richard Hung assisted with design, methodology, and data reliability. Chuck Bausell and Jack Wang assisted with economic analysis. Jason Lee, Stacey Steele, and Karen Richey assisted with life cycle cost estimate analysis. Nathan Tranquilli assisted with acquisition and contracting issues. Linda Miller provided assistance in report preparation. Thomas Lombardi provided legal support.

TSA’s EBSP, one of DHS’s largest acquisition programs, aims to improve security and lower program life cycle costs by optimizing checked baggage screening systems that best meet the needs of the nation’s airports. This includes, among other things, the integration of baggage screening equipment into baggage handling systems, referred to as in-line systems. Installing in-line systems typically requires airports to undertake costly facility modification projects, for which TSA will generally reimburse up to the applicable federal cost share. As requested, GAO examined (1) the status of TSA’s efforts to install optimal checked baggage screening systems in collaboration with airports, (2) how reducing the federal cost share for eligible airport modification projects from 90 percent to its previous level of 75 percent would affect the amount that TSA pays for modifications, and (3) whether TSA’s methods for estimating and validating costs for the EBSP are consistent with best practices. GAO reviewed EBSP planning and status documents, compared TSA’s cost estimation approach against GAO best practices, and visited 10 airports selected in part based on the status of the EBSP optimization at these airports.
Although the results from these visits are not generalizable, they provided insights into the program. The Transportation Security Administration’s (TSA) Electronic Baggage Screening Program (EBSP) reports that 76 percent of the airports (337 of 446) the agency regulates for security have a mix of in-line and stand-alone baggage screening configurations that best meet airport needs (i.e., optimal systems). However, only 36 percent (10 of 28) of the nation’s larger airports (based on factors such as the total number of takeoffs and landings annually) have complete optimal systems. This is because the larger airports generally need more complex in-line systems and often require a significant amount of airport infrastructure modification and construction. In August 2011, TSA shifted its focus from installing optimal baggage screening systems to replacing aging machines (recapitalization). However, TSA plans to continue to optimize systems during many of its recapitalization projects. Using TSA cost estimates, GAO estimates that reducing the portion of costs that TSA pays for facility modifications associated with the installation of optimal baggage screening systems, from 90 percent to 75 percent, would lower the federal government’s cost for airport modification projects it supports by roughly $300 million from fiscal year 2012 through fiscal year 2030. Officials from all 10 airports with whom GAO spoke stated that airports benefit from the installation of integrated, in-line baggage screening systems. The primary benefit, cited by representatives from 9 of the airports GAO visited, is that passenger congestion is reduced by removing stand-alone machines from lobbies or ticketing areas. Other benefits cited by airports included a reduction in lost baggage and increased screening and passenger throughput. However, for a variety of reasons, representatives from 8 of 10 airports GAO visited opposed a reduction in the federal cost share for related airport modifications.
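The cost-share arithmetic above can be sketched in a few lines. The roughly $300 million in savings from a 15-percentage-point reduction in the federal share implies aggregate eligible modification costs of about $2 billion over fiscal years 2012 through 2030; that $2 billion total is our back-of-the-envelope inference from the report's figures, not a figure the report states:

```python
# Sketch of the federal cost-share savings arithmetic.
# The $300 million savings figure comes from the report; the implied
# ~$2 billion aggregate project cost is our inference (savings / 0.15).
def federal_share(total_cost, share):
    """Federal portion of a project's cost at a given share rate."""
    return total_cost * share

implied_total = 300e6 / (0.90 - 0.75)  # ~= $2.0 billion in eligible costs
savings = federal_share(implied_total, 0.90) - federal_share(implied_total, 0.75)
```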
TSA established cost estimates for the EBSP to help identify total program cost, recapitalization cost, and potential savings resulting from installing optimal systems, but its processes for developing these estimates do not fully comply with best practices. These include, among other things, ensuring that the estimates comprise all costs and are well documented. For example, TSA’s estimates were properly adjusted for inflation and were developed using relevant data, such as existing contracts for equipment purchases and maintenance costs. However, the estimates did not include all costs, for example, the costs associated with detecting all security threats, and many assumptions and methodologies underlying the cost model were not clearly documented. As highlighted in GAO’s past work, a high-quality, reliable cost estimation process provides a sound basis for making accurate and well-informed decisions about resource investments and budgets and thus is critical to the success of a program. Developing accurate cost estimates would help ensure that the program does not experience unanticipated cost growth and program funding needs resulting from future recapitalization and facility modification activities. In addition, TSA is working with the Department of Homeland Security (DHS) to develop an approved acquisition program baseline, which according to DHS guidance is the contract between program and departmental oversight officials for what will be delivered, how it will perform, and what it will cost. TSA expects the baseline to be approved in May 2012. GAO recommends that TSA ensure that its life cycle cost estimates conform to cost estimating best practices. DHS concurred with GAO’s recommendation.
Cable television emerged in the late 1940s to fill a need for television service in areas with poor over-the-air reception, such as in mountainous or remote areas. By the late 1970s, cable began to compete more directly with free over-the-air television by providing new networks—available only on cable systems—such as HBO (introduced in 1972), Showtime (introduced in 1976), and ESPN (introduced in 1979). According to FCC, cable’s penetration rate—as a percent of television households—increased from 14 percent in 1975 to 24 percent in 1980 and to 65 percent by 2002. Cable television is by far the largest segment of the subscription video market, a market that includes cable television, satellite service (direct broadcast satellite (DBS) providers such as DirecTV), and other technologies that deliver video services to customers’ homes. Cable companies deliver video programming to customers through cable systems. These systems consist of headends—facilities where programming from broadcast and cable networks is aggregated—and distribution facilities—the wires that carry the programming from the headend to customers’ homes. Depending on the size of the community, a single headend can serve multiple communities or several headends may be required to serve a single large community. At the community level, cable companies obtain a franchise license under agreed-upon terms and conditions from a franchising authority, such as a township or county. In some cases, state public service commissions are also involved in cable regulation. During cable’s early years, franchising authorities regulated many aspects of cable television service, including franchise terms and conditions and subscriber rates. In 1984, the Congress passed The Cable Communications Policy Act, which imposed some limitations on franchising authorities’ regulation of rates. 
However, 8 years later, in response to increasing rates, the Congress passed The Cable Television Consumer Protection and Competition Act of 1992. The 1992 act required FCC to establish regulations ensuring reasonable rates for basic service—the lowest level of cable service that includes the broadcast networks—unless a cable system has been found to be subject to effective competition, which the act defined. The act also gave FCC authority to regulate any unreasonable rates for upper tiers (often referred to as expanded-basic service), which includes cable programming provided over and above that provided on the basic tier. Expanded-basic service typically includes such popular cable networks as USA Network, ESPN, CNN, and so forth. In anticipation of growing competition from satellite and wire-based providers, the Telecommunications Act of 1996 phased out all regulation of expanded- basic service rates by March 31, 1999. However, franchising authorities retain the right to regulate basic cable rates in cases where no effective competition has been found to exist. As required by the 1992 act, FCC annually reports on cable rates for systems found to have effective competition compared to systems without effective competition. To fulfill this mandate, FCC annually surveys cable franchises regarding their cable rates. In 2002, the survey included questions about a range of cable issues including the percentage of subscribers purchasing non-video services and the specifics of the programming channels offered on each tier to better understand the cable industry. Until recently, cable companies usually encountered limited competition in their franchise areas. Some franchise agreements were initially established on an exclusive basis, thereby preventing wire-based competition to the incumbent cable provider. 
In 1992, the Congress prohibited the awarding of exclusive franchises, and in 1996, the Congress took steps to allow telephone companies and electric companies to enter the video market. Still, only limited wire-based competition has emerged, in part because it takes large capital expenditures to construct a cable system. However, competition from DBS has grown rapidly in recent years. Initially unveiled in 1994, DBS served over 18 million American households by June 2002. Today, two of the five largest subscription video service providers are DirecTV and EchoStar, the two primary DBS companies. In a recently released report, we found that competition in the subscription video market can have a significant impact on cable rates. Using an econometric model, we found that franchise areas with a second wire-based video provider had rates approximately 17 percent lower than similar franchise areas without such a competitor. We did not, however, find that competition from DBS providers is associated with lower cable prices, although we did find that where DBS companies provide local broadcast networks to their customers, cable companies provide more channels than in areas where DBS companies do not provide local broadcast channels. Moreover, we also found that DBS providers obtain a substantially higher level of subscribers in areas where they are providing local broadcast channels. FCC’s annual cable rate survey seeks information on cable franchises’ cost changes that may underlie changes in cable rates during the preceding year. To evaluate the reliability of these statistics, we asked 100 of the approximately 700 franchises that FCC surveyed in 2002 to describe how cost change information that they provided to FCC was calculated. Figure 1 shows the actual portion of the FCC survey which franchises completed to provide their cost change information.
Programming Service Charges in Community

In the following, the "basic cable service tier" or BST is the service tier that includes the retransmission of over-the-air broadcast signals and may include a few satellite or regional channels. A "cable programming service tier" or CPST is any other tier containing programming other than that on the BST, pay-per-channel, or pay-per-view. CPST1 refers to the major CPST and typically meets two criteria: it has the most channels and the most subscribers among the CPST tiers (if more than one CPST is offered). Sometimes a "mini-tier" with considerably fewer channels has the most subscribers among the CPSTs. This mini-tier is considered CPST2, whether or not it has the most subscribers.

Monthly Charges for Programming Services
48. Monthly charge for BST
49. Monthly charge for CPST1
50. Monthly charge for BST plus CPST1 (rows 48 + 49)
51. Year-to-date change in monthly charge on row 50

For July 1, 2001 and July 1, 2002, allocate the change shown on row 51 by estimating the dollars and cents that each factor, below, contributed. The total of these factors (row 58) should equal the change on row 51.
52. License or copyright fees, existing programs
53. License or copyright fees, new programs
54. Headend or distribution facility investment
55. General inflation, not included elsewhere
56. Other cost changes (positive or negative)
57. Non-cost-related factors (positive or negative)
58. Total of rows 52-57 (must equal row 51)

Our discussions with cable franchises indicated considerable variation in how franchises completed this section of the 2002 FCC cable rates survey.
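The row-58 balancing requirement amounts to a simple arithmetic constraint on the six factor entries. A minimal sketch of that check (the function name and tolerance are our own illustration, not part of FCC's survey processing):

```python
def allocation_balances(rate_change, factors, tolerance=0.005):
    """Verify the survey's row-58 constraint: the six cost and
    non-cost factor entries (rows 52-57) must sum to the reported
    year-to-date rate change (row 51). Names and the half-cent
    tolerance are illustrative, not FCC's."""
    return abs(sum(factors) - rate_change) <= tolerance

# A franchise allocating a $1.50 monthly rate increase across rows 52-57:
factors = [0.60, 0.25, 0.30, 0.20, 0.15, 0.00]
print(allocation_balances(1.50, factors))  # True
```

As the interviews described below show, franchises that could not naturally satisfy this constraint often adjusted individual factor entries until it held, rather than using the non-cost line (row 57) to absorb the difference.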
Our preliminary observations indicate that there are two causes for the resulting variation: (1) there were insufficient instructions or examples on how the form was supposed to be completed, leading to confusion among cable operators regarding what to include for the different cost factors and how to calculate each of them; and (2) the requirement that the cost and non-cost factors sum to the reported annual rate increase caused many cable operators to adjust one or more of the cost factors, thereby resulting in data that might not provide an accurate assessment of the cost factors underlying cable rate increases. Lack of adequate instructions. Our interviews with 100 cable franchises indicate that the lack of specific guidance regarding the cost change section of the survey caused considerable confusion about how to fill out the form. Every franchise that we spoke with said it was unclear what FCC expected for at least one of the six factors (five cost factors as well as a non-cost factor); 73 of the 100 franchises said that the instructions were insufficient. In particular, several cable representatives we interviewed noted that there were no instructions or examples to show how to calculate investment, what types of cost elements should go into the other costs category, and what FCC meant by non-cost factors. This lack of guidance created considerable variation in the approaches taken to develop the cost factors. Table 1 provides information on the approaches cable franchises used to complete the portion of the survey pertaining to cost and non-cost factors underlying rate changes. Requirement that factors sum to the reported annual rate change. Our survey of 100 cable franchises that responded to FCC's 2002 cable rates survey indicated that a second source of confusion relates to the requirement that the sum of the underlying cost and non-cost factors (see fig. 1, lines 52-57) equal the change in the franchise's cable rates (see fig. 1, line 51).
This portion of FCC's survey was originally designed during the 1990s, when both basic and expanded-basic services were regulated. At that time, cable companies were required to justify any rate increases the cable company implemented based on cost increases that it had incurred during the year. An FCC official told us that the rate/cost factor portion of the form was designed to mirror a regulatory form that was used at that time to justify rate changes. When expanded-basic services were deregulated on March 31, 1999, FCC realized that cost factors would no longer necessarily equal the yearly rate change because companies were no longer required to tie rate changes to explicit cost factors for regulatory purposes. In the 1999 cable rates survey, FCC added the non-cost line in this section of the survey and continued to require that the cost factors and the non-cost factor sum to the reported annual rate change. FCC officials told us that cable operators could use the non-cost factor element to make up any difference (positive or negative) between their changes in costs and rates. However, based on our findings, it appears that this may not have been clearly communicated to cable franchises. We found that only 10 franchises took this approach; instead, most franchises told us that they chose to change their estimate of one or more of the cost factors. In most cases, cable representatives told us that this meant reducing other cost factors because most franchises told us that their actual annual cost increases for the year covered by the 2002 survey exceeded their rate change for expanded-basic service. In other words, most franchises—84 of the 100 franchises we spoke with—did not provide a complete or accurate accounting of their cost changes for the year. The following are some examples of how the franchises we surveyed chose to equalize the cost factors with the rate change.
- Fifteen franchises said they entered dollar values in the factors until the entire rate increase was justified and did not consider the remaining cost factors.
- Twenty franchises said they chose to adjust the dollar estimates in existing and/or new programming in order to balance costs and rates.
- Seven franchises said they chose to adjust the costs included for investment in order to balance costs and rates.
- Twenty-seven franchises said they chose to adjust the amount of their inflation estimate to ensure that costs and rates were in balance.
- Twenty-six franchises said they chose to adjust the other costs factor to ensure that costs and rate changes were in balance.
- Four franchises said they adjusted more than one of the cost factors in order to balance costs and rates. For example, one franchise chose to adjust all of the factors by a uniform percentage in order to retain a constant ratio of cost increases.

The 1992 Cable Act established three conditions for a finding of effective competition, and a fourth was added in the 1996 Act. Specifically, a finding of effective competition in a franchise area requires that FCC has found one of the following conditions to exist:

- Fewer than 30 percent of the households in the franchise area subscribe to cable service (low-penetration test).
- At least two companies unaffiliated with each other offer comparable video programming service (through a wire or wireless—e.g., DBS—service) to 50 percent or more of the households in the franchise area, and at least 15 percent of the households take service other than from the largest company (competitive provider test).
- The franchising authority offers video programming service to at least 50 percent of the households in the franchise area (municipal test).
- A local telephone company or its affiliate (or any other company using the facilities of such carrier or its affiliate) offers video programming, by means other than direct broadcast satellite, that is comparable to that offered by the cable provider in the franchise area (LEC test).

Franchising authorities have primary authority to regulate basic cable rates. However, these rates may only be regulated if the cable system is not facing effective competition. Under FCC rules, in the absence of a demonstration to the contrary, cable systems are presumed not to face effective competition. The cable operator bears the burden of demonstrating that it is facing effective competition. Once the presence of effective competition has been established, the franchising authority is no longer authorized to regulate basic cable rates. FCC does not independently update or revise an effective competition finding once it is made. An effective competition finding may be reversed if a franchising authority petitions to be recertified to regulate basic rates by demonstrating that effective competition no longer exists. However, such petitions are rare. Our preliminary review of the approximately 700 cable franchises that responded to FCC's 2002 cable rates survey suggests that the agency's lack of any updates or reexamination of the status of competition in franchise areas may lead to some classifications of the competitive status of franchises that do not reflect current conditions. For example:

- Forty-eight of the 86 franchises in the sample that FCC had classified as satisfying the low-penetration test for effective competition reported current information to FCC on their operations that appeared, based on our preliminary calculations, to indicate that current penetration rates are greater than the 30 percent threshold. Ten cable franchises appeared to have a penetration rate exceeding 70 percent—a full 40 percentage points above the legislated low-penetration threshold.
- Forty of the 262 franchises in the FCC survey that had been classified as having effective competition by FCC also reported that the franchising authority was currently regulating basic service rates. This would not be in accord with the statutory requirement.

It is possible that such an inconsistency could occur because cable companies incorrectly completed FCC's survey in some fashion. Although the survey form asks each cable franchise whether it faces effective competition in the franchise area, those responses are not always consistent with information maintained by FCC regarding whether there has been an official finding of effective competition. When FCC's information conflicts with the survey response, FCC overrides the answer provided by the cable franchise. We found that FCC staff overrode the survey responses on effective competition for 24 percent of all franchises in its 2002 survey. Also, we have searched for instances in which franchising authorities sought to have a finding of effective competition reversed. We found two instances in which FCC reversed a finding of effective competition. However, in one of these instances, involving ten franchises in Delaware, some of the franchises appear to remain classified as having effective competition even though FCC reversed the finding in 1999. In its 2002 Report on Cable Industry Prices, FCC acknowledges that the classification of the competitive status of some franchises may not reflect current conditions. Some franchises that face competition may not have filed a petition, and therefore are not classified as facing effective competition. Also, some franchises may have previously met the criteria for a finding of effective competition, but because of changing circumstances may no longer meet the criteria yet remain classified as facing effective competition.
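Taken together, the four statutory tests reduce to a simple disjunction that can be checked against reported operations data. A sketch for illustration only (parameter names are ours, and the competitive provider test is simplified by assuming the two-unaffiliated-companies condition holds wherever comparable service is offered):

```python
def effective_competition(households, cable_subs, rival_offer_share,
                          non_largest_take_share, municipal_offer_share,
                          lec_offers_video):
    """Apply the four statutory tests; any one satisfied yields a
    finding of effective competition. A simplified illustration of
    the statutory criteria, not FCC's actual adjudication process."""
    low_penetration = cable_subs / households < 0.30
    competitive_provider = (rival_offer_share >= 0.50
                            and non_largest_take_share >= 0.15)
    municipal = municipal_offer_share >= 0.50
    return (low_penetration or competitive_provider
            or municipal or lec_offers_video)

# A hypothetical franchise area with 70 percent cable penetration, no
# rival reaching half the area, and no LEC video offering fails all four:
print(effective_competition(10_000, 7_000, 0.20, 0.05, 0.0, False))  # False
```

Because FCC does not re-run such checks against the operations data that franchises already report, a finding made years earlier can persist after the underlying numbers—for example, a penetration rate that has since climbed well past 30 percent—no longer support it.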
We are conducting additional work on the issues discussed today, and a more complete analysis will be included in our final report, which we plan to issue in October 2003. In addition to the topics discussed today, we will be providing a more comprehensive analysis of the factors underlying recent cable rate increases, the impact of competition on cable rates and service, and cable tiering issues. Mr. Chairman, this concludes my prepared remarks. We would be pleased to answer any questions you or other members of the Committee may have. For questions regarding this testimony, please contact William B. Shear on (202) 512-4325 or at shearw@gao.gov. Individuals making key contributions to this testimony included Amy Abramowitz, Mike Clements, Keith Cunningham, Michele Fejfar, Wendy Turenne, Mindi Weisenbloom, and Carrie Wilks.

Over 65 percent of American households currently subscribe to cable television service. There has been increasing concern that cable television rates have been rising faster than the rate of inflation for the last few years. As required, on a yearly basis, FCC prepares a report on cable rates in areas that face and those that do not face effective competition—a term defined by statute. For information used in this report, FCC maintains information on the competitive status of cable franchises and annually surveys a sample of cable franchises. GAO examined (1) the reliability of information that cable companies provided to FCC in its annual survey regarding cost factors underlying cable rate increases and (2) FCC's process for updating and revising cable franchise classifications as to whether they face effective competition. Based on interviews with 100 randomly sampled cable franchises that completed FCC's 2002 survey, GAO's preliminary analysis indicates that FCC's survey may not be a reliable source of information on the cost factors underlying cable rate increases.
Because of the following problems, GAO found that there are inconsistencies in how companies completed the survey. FCC provided minimal instructions or examples on how the portion of the survey covering the cost factors underlying rate increases should be completed. It appears that cable companies made varying assumptions on how to complete the survey. FCC's survey required that cable companies fully allocate their reported annual rate increase to various cost and non-cost factors. GAO's preliminary findings indicate that there was inadequate guidance on how to achieve this requisite balance, and cable companies approached the question in varying ways. Based on preliminary work, GAO found that FCC's classification of cable franchises as to whether they face effective competition might not accurately reflect current conditions. GAO found instances where information in the survey responses of some franchises would suggest that the criteria for an effective competition finding that was made in the past might no longer be present. However, a finding of effective competition is only changed if a formal process is instituted. GAO found only two instances where a petition was filed that resulted in a reversal of an effective competition finding.
Medicare is the nation’s largest single payer for health care. In 1995, it spent an estimated $177 billion, or 12 percent of the federal budget, on behalf of more than 37 million elderly and disabled people. The Congressional Budget Office (CBO) projects that, under current program law, program spending will almost double in the next 6 years to an estimated $332 billion by 2002. Approximately 90 percent of Medicare beneficiaries obtained services on an unrestricted fee-for-service basis; that is, patients chose their own physicians or other health care providers, with bills sent to the program for payment. This set-up mirrored the nation’s private health insurance indemnity plans, which prevailed until the 1980s. Since then, many changes have taken place in the financing and delivery of health care. Large health care purchasers have used leverage on hospitals and other providers to obtain lower prices. Private payers, including large employers, use an aggressive management approach to control health care costs. HCFA is Medicare’s health care buyer. HCFA’s pricing of services and controls over utilization have been carefully prescribed by interrelated statute, regulation, and agency policy. HCFA contracts with about 70 companies—such as Blue Cross and Aetna—to handle claims screening and processing and to audit providers. Each of these commercial contractors works with its local medical community to set coverage policies and payment controls in addition to those that have been established nationally by HCFA. As a result, billing problems involving waste, fraud, and abuse are handled, for the most part, at the contractor level. This arrangement was prompted when the program was established in the mid-1960s by concerns that the federal government, which lacked extensive claims processing expertise and experience, would prove incapable of providing service comparable to that of private insurers. 
This arrangement also means that insurance companies responsible for reviewing the appropriateness of Medicare claims are, through the medical networks they own, billing the program. At a time when the volume of Medicare claims has exceeded 800 million a year, Medicare is being billed increasingly by entrepreneurial entities rather than by medical professionals. Although growth rates for inpatient hospital and physician services have moderated since the 1980s, Medicare spending remains high. Combined spending for these services amounted in 1994 to $120 billion—nearly three-fourths of total Medicare spending. The sheer size of these categories means that each percentage point of growth represents hundreds of millions of dollars. Smaller categories of services, however, have displayed much more rapid growth through the 1990s, helping to push total Medicare spending growth into double digits. Home health agency (HHA) and skilled nursing facility (SNF) services each grew at an average annual rate of 28 percent from 1990 through 1996. Private insurers and employer purchasers have sought to stem such health cost escalation by shifting from their role as passive payers to become more prudent managers of health care costs. Some 90 percent of health plans—from fee-for-service to managed care—actively manage costs through price competition and negotiation and utilization monitoring techniques. By contrast, Medicare's reimbursement policies and claims payment activities have not been adapted to the contemporary marketplace and today's demands for fiscal discipline in public programs. The home health and SNF spending categories, in particular, illustrate the damaging effects of reimbursement policies that fail to incorporate effective pricing and utilization management techniques. In the case of home health services, for example, Medicare pays HHAs on the basis of costs but uses few tools to determine whether the costs are reasonable.
Also, physicians are not required to see the patients for whom they sign plans of care and are not held accountable if they approve inappropriate levels of service. Medicare does not require HHAs to provide beneficiaries or physicians with information on the home health services billed on their, or their patients', behalf. The Medicare contractors, moreover, pay 97 percent or more of home health claims without review. Even when reviews are done, Medicare claims processing contractors rarely visit HHAs or beneficiaries to verify the actual and appropriate provision of services. One consequence of such neglect is the escalation of visits per Medicare beneficiary, which rose an average of about 20 percent a year from 1989 to 1994. In July 1995 we reported that the largest privately held HHA in the United States, which was being investigated for fraud, obtained 95 percent of its total revenues from Medicare. Current and former employees told us medical records were altered and forged to ensure continued or prolonged home health care visits. Services were provided to patients who were not homebound—for example, one who routinely drove a vehicle to go grocery shopping and one who walked a few blocks alone daily to eat at the local senior citizens' center. Wide variation also exists in the services provided across geographic areas and provider types. For example, in 1993 patients in southeastern states received on average more than twice as many visits as patients in northwestern states. Furthermore, diabetics received an average of about twice as many visits from proprietary HHAs as from voluntary or government-run agencies. Skilled nursing facilities represent another area in which Medicare's unguarded reimbursement policies have been exploited.
In this setting, a population with extensive health care needs grouped together at a single location offers unscrupulous providers the opportunity for volume billing, and Medicare often does not look for warnings of egregious overutilization or rapid increases in billings. Under Medicare's provisions for reimbursement, providers can bill Medicare directly, without the SNF or attending physician affirming whether the items were necessary or provided as claimed. In other words, medical equipment suppliers, providers of rehabilitation therapy, and providers of X rays and other diagnostic tests can determine levels of services and bill Medicare with little or no oversight. In addition, Medicare's automated systems do not capture data in a way that would practically allow them to flag indications of improbably high charges or levels of services at individual facilities. This is in part because the data are not organized to report which beneficiaries are in nursing homes. In January of this year we reported that a wide array of provider types—including physicians, optometrists, psychiatrists, laboratories, and medical equipment suppliers—have fraudulently or otherwise inappropriately billed Medicare for services and supplies furnished to nursing facility residents. The wrongdoing has generally focused on billing Medicare for unnecessary or undelivered services, or misrepresenting a service to obtain reimbursement. The investigations we reviewed probed activities in over 40 states, with many providers operating in multiple states. Medicare also has few means of ensuring that the charges it pays are reasonable. This is particularly pertinent to rehabilitation therapy services, which account for 30 percent of SNF costs. Specifically, Medicare places no absolute dollar limits on reimbursements for occupational or speech therapy, and charges for therapy services are not linked through billing codes to the amount of time spent with patients or the treatment provided.
In other words, Medicare has no easy way to limit the amount it will pay for occupational or speech therapy or to determine whether a charge is for 15, 30, or 60 minutes of treatment. Absent any benchmarks, and with limited resources available for auditing, it is largely infeasible for Medicare contractors to judge whether therapy providers have overstated their costs. Last year we reported that Medicare had been charged as much as $600 for an hour of therapy services. HCFA has acknowledged the problem and recently estimated that implementing salary equivalency guidelines for speech and occupational therapy, in conjunction with adjusting other salary guidelines, could save $1.4 billion over the next 6 years. To date, however, the salary guidelines have not been established. Although occupational therapists in SNFs earn on average $23 per hour, we recently found in one contractor’s files that more than 25 percent of submitted charges for one unit (undefined) of occupational therapy exceeded $195, and some approached $1,500 per unit. Under Medicare rules for reimbursing SNFs, the problem of overpaying for rehabilitation therapy services becomes compounded. That is, Medicare pays SNFs a portion of their overhead expenses, based on the percentage of their total Medicare-related business. The higher the Medicare-related payments to rehabilitation agencies (or other outside contractors), the more Medicare business an SNF can claim, and the higher the percentage of its overhead that can be charged to the program. Further, as noted by the Prospective Payment Assessment Commission (PROPAC), SNFs may cite high use of ancillary services, such as therapy, to justify an exemption from routine service cost limits, thereby increasing their payments for routine (bed, board, nursing) services. Allowing payment problems to continue unchecked results in billions of dollars of unnecessary spending. HCFA has been aware of the rehabilitation therapy overcharging problem since 1990. 
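The overhead-allocation mechanism described above is simple proportional arithmetic, which is what makes it exploitable. A deliberately simplified sketch (actual Medicare cost-report apportionment rules are far more detailed, and the dollar figures are hypothetical):

```python
def medicare_overhead_share(total_overhead, medicare_revenue, total_revenue):
    """Overhead reimbursed by Medicare, apportioned by the Medicare
    share of the SNF's total business. A simplified illustration of
    the incentive described in the text, not actual cost-report rules."""
    return total_overhead * medicare_revenue / total_revenue

base = medicare_overhead_share(1_000_000, 2_000_000, 5_000_000)
# An extra $1 million of Medicare-billed therapy raises both revenue
# figures—and shifts another $100,000 of overhead onto the program:
inflated = medicare_overhead_share(1_000_000, 3_000_000, 6_000_000)
print(int(base), int(inflated))  # 400000 500000
```

The same billings that inflate the overhead share can also, as PROPAC noted, support an exemption from routine service cost limits, compounding the payoff.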
In 1993 HCFA began studies to develop averages for therapists' salaries. Its most recent analysis is expected to be completed some time this summer. Given the usual time involved in the federal notification and publication requirements for changing Medicare prices, salary equivalency guidelines—which are key to Medicare's determination of reasonable costs—are unlikely to be implemented before the middle of 1997 at the earliest. This situation is consistent with HCFA's past experience of taking years to adjust excessively high payment rates. It took almost 3 years, for example, to lower the price of an item for which Medicare paid up to four times what consumers paid at the local drug store. HCFA can adjust prices that are inherently unreasonable, but its authority to do so is very limited and involves a complex set of procedures that take a long time to complete. Because of the time and resources involved, HCFA only occasionally uses this process. In an August 1995 report, we showed that Medicare paid more than retail prices for 44 types of surgical dressings. Under the Omnibus Budget Reconciliation Act (OBRA) of 1987, however, even the unwieldy inherent reasonableness authority to change these prices was effectively eliminated. Before 1987, individual Medicare contractors had the authority to adjust prices to reflect local market conditions using a publication and notification process that could be completed in less than 90 days. In a letter to a congressional subcommittee, the HHS Inspector General last year characterized as "absurd" the situation limiting HCFA's ability to make timely adjustments to payment levels. Because of strict statutory constraints and its own burdensome regulatory and administrative procedures, HCFA is slow to address overpricing and overutilization problems. As we reported to the Congress last September, many of the tools Medicare's contractors use to manage their commercial insurance plans are not authorized for use in the Medicare program.
In stark contrast to private payers, HCFA and its contractors generally cannot use such utilization controls as prior approval or case management to coordinate and monitor expensive services and specialist care; encourage the use of "preferred providers"—those who meet utilization, price, and quality standards; or negotiate with select providers for discounts, promptly change prices to match those available in the market, or competitively bid prices. Not surprisingly, Medicare's ability to emphasize cost efficiency in its dealings with suppliers, physicians, and institutions that habitually provide excessive services is limited, and for certain services Medicare pays higher prices than its private sector counterparts. (See app. I for details on commonly used private sector strategies and their applicability to Medicare. See also chapter 11 of the Physician Payment Review Commission's (PPRC) 1996 Annual Report to Congress.) The recognition that Medicare needs to change its role from largely a claims processor to prudent manager is beginning to take shape in HCFA itself as well as in pending legislation passed by the House of Representatives last month. For example, HCFA has planned, among several new initiatives, a demonstration testing the concept of competitive bidding for certain supplies, such as oxygen, hospital beds, and urological and incontinence products; an improvement on earlier case management experiments by which primary care physicians would, for example, provide comprehensive management for beneficiaries with specific diagnoses, such as diabetes, hypertension, or congestive heart failure, for which Medicare would reimburse them with a bundled, capitated payment as is currently done on a monthly basis for end-stage renal disease patients; and a demonstration in selected locations that allows beneficiaries to join preferred provider organization health plans, which are not currently available under Medicare. The pending legislation, for its part, would give HCFA new tools to protect program dollars.
In particular, HCFA would have the authority to contract directly with companies specializing in utilization review and fraud detection to monitor and adjudicate claims. In essence, HCFA could contract with the companies best suited to perform medical, utilization, and fraud reviews; audit cost reports; revisit payment decisions and recover overpayments; provide education on payment integrity and benefit quality assurance issues; and provide more specific guidance on coverage of medical equipment and supplies. Increased flexibility and an accompanying assured funding stream, such as that proposed in this legislation, would significantly enhance HCFA’s ability to curb overutilization and inappropriate billings. Despite these initiatives, however, important tools would still be unavailable to the Medicare program. For example, HCFA uses profiling—that is, statistical analyses—to identify “outlier” providers whose practice patterns differ markedly from those of their peers. While the private sector is free to use profiling results to provide financial rewards or penalties (in the form of exclusion from preferred provider networks), HCFA lacks the authority to do so. In addition, HCFA and its contractors have no viable statutory authority to require prior approval of select procedures. Most important, HCFA does not have the authority needed to promptly correct overpricing problems. The problems facing Medicare confront private insurers as well, but they are equipped with a larger and more versatile inventory of health care management strategies than HCFA currently has. These strategies may not be deployable in every aspect, but in general they suggest ways to make Medicare more cost effective. Commercial contractors, which play a key role in administering Medicare, routinely employ management-of-care approaches in their capacity as private insurers. 
If they applied similar approaches to Medicare, the government might be able to avoid spending substantial sums unnecessarily.

1. The Congress should enact funding and contractor reform provisions similar to those contained in H.R. 3103. Such reforms would give HCFA the flexibility to hire the private sector expertise necessary to apply the best health cost management practices.
2. HCFA needs to target Medicare's high-cost, high-utilization areas for running demonstrations to apply such strategies as the use of case management and companies specializing in utilization review. For example, HCFA could identify, as the focus of the demonstrations, geographic areas with particularly high home health or SNF costs per Medicare beneficiary.
3. The Congress should give HHS the flexibility to make prompt adjustments to fee schedules when overpriced services and supplies are identified. For example, Medicare should be able to reduce fee schedule prices for surgical supplies within 90 days, similar to what was customary before OBRA 1987.

We have included as appendix II a list of GAO recommendations recently made to correct specific Medicare payment problems. Mr. Chairman, this concludes my statement. I will be pleased to answer any questions. For more information on this testimony, please call Edwin P. Stropko, Associate Director, at (202) 512-7119. Other major contributors included Audrey Clayton, Patricia Davis, Hannah Fein, and Barry Tice.

[Appendix I table omitted; it compares commonly used private sector strategies with their applicability to Medicare, noting where HCFA is concerned about a strategy's adaptability and relevance to Medicare.]

Cited below are our recommendations and matters for congressional consideration addressing specific reimbursement system and payment control problems. The emphasis of Medicare's home health benefit program has recently shifted from primarily posthospital acute care to more long-term care.
At the same time, HCFA's ability to manage the program has been severely weakened by coverage changes mandated by court decisions and a decrease in the funds available to review HHAs and the care they provide. The Congress may wish to consider whether the Medicare home health benefit should continue to become more of a long-term care benefit or whether it should be limited primarily to a posthospital acute care benefit. The Congress should also consider providing additional resources so that controls against abuse of the home health benefit can be better enforced. To curtail the practice of giving providers unauthorized access to beneficiary medical records, the Congress should authorize the HHS OIG to establish monetary penalties that could be assessed against nursing facilities that disclose information from patients' medical records not in accord with existing federal regulation. We recommend that the Secretary of HHS direct the Administrator of HCFA to (1) establish, for procedure billing codes by provider or beneficiary, thresholds for unreasonable cumulative levels or rates of increase in services and charges; (2) require Medicare carriers to implement automated screens that would suspend for further review claims exceeding those thresholds; and (3) undertake demonstration projects designed to assess the relative costs and benefits of alternative ways to reimburse nursing facilities for part B services and supplies; these alternatives should include such options as unified billing by the nursing facility and some form of capped payment.
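The automated screens recommended here amount to a cumulative threshold filter over incoming claims. A minimal sketch (the claim fields, billing code, and threshold value are hypothetical illustrations, not Medicare data structures):

```python
def screen_claims(claims, thresholds):
    """Sketch of the recommended automated prepayment screen:
    accumulate charges per (provider, billing code) pair and suspend
    for further review any claim that pushes the cumulative total
    past that code's threshold. Hypothetical fields and thresholds."""
    totals, suspended = {}, []
    for claim in claims:
        key = (claim["provider"], claim["code"])
        totals[key] = totals.get(key, 0) + claim["charge"]
        if totals[key] > thresholds.get(claim["code"], float("inf")):
            suspended.append(claim["id"])
    return suspended

claims = [
    {"id": 1, "provider": "HHA-1", "code": "97110", "charge": 900},
    {"id": 2, "provider": "HHA-1", "code": "97110", "charge": 400},
]
# The second claim pushes the provider's cumulative total past $1,000:
print(screen_claims(claims, {"97110": 1000}))  # [2]
```

A screen of this kind addresses the gap described earlier: Medicare's automated systems otherwise do not flag improbably high cumulative charges at individual facilities.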
We recommend that the Secretary of HHS direct the HCFA Administrator to develop policies and revise practices so that Medicare can (1) price services and procedures more competitively, (2) manage payments through state-of-the-art data analysis methods and use of technology, and (3) better scrutinize the credentials of vendors seeking to bill the program; examine the feasibility of allowing Medicare's commercial contractors to adopt for their Medicare business such managed care features as preferred provider networks, case management, and enhanced utilization review; and seek the authority necessary from the Congress to carry out these activities. Given the urgency of expediting Medicare program changes that could lead to substantial savings, the Congress may wish to consider directing the Secretary of HHS to develop a proposal seeking the legislative relief that would allow Medicare to participate more fully in the competitive health care marketplace. Such relief could include allowing the Secretary of HHS to set maximum prices on the basis of market surveys or, if the formal rulemaking process is preserved, allowing the Secretary to make an interim adjustment in fees while the studies and rulemaking take place. The Congress may also wish to consider options for granting relief from the funding declines in Medicare's anti-fraud-and-abuse activities. The Secretary should direct the Administrator of HCFA to require that bills submitted to fiscal intermediaries itemize supplies; develop and implement prepayment review policies as part of the process of implementing any new or expanded Medicare coverage; and establish procedures to prevent duplicate payments by fiscal intermediaries and carriers. The fee-schedule approach to setting prices provides a good starting point for setting appropriate Medicare prices. HCFA, however, needs greater authority and flexibility to quickly adjust fee-schedule prices when market conditions warrant such changes.
To allow Medicare to take advantage of competitive prices, the Congress should consider authorizing HCFA or its carriers to promptly modify prices for durable medical equipment and other medical supplies. For this to work effectively, however, HCFA or the carriers must devote adequate resources to routine price monitoring. The Secretary should direct the Administrator of HCFA to (1) set explicit limits to ensure that Medicare pays no more for therapy services than would any prudent purchaser; (2) strengthen certification requirements to better ensure that those entities billing Medicare are accountable for the services provided to beneficiaries; and (3) define billable therapy service units so they relate to the time spent with the patient. Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996). Fraud and Abuse: Providers Target Medicare Patients in Nursing Facilities (GAO/HEHS-96-18, Jan. 24, 1996). Fraud and Abuse: Medicare Continues to be Vulnerable to Exploitation by Unscrupulous Providers (GAO/T-HEHS-96-7, Nov. 2, 1995). Medicare Spending: Modern Management Strategies Needed to Curb Billions in Unnecessary Payments (GAO/HEHS-95-210, Sept. 19, 1995). Medicare: Antifraud Technology Offers Significant Opportunity to Reduce Health Care Fraud (GAO/AIMD-95-77, Aug. 11, 1995). Medicare: Excessive Payments for Medical Supplies Continue Despite Improvements (GAO/HEHS-95-171, Aug. 8, 1995). Medicare: Adapting Private Sector Techniques Could Curb Losses to Fraud and Abuse (GAO/T-HEHS-95-211, July 19, 1995). Medicare: Allegations Against ABC Home Health Care (GAO/OSI-95-17, July 19, 1995). Medicare: Modern Management Strategies Needed to Curb Program Exploitation (GAO/T-HEHS-95-183, June 15, 1995). Medicare: Tighter Rules Needed to Curtail Overcharges for Therapy in Nursing Homes (GAO/HEHS-95-23, Mar. 30, 1995). High-Risk Series: Medicare Claims (GAO/HR-95-8, Feb. 1995). 
Medicare: Inadequate Review of Claims Payments Limits Ability to Control Spending (GAO/HEHS-94-42, Apr. 28, 1994). Health Care Reform: How Proposals Address Fraud and Abuse (GAO/T-HEHS-94-124, Mar. 17, 1994). Medicare: Greater Investment in Claims Review Would Save Millions (GAO/HEHS-94-35, Mar. 2, 1994).

GAO discussed strategies to curb Medicare spending, which has grown by over 10 percent a year since 1989, twice the rate of the national economy.
GAO noted that: (1) Medicare has not used tools used by private health care payers to manage and improve its utilization, reimbursement, and claims policies and procedures; (2) Medicare's smaller categories of services, which are typically less managed and monitored, have displayed much higher growth than its larger categories of services; (3) the Health Care Financing Administration (HCFA) has been slow to address overpricing and overutilization problems, sometimes taking years to adjust excessively high payment rates; (4) strict statutory constraints and its own burdensome regulatory and administrative procedures hinder HCFA from using such private-sector management tools as case management, preferred providers, or discount negotiation; (5) in an effort to change its role from claims processor to prudent manager, HCFA has initiated demonstrations to explore its use of competitive bidding for certain supplies, case management, and preferred providers; and (6) proposed legislation could give HCFA the funding and flexibility it needs to better manage its contractors and services.
As shown in figure 1, the CT Office has evolved over the last two decades. In December 2010, the QDDR recommended the creation of the CT Bureau to supersede the CT Office. The CT Office, along with numerous other offices, was attached to State's Office of the Secretary, which provided management and administrative support to the CT Office. The CT Office relied on the Secretary of State's Office of the Executive Secretariat for functions such as budgeting and human resources. According to a report by State's Office of Inspector General in 2012, the CT Office's needs for management support services, such as human resources, could not be met in a timely and efficient fashion because the Office of the Executive Secretariat had other responsibilities in addition to providing administrative support to the CT Office. According to State, one reason for elevating the CT Office to a bureau was that the office's responsibilities for counterterrorism strategy, policy, operations, and programs had grown far beyond the original coordinating mission. The QDDR stated that the new CT Bureau, when established, would build on and expand the CT Office's activities in three areas; it would (1) play a key role in State's efforts to counter violent extremism, (2) strengthen State's ability to assist foreign partners as they build their own counterterrorism capabilities, and (3) engage in multilateral and bilateral diplomacy to advance U.S. counterterrorism goals. In addition, the QDDR mentioned that bureau status would allow for more effective coordination with other agencies, including the Departments of Defense, Homeland Security, and Justice, and with the Intelligence Community. Figure 2 shows the organizational structure of the CT Office in 2011. As the new CT Bureau began organizing itself, State's Office of Inspector General (OIG) conducted an inspection of the CT Bureau in early 2012.
The OIG recognized that the inspection took place as the bureau was implementing an internal reorganization and reported that this process entailed creating an executive office, adding new staff to develop capabilities, and shifting staff around, among other things. In its report, the OIG stated that the CT Bureau's intended goals included improving communications and coordination through measures such as integrating policy and program staff and creating more efficient and transparent flows of information, including through a new tasking and tracking system. The OIG made 13 formal recommendations addressing issues such as staffing, training, and reorganization; 10 of the recommendations had been closed as implemented as of June 2015, according to an OIG official. Since transitioning to a bureau in January 2012, the CT Bureau has updated its mission statement to focus on partnerships and building the capacity of partner nations to counter terrorism. The current mission of the CT Bureau is to promote U.S. national security by taking a leading role in developing coordinated strategies and approaches to defeat terrorism abroad and securing the counterterrorism cooperation of international partners. The CT Bureau identified five principal areas of responsibility: (1) U.S. counterterrorism strategy and operations, (2) countering violent extremism, (3) homeland security coordination, (4) capacity building, and (5) counterterrorism diplomacy. According to CT Bureau officials, these responsibilities are reflected in the types of programming that the bureau carries out. Like its predecessor office, the CT Bureau manages a range of programs, initiatives, and activities to combat terrorism around the world.
According to the CT Bureau, it manages and oversees six key programs:

Antiterrorism Assistance: implemented by the Department of State's Bureau of Diplomatic Security, this program helps partner nations build capacity across a wide spectrum of counterterrorism law enforcement skills, offering training, equipment, mentoring, and technical assistance.

Countering Violent Extremism (CVE): seeks to deny terrorism new recruits by reducing sympathy and support for violent extremism. The program supports targeted counter-recruitment interventions for at-risk communities in priority countries and aims to build resilience against violent extremist narratives. It also builds the capacity of partner nations and civil society organizations to counter violent extremism.

Counterterrorism Engagement: builds the capacity of multilateral organizations and regional bodies, including the Global Counterterrorism Forum, to promote counterterrorism cooperation and best practices.

Counterterrorism Finance (CTF): assists partner nations to build and strengthen effective legal frameworks and regulatory regimes, establish active and capable Financial Intelligence Units, strengthen the investigative skills of law enforcement entities, and bolster prosecutorial and judicial development.

Regional Strategic Initiative: fosters regional cooperation and deepens partnerships to address top-priority terrorism challenges.

Terrorist Interdiction Program: provides partner nations with biometrics technology and training to identify, disrupt, and deter terrorist travel at airports and other major ports of entry.

In addition, the CT Bureau manages or is responsible for other counterterrorism-related efforts, which are described in appendix II. In the transition from CT Office to CT Bureau in 2012, some organizational changes occurred, such as a reduction from five to four Deputy Coordinators.
The creation of the bureau elevated the role of strategic planning and metrics, and established a new policy and guidance unit to ensure that all CT programs and activities, to include counterterrorism programs implemented with foreign partners, conform to law and policy and reflect the counterterrorism priorities of the Secretary of State and the Coordinator for Counterterrorism. Additional changes to the CT Bureau's organizational structure occurred starting in 2014, following the confirmation of an ambassador as the Coordinator of the bureau. The Coordinator initiated a strategic review of the CT Bureau's programs and what they were accomplishing to help form a clear picture of priorities, threats, and where the bureau's efforts and funding should be directed. The strategic review led to several key changes in the bureau's structure, and the CT Bureau's overall programmatic focus shifted to a regional or geographic approach. The function of program monitoring, oversight, and evaluation was elevated, and a separate Office of Programs was created to monitor the day-to-day activities of counterterrorism programming at the program management level. Figure 3 depicts the new organizational chart of the CT Bureau as of June 2015. The CT Bureau is allocated funding for (1) foreign assistance programming that the bureau oversees as well as (2) the operations of the bureau. The foreign assistance allocations to the CT Bureau fund counterterrorism-related programs that the bureau oversees. As shown in figure 4, from fiscal years 2011 through 2014, the CT Bureau was allocated a cumulative total of $539.1 million for six counterterrorism-related programs: (1) Antiterrorism Assistance, (2) Countering Violent Extremism, (3) Counterterrorism Engagement, (4) Counterterrorism Finance, (5) Regional Strategic Initiative, and (6) Terrorist Interdiction Program. The CT Bureau requested $104.4 million in allocations for fiscal year 2015 for these programs.
State officials were unable to provide actual allocations for fiscal year 2015 because, as of June 2015, they were still working to finalize them. For funding allocations by program, fiscal year, and funding account, see appendix IV. When the CT Bureau became operational in early 2012, its operations were no longer part of the executive management of the Office of the Secretary of State. The CT Bureau and the Office of the Secretary of State worked with State's Bureau of Budgeting and Planning to determine the size of the budget needed to support the new CT Bureau, according to CT Bureau officials. The bureau's allocated resources also include funding for its core operations, which comes from two accounts: Diplomatic and Consular Programs and Worldwide Security Programs. Figure 5 shows the bureau's total allocations for its overall management and operations since fiscal year 2012. These allocations increased from $11.7 million in fiscal year 2012 to $14.7 million in fiscal year 2013, as the bureau was being established. The allocations then decreased to $13.1 million in fiscal year 2014. Appendix V provides information on the CT Bureau's total obligations for its overall operations since fiscal year 2012. The CT Bureau's number of authorized full-time equivalent (FTE) staff positions has grown annually since fiscal year 2011, and the bureau has recently undertaken efforts to reduce a persistent staffing gap. The bureau's number of authorized FTEs grew from 66 in fiscal year 2011 to 96 in fiscal year 2015, an increase of more than 45 percent. Figure 6 shows the number of authorized FTEs within the bureau for fiscal years 2011 to 2015, along with the number of FTE positions that were filled. While the bureau's authorized level of FTEs for fiscal year 2015 is 96 positions, it had 22 vacancies as of October 31, 2014.
The percentage of vacancies in the bureau has ranged from 17 percent to 23 percent in fiscal years 2011 to 2015. According to the CT Bureau, these vacancies have included both staff-level and management positions. For example, recent vacancies have included a management position in the Office of Homeland Security and program analyst positions in the Office of Programs. Since the staffing snapshot reflected in figure 6, the bureau has reported that it has made efforts to fill vacancies. The Principal Deputy Coordinator for Counterterrorism testified before Congress on June 2, 2015, that the CT Bureau had reduced its FTE vacancies to 11 positions. However, we have been unable to verify that 4 of the reportedly filled positions have been filled because State has not provided sufficient documentation. In addition to the authorized FTEs, the CT Bureau also has non-FTE positions, which include contractors; interns; fellows; detailees; and "When Actually Employed," the designation applied to retired State employees rehired under temporary part-time appointments. For fiscal years 2013, 2014, and 2015, respectively, the CT Bureau had 92, 78, and 69 such positions, in addition to its authorized FTEs, according to the CT Bureau. According to State, to meet the personnel requirements associated with standing up the CT Bureau, the bureau received an authorized increase of 31 positions covering fiscal year 2012 to fiscal year 2014. According to the CT Bureau, about 7 of these positions were initially filled within the first 6 months after the bureau was established. Filling many of the remaining positions was postponed until the current Coordinator for Counterterrorism had time to assess the bureau's needs and priorities, according to the CT Bureau. CT Bureau officials stated that since fiscal year 2012, the authorized positions have been reallocated each year and moved around within the CT Bureau based on the bureau's needs.
When the Coordinator for Counterterrorism, following the strategic review, deemed that more staff might be needed in newly created units, some of the authorized positions were used for that purpose. For example, 1 position was used to fill a management-level position in the CT Bureau’s Office of Strategy, Plans, and Initiatives and another to fill a senior-level position in the Foreign Terrorist Fighters Unit, according to CT Bureau officials. For additional information regarding the CT Bureau’s workforce planning efforts, see appendix VI. The CT Bureau utilized various means to assess its performance, including performance assessments and program evaluations. Specifically, in fiscal years 2012 and 2013, the CT Bureau established indicators and targets for its foreign assistance–related goals identified in the bureau’s first multiyear strategic plan. The bureau also reported results achieved toward each established indicator. In addition, since being elevated to a bureau in fiscal year 2012, the CT Bureau has completed four evaluations of counterterrorism-related programs it oversees, but none have focused on CVE programming—a priority for the bureau. The completed evaluations resulted in 60 recommendations, and the CT Bureau reported having implemented about half of the recommendations as of June 2015. A standard practice in program management is to complete actions in response to recommendations from evaluations within established time frames. However, the CT Bureau has not established time frames for addressing the remaining recommendations. Without specific time frames for completing actions in response to recommendations from evaluations, it will be difficult for the bureau to ensure that needed programmatic improvements are made in a timely manner or to hold its implementing partners accountable for doing so. The CT Bureau assessed its progress toward achieving its foreign assistance–related goals in fiscal years 2012 and 2013, as required by State policy. 
That policy requires bureaus to respond to an annual department-wide data call for foreign assistance–related performance information. Specifically, bureaus must identify indicators and targets for their foreign assistance–related goals, as defined in their multiyear strategic plans, and report results achieved toward each indicator for the prior fiscal year. As shown in figure 7, the CT Bureau identified four foreign assistance–related goals in its first multiyear strategic plan and established quantitative indicators and corresponding targets for each of those goals. It also reported results achieved for each indicator. According to these results, the CT Bureau generally met or exceeded its targets or the baseline when no target existed. Since being elevated to a bureau in fiscal year 2012, the CT Bureau has completed four evaluations of counterterrorism-related programs it funds and oversees. The number of completed program evaluations meets the number required by State's February 2012 evaluation policy. The evaluations were completed during fiscal years 2013 and 2014 and focused primarily on programs providing training courses to law enforcement officials of partner nations, such as the Antiterrorism Assistance program in Morocco and Bangladesh. Standard practices in program management include, among other things, establishing specific time frames for addressing recommendations from program evaluations. For example, internal control standards for the federal government state that management should (1) promptly assess the findings and recommendations from evaluations, (2) determine and complete actions in response to the findings and recommendations from evaluations within established time frames, and (3) record actions taken on recommendations in a timely and accurate manner.
State’s January 2015 evaluation guidance, which provides specific criteria and guidelines for evaluating State programs, also recognizes the need for bureaus to track and address recommendations from evaluations. The four program evaluations the CT Bureau completed during fiscal years 2013 and 2014 resulted in 60 recommendations. In response to questions during the course of our review, CT Bureau officials developed action plans to describe the status of efforts to address the 60 recommendations. On the basis of our review of these action plans, the CT Bureau reported having implemented about half of the recommendations (28 of 60) as of June 2015. The bureau had put on hold or decided not to implement 4 recommendations. The remaining 28 recommendations were still being considered or were in the process of being implemented, or the bureau had made a commitment to implement them. CT Bureau officials said that program officers are assigned responsibility for following up on recommendations that affect their portfolio but that the bureau does not have any policy or other guidance outlining the timing for addressing recommendations from evaluations. Further, according to bureau officials, the bureau does not have a system for assigning time frames for the implementation of recommendations. While the action plans are a positive first step to help the bureau monitor and track its progress in implementing recommendations, they do not address the need for the bureau to establish time frames for addressing recommendations from evaluations. Without specific time frames for completing actions in response to recommendations from evaluations, it will be difficult for the bureau to ensure that needed programmatic improvements are made in a timely manner or to hold its implementing partners accountable for doing so. We found that activities between the CT Bureau and other bureaus within State as well as with other U.S. 
government agencies on counterterrorism programs, specifically the CVE and CTF programs, were generally in line with six of the seven key practices that GAO has identified for interagency collaboration (GAO-12-1022) in the areas of (1) outcomes and accountability, (2) bridging organizational cultures, (3) leadership, (4) clarity of roles and responsibilities, (5) resources, and (6) written guidance and agreements. We did not review one additional key collaboration practice, which covers participants, because doing so would have required taking a comprehensive look across all the State bureaus and other U.S. government agencies to ensure that all the relevant participants in counterterrorism efforts were included and would have required an evaluation of their relevant resources, skills, and abilities to contribute, which was outside the scope of this review. Outcomes and accountability. Having defined outcomes and mechanisms to track progress can help shape a collaborative vision and goals. The CT Bureau and its partners defined intended outcomes generally as collaborating on policy and programming decisions, sharing information, and ensuring that there is no duplication of existing or planned initiatives. When working with other U.S. government agencies, the CT Bureau generally has laid out the intended outcomes of coordination efforts in interagency agreements. We found that within State, the goals of coordination may be articulated by the CT Bureau through specific requests across regional or functional bureaus or messages defining and assigning specific tasks. For example, some State officials in regional bureaus mentioned that the CT Bureau has asked for input on CVE programming, specifically on reviewing CVE grant proposals from posts and nongovernmental organizations for respective regions to ensure that the programming was feasible and did not conflict with other initiatives.
Further, some State officials said that the CT Bureau reached out to them to request their expertise on identifying regional stakeholders that could provide input for the State portion of the February 2015 White House Summit on Countering Violent Extremism. With regard to coordination on CTF, we found that the interagency Terrorist Financing Working Group previously had provided a mechanism for the CT Bureau and other U.S. government agencies to hold regular meetings to discuss and reach consensus on CTF programming outcomes and goals. According to agency officials, interagency coordination on CTF programs occurred in the past mostly through the Terrorist Financing Working Group, which developed an annual list of priority countries based on an analysis by participating agencies, including various intelligence agencies. The working group was co-chaired by the former CTF unit within the CT Bureau and provided a mechanism for agency stakeholders to share information on CTF activities and find ways to avoid potential conflict between any initiatives. According to officials at other agencies, the Terrorist Financing Working Group has not met recently, and the CT Bureau is in the process of developing new interagency collaboration mechanisms for CTF programming, after the CTF unit within the bureau was disbanded during the recent reorganization of the bureau. Department of the Treasury officials stated that in the interim there is sufficient coordination between the bureau and stakeholders on CTF programming; however, they stated that if the Terrorist Financing Working Group remains on hiatus for some time, and no replacement mechanism for regular formal collaboration is initiated, stakeholders’ awareness of what other agencies are doing to counter terrorism financing could be hindered. 
We identified accountability mechanisms to monitor, evaluate, and report on results or outcomes of counterterrorism programming, especially when there is an interagency agreement between the CT Bureau and other U.S. government agencies on programming such as CVE. For example, an interagency agreement between the CT Bureau and United States Institute of Peace stipulates that reporting on the outcome of the programming is to include quarterly performance reports, interim reports, and final reports no later than 60 days after the termination of the agreement. In addition, some agency officials said that there are monitoring and evaluation mechanisms in place when implementing CT Bureau–funded programming. For example, Department of the Treasury officials stated that they are responsible for providing after-action reports to the CT Bureau on their program efforts related to activities with foreign governments aimed at strengthening anti-money laundering and combating the financing of terrorism regimes for CT Bureau-funded programming. Bridging organizational cultures. We found that while terminology may differ when discussing CVE, within State, some regional and functional bureau officials we spoke with stated that they use a common definition for CVE and apply the CVE strategy and policy that the CT Bureau has developed for CVE programming. Similarly, some officials in other U.S. government agencies we spoke with said they agree on common terms and outcomes of counterterrorism programming as ideas are discussed between the CT Bureau and the implementing agency, if the bureau funds a program or grant. For example, USAID officials said that while their agency’s definition of CVE differs from the CT Bureau’s, they have implemented programs on the ground in the Maghreb and the Sahel region on CVE capacity building using an agreed-upon common CVE definition. We also found that collaborating agencies reported frequent communication related to CVE programs. 
Specifically, we found that the frequency of communication between the CT Bureau and other State bureaus as well as other U.S. government agencies varied depending on the project or activity and ranged from daily to monthly interactions. For example, on one of the CT Bureau–funded CVE projects implemented by the United States Institute of Peace, the implementing program specialist estimated that he has been in touch with his counterpart at State once every 2 days on average over the life of the program, which focuses on developing training for an international CVE training institution in the United Arab Emirates. Leadership. For CVE and to some extent CTF, we found that officials at State and at other U.S. government agencies were generally aware of the agency or individual with leadership responsibility for the particular counterterrorism program. In addition, at the time of our review, officials said that they receive relevant and timely information on CVE-related programming from the bureau. Officials in State's regional bureaus stated that they are generally aware of when the CT Bureau would have the lead on counterterrorism issues versus the regional bureaus. For example, some of these officials said that if a given issue involved policy and cross-cutting counterterrorism areas, the CT Bureau would take the lead on meetings and assigning tasks, whereas if the issue was more regional in nature, the regional bureaus would take the lead with support from the CT Bureau. We found that while the leadership for the CTF program was generally clear in the past, at the time of our review there was some uncertainty among officials as to whom they should be working with on CTF programming, because of the recent reorganization of the CT Bureau. For example, some U.S. government officials said that there had been a dedicated CTF unit within the bureau that dealt with CTF programming and also coordinated the interagency Terrorist Financing Working Group.
However, with the elimination of the stand-alone CTF unit within the CT Bureau, it was not as clear to these officials who was the point of contact for CTF issues. At the time of our review, a few U.S. government agency officials said that it would be beneficial if the CT Bureau shared new contact information resulting from the recent reorganization; however, in the interim, the officials would still reach out to the point of contact that had been previously established for CTF issues. Clarity of roles and responsibilities. We found that there was general clarity on the roles and responsibilities of the participants collaborating on CVE and CTF counterterrorism programs with the CT Bureau. For example, several State officials said that for questions related to programs, such as CVE, they knew who their point of contact in the CT Bureau was and also what that person's portfolio encompassed. We also found that the roles and responsibilities of participants are generally clarified in writing in cases where there is an interagency agreement between the bureau and implementing U.S. government agency partners on a particular program. For example, such agreements outline the roles and responsibilities of the requesting agency and the servicing agency. When referring to assessing the performance of counterterrorism programming, officials from both within State and other U.S. government agencies said that they were clear on whose responsibility it was to monitor and evaluate CVE and CTF programming activities. State and other agency officials understood that the CT Bureau would be responsible for ensuring evaluations of counterterrorism programs are conducted; however, the monitoring and reporting of the outcomes of CT Bureau–funded programs would be the responsibility of the implementing U.S. government agency partner of that program, or the recipient of the funding. Resources.
According to information provided by the CT Bureau, it had provided funding for CVE or CTF programming activities to most of the agencies with which we spoke. The program funding for these activities came from the Nonproliferation, Antiterrorism, Demining, and Related Programs account and the Economic Support Fund account. According to information from the CT Bureau, from fiscal year 2011 to fiscal year 2014, it obligated over $11 million to agencies we spoke with for CVE programming and over $43 million to agencies we spoke with for CTF programming using interagency agreements or transfers. We found that, in cases where the CT Bureau funded U.S. government agencies on CVE or CTF programming, the funding mechanism was clear and laid out in the interagency agreements. Some agency officials told us that these agreements provide the vehicle whereby funding can be obligated from the CT Bureau to their agencies using a standard process. For example, Department of Homeland Security officials said that they have worked with the CT Bureau when receiving funding for cross-border financial training to be carried out in various countries and that the funding mechanism was clear. USAID officials stated that while USAID does not have interagency agreements with the CT Bureau, funding from the bureau for the CVE programs and activities that USAID administers comes through standard interagency transfers. Written guidance and agreements. We found that many of the U.S. government agencies we spoke with had formal interagency agreements with the CT Bureau on CVE- and CTF-related programming or activities. The interagency agreements described, among other things, the service to be provided, roles and responsibilities of each party, method and frequency of performance reporting, and accounting information for funding of the service provided. The interagency agreements we reviewed covered a multiyear agreement period.
For example, the CT Bureau has an agreement with the Department of Justice Office of Overseas Prosecutorial Development, Assistance and Training on funding a Resident Legal Advisor in Panama who works with the host country government to enhance the capacity of criminal justice actors and institutions to handle financial crimes involving money laundering and terrorist finance. The interagency agreement in place covers funding for the period of 2014 through 2019. According to CT Bureau officials, as the scope or activities of the CT Bureau–funded programming changes, the interagency agreements can be modified. In addition, the CT Bureau and the Department of Justice have an interagency agreement on development of community police training in Bangladesh. We found that many of the State bureaus we spoke with that coordinate with the CT Bureau on CVE and CTF programs did not have any written agreements, such as memorandums of understanding or written guidance laying out the terms of the collaboration. However, several State officials indicated that formalized agreements were not necessary, as the collaboration between bureaus within State is routine and the CT Bureau has been effective in sharing information pertaining to the CVE programs. Given the critical importance of preventing terrorist attacks on the United States and its interests around the world, State elevated the Office of the Coordinator for Counterterrorism to the Bureau of Counterterrorism in fiscal year 2012 to lead the department’s effort to counter terrorism abroad and to secure the United States against terrorist threats and violent extremism. The CT Bureau recently has undertaken steps to address long-standing staffing gaps and it has placed a priority on efforts to counter violent extremism, among other things, since being elevated to a bureau in fiscal year 2012. 
Although the bureau has completed some program evaluations, it has yet to evaluate its past or current CVE efforts, an action that could help it make more informed decisions about programmatic efforts to counter violent extremism abroad. Also, while the CT Bureau has completed four program evaluations, resulting in 60 recommendations to improve its programs, it has implemented only about half of those recommendations and has not established time frames for addressing the remaining recommendations. Without specific time frames for completing actions in response to evaluation recommendations, it will be difficult for the bureau to ensure the timely implementation of programmatic improvements that would benefit both the country-specific efforts evaluated and the broader global program. Given that countering violent extremism is a priority for the U.S. government in general and for State’s CT Bureau in particular, we recommend that the Secretary of State take steps to ensure that CVE program efforts abroad are evaluated. To improve State’s CT Bureau’s program management efforts, we recommend that the Secretary of State take steps to ensure the CT Bureau establishes specific time frames for addressing recommendations from program evaluations. We provided a draft of this report to the Departments of State, Defense, Justice, Homeland Security, and the Treasury, and to USAID, the United States Institute of Peace, and the Office of the Director of National Intelligence for their review and comment. We received written comments from State, which are reprinted in appendix VII. State and Treasury provided technical comments, which we incorporated as appropriate. State concurred with our recommendation to conduct an evaluation of its overseas Countering Violent Extremism program efforts.
Specifically, State indicated that it was currently assessing which programs would most benefit from third-party evaluation during the upcoming fiscal year and expected CVE to be included in its final determination. State also concurred with our recommendation to establish specific time frames for addressing recommendations from its program evaluations. State indicated that it will commit to setting a timetable for reviewing each recommendation by a third-party evaluator and implementing those actions that are deemed both implementable and worthwhile. We are sending copies of this report to the appropriate congressional committees, the Secretaries of State, Defense, Homeland Security, and the Treasury, the Attorney General of the United States, the USAID Administrator, the Director of National Intelligence, and the President, United States Institute of Peace. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. In addition to presenting information on the evolution of the Department of State’s (State) Bureau of Counterterrorism (CT Bureau) and changes in funding, the objectives of this review were to examine (1) how the CT Bureau’s staffing resources have changed since 2011, (2) the extent to which the bureau has assessed its performance since 2011, and (3) the extent to which the bureau’s coordination with U.S. government entities on selected programs is in line with key collaboration practices. To determine the extent to which the CT Bureau’s staffing resources have changed since 2011, we reviewed staffing allocation data from fiscal years 2011 to 2015. 
We received data on the total authorized full-time equivalent (FTE) positions, total established positions, and the total on-board positions for those fiscal years. State’s Office of the Executive Secretariat provided the FTE staffing data for fiscal years 2011 and 2012, and the CT Bureau, specifically the Executive Office, provided the FTE and other staffing data for fiscal years 2013 to 2015. To assess the reliability of the staffing data, we compared and corroborated information provided by State with staffing information in the Congressional Budget Justifications for those fiscal years, and we spoke with State officials regarding the processes they use to collect and verify the staffing data. On the basis of the checks we performed, we determined these data to be sufficiently reliable for the purposes of this report. To determine how the CT Bureau’s mission, organizational structure, and funding resources may have changed since 2011, we reviewed and analyzed State and CT Bureau documents pertaining to the mission, organization of the bureau, staffing, funding, and foreign assistance program allocations. We also interviewed State officials from the CT Bureau; Office of Inspector General; Office of U.S. Foreign Assistance Resources; and the Bureaus of Human Resources, Comptroller and Global Financial Services, Budget and Planning, and Administration. Specifically, for the mission statement, we reviewed the CT Bureau’s mission statement from 2011 and also from 2015, both reflected in bureau documents and on the bureau’s website, to ascertain what changes, if any, there have been to the bureau’s mission. We discussed with CT Bureau officials any changes to the bureau’s mission over time. To determine changes in the CT Bureau’s organizational structure, we reviewed the organizational chart of the Office of the Coordinator for Counterterrorism (CT Office) from 2011 as well as organizational charts of the CT Bureau covering 2012 through 2015.
We also spoke to officials representing every directorate and most offices that had been established within the CT Bureau to understand their roles and responsibilities and any impact that the bureau’s strategic review and reorganization has had on their portfolios. To depict changes in resources, we reviewed data on the CT Bureau’s operations allocations and obligations from 2011 to 2015 as well as foreign assistance allocations that the bureau has received over the same time period. The allocations and obligations for the bureau’s operations were provided by the CT Bureau, while the allocations for the foreign assistance programs were provided by State’s Office of U.S. Foreign Assistance Resources. We depicted the allocated funding information based on the funding accounts as well as the foreign assistance programs they cover. To assess the reliability of the funding and allocations data, we spoke to State officials regarding the processes they use to collect and verify the data. On the basis of the checks we performed, we determined these data to be sufficiently reliable for the purposes of this report. To examine the extent to which the CT Bureau has assessed its performance since 2011, we reviewed bureau strategic plans, performance reports, program evaluation reports, and action plans for evaluation recommendations, as well as State policy and guidance documents outlining performance reporting and evaluation requirements for bureaus. Specifically, to examine the CT Bureau’s performance reporting efforts, we reviewed the bureau’s multiyear strategic plans and performance reports to determine whether the bureau had established performance measures for its foreign assistance–related goals and used established performance measures to assess the bureau’s progress toward achieving its goals, as required by State policy. 
While we reviewed documentation on the CT Bureau’s performance measures, and discussed the CT Bureau’s performance reporting efforts with cognizant State officials, we did not fully assess the reliability of these measures because our goal was to determine whether the bureau had established performance measures rather than describe the bureau’s actual performance. We are publishing the performance results the CT Bureau reported to provide context and additional support for our finding that the bureau has assessed its performance. To examine the CT Bureau’s program evaluation efforts, we reviewed evaluation reports and compared the number of evaluations the bureau completed against the number of evaluations required by State’s February 2012 evaluation policy. We also compared the bureau’s efforts to track and address recommendations from evaluations against internal control standards for the federal government and State’s January 2015 evaluation guidance, which provides specific criteria and guidelines for evaluating State programs. In addition, we interviewed or obtained written responses from officials from State’s Office of U.S. Foreign Assistance Resources and Bureau of Budget and Planning to clarify State’s performance reporting and evaluation requirements for bureaus and whether the CT Bureau had met the requirements. We also interviewed CT Bureau officials responsible for strategic planning and program monitoring and evaluation to obtain additional or clarifying information related to past or currently planned bureau efforts on performance reporting and evaluations. To examine the extent to which the CT Bureau’s coordination within State and other U.S. government entities on selected programs is in line with key collaboration practices and collaboration features, we reviewed agency documents and interviewed officials from various State regional and functional bureaus and other U.S. government agencies. 
Specifically, we spoke with officials representing regional bureaus—African Affairs, East Asian and Pacific Affairs, European and Eurasian Affairs, Near Eastern Affairs, South and Central Asian Affairs, and Western Hemisphere Affairs—and officials representing functional bureaus or offices—Center for Strategic Communications; Conflict and Stabilization Operations; Democracy, Human Rights, and Labor; Economic and Business Affairs; Educational and Cultural Affairs; International Narcotics and Law Enforcement Affairs; and International Organization Affairs. We also spoke with officials from the Departments of Defense, Homeland Security, Justice, and the Treasury; United States Agency for International Development; the National Counterterrorism Center; and the United States Institute of Peace in Washington, D.C. We focused on coordination on the Countering Violent Extremism (CVE) and Counterterrorism Finance (CTF) programs because these programs involve coordination with large numbers of agencies and State entities and also represent strategic priorities for the CT Bureau. We used GAO’s leading practices for implementing interagency collaborative mechanisms to better understand the extent and nature of collaboration between the CT Bureau and other bureaus within State and other U.S. government agencies on CT Bureau programs and compared CT Bureau’s coordination efforts against these key collaboration practices. We devised a standard set of questions that incorporated questions provided in GAO’s collaboration practices to ask State regional and functional bureaus and U.S. government agencies. We focused on six of the key collaboration practices: outcomes and accountability, bridging organizational cultures, leadership, clarity of roles and responsibilities, resources, and written guidance and agreements. 
We analyzed the information provided by State and agency officials against these practices to determine whether the collaboration between these entities and the CT Bureau on CVE and CTF was generally consistent with these practices. We conducted this performance audit from July 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to its key programs and activities, the Bureau of Counterterrorism manages or is responsible for a number of other counterterrorism-related efforts, including the following:

- Issuing Country Reports on Terrorism, which are annual, mandated reports to Congress that provide, among other things, an assessment of each country in which acts of international terrorism of major significance occurred and an assessment of each country whose territory is being used as a sanctuary for terrorists or terrorist organizations.

- Co-chairing the Technical Support Working Group (TSWG), which enhances the counterterrorism technology and equipment capabilities of U.S. government agencies and elements involved in counterterrorism and antiterrorism activities. The TSWG implements five bilateral research and development agreements with international partners. The cooperative programs with Israel, Canada, the United Kingdom, Australia, and Singapore allow the United States to leverage foreign experience, expertise, resources, and infrastructure to address commonly held technical priorities for combating terrorism.

- Leading the Counterterrorism Preparedness Program, a series of international exercises designed to strengthen the U.S. and partner nations’ capacity to prevent, protect against, respond to, and recover from terrorist attacks, especially those involving weapons of mass destruction.

- Leading the Foreign Emergency Support Team, which is the U.S. government’s only interagency, on-call team poised to support embassies in responding to terrorist incidents worldwide.

- Preparing public designations of foreign terrorist organizations, designations that have legal consequences.

The Department of State’s (State) Bureau of Counterterrorism (CT Bureau) is led by a Coordinator for Counterterrorism and is currently organized with four directorates and numerous offices and units that cover counterterrorism policy, strategy, planning, programming, and operations. According to the CT Bureau, the bureau’s final structure is pending approval by State’s management and incorporation into the department’s Foreign Affairs Manual. Pending incorporation into the Foreign Affairs Manual, the CT Bureau provided the following description of its directorates and offices. The Principal Deputy Coordinator for Counterterrorism serves as the senior deputy and advisor to the Coordinator for Counterterrorism and has the authority to act on the Coordinator’s behalf in his or her absence. The Principal Deputy Coordinator is responsible for overall management of the bureau and, in consultation with the Coordinator, plans and supervises the substantive work of the bureau, including public affairs outreach strategies. The Principal Deputy Coordinator represents the bureau in department and interagency groups and supervises subordinate offices, as directed by the Coordinator. The directorate is led by a Deputy Coordinator and is made up of two offices with a regional focus and one focused on multilateral affairs, according to the CT Bureau.
The two regional offices—Office of Africa, Europe, and the Americas, and Office of South and Central Asia and the Near East—are engaged in day-to-day policy tasks and interactions that are managed by interagency working groups. This includes writing policy papers, providing policy guidance, and participating in interagency and within-State meetings to work on counterterrorism-related issues. The offices work very closely with State regional bureaus, focusing on terrorism-related issues, such as designation of terrorist groups or updates on terrorist threats or activities in a region or country. The Office of Multilateral Affairs handles multilateral engagements of the CT Bureau and works with multiple partners including the United Nations. The office tracks the work of the multilateral organizations, sets the agenda on U.S. counterterrorism issues at multilateral meetings, and develops multilateral counterterrorism programs to cover capacity-building goals as well as other counterterrorism strategic priorities of the CT Bureau and the rest of the U.S. government. One example of multilateral engagement is through the Global Counterterrorism Forum, which is an informal, multilateral counterterrorism platform that focuses on identifying critical civilian counterterrorism needs, mobilizing the necessary expertise and resources to address such needs, and generally enhancing global cooperation. The directorate is led by a Deputy Coordinator and covers three offices—Office of Homeland Security, Office of Terrorist Screening and Interdiction, and Office of Terrorist Designations and Sanctions, according to the CT Bureau. The Office of Homeland Security leads State’s efforts to deliver and implement core cross-cutting homeland security policies and programs that intersect with U.S. foreign policy development on counterterrorism issues and coordinates with other State bureaus and U.S.
government agencies on homeland security issues such as border security, transportation security, and critical infrastructure protection. The Office of Terrorist Screening and Interdiction leads State’s policy development, interagency coordination, international engagement, and negotiations for the exchange of biographic terrorism screening information. It also coordinates programs to constrain terrorist mobility globally by helping countries at risk of terrorist activity or transit to enhance their border security capabilities. The Office of Terrorist Designations and Sanctions identifies and designates targets for listing as Foreign Terrorist Organizations and leads the mandated review of State’s designations under Foreign Terrorist Organization authorities. It also leads State’s coordination of policy on countering terrorism finance worldwide. The directorate is led by a Deputy Coordinator and covers one office, the Office of Operations, and three units—the Technical Programs Unit, the Policy Unit, and the Coordination Unit, according to the CT Bureau. The directorate coordinates State’s interagency efforts to plan and conduct sensitive counterterrorism operations worldwide. It also coordinates interagency and military counterterrorism activities and leads the Foreign Emergency Support Team, which is the U.S. government’s interagency team poised to respond quickly to terrorist incidents worldwide. The Office of the Executive Director provides executive management and direction for the CT Bureau in areas related to budget and finance, human resources, information technology, and communications. The office keeps track of CT Bureau reporting requirements such as quarterly reports and congressional notifications in order to execute the foreign assistance budget. The office also liaises with other government agencies on counterterrorism programmatic and management issues such as budget and financial management, training, and continuity of operations. 
The Office of Strategy, Plans, and Initiatives identifies, sets, coordinates, monitors, and adjusts CT Bureau counterterrorism priorities at the strategic level. The priorities are based on the current threat environment, partnership priorities, and monitoring priorities. The office also manages the bureau’s congressional affairs portfolio and provides broad guidance on counterterrorism policy and strategy to program implementers in State and other government agencies. The Office of Programs covers counterterrorism programming and implementation from a regional perspective focused on Countering Violent Extremism, Counterterrorism Finance, Antiterrorism Assistance, and Regional Strategic Initiative–funded programs and other counterterrorism-related programs. The office monitors the day-to-day activities at the program management level and ensures that program implementers follow implementation agreements. The office also leads the monitoring and evaluation of programs to ensure that the programs follow their statements of work, including associated indicators. The Public Affairs Unit covers several functions that involve, among other things, writing press guidance, speeches, public remarks, and congressional testimony for senior CT Bureau personnel. The unit reviews and clears reports from other offices within the CT Bureau and also clears press releases and guidance related to counterterrorism issues. The unit also promotes the CT Bureau’s mission and events via social media. The unit is also involved in the tasking, coordinating, and drafting of State’s annual Country Reports on Terrorism. The Foreign Terrorist Fighters Unit, led by an ambassador-level Senior Advisor, leads State’s and interagency efforts in engaging with foreign partners to prevent and interdict foreign extremist travel to Syria and Iraq. The unit coordinates bureau and interagency strategy and initiatives on the foreign fighter issue to advise principals on the latest developments surrounding this problem set.
The unit screens cables, intelligence reports, and academic research, and briefs the Senior Advisor and other principals as necessary. The unit also meets with foreign partners to exchange information, coordinate actions, and assist in the development of press guidance on these issues. The Countering Violent Extremism (CVE) Unit was created to elevate and advance the CT Bureau’s policy work on countering and preventing violent extremism. To accomplish this goal, the CVE policy unit helps formulate and develop department, interagency, diplomatic, and multilateral efforts and initiatives to identify and address the drivers of violent extremism. The unit also engages with and supports other CVE-specific and relevant elements within State and with other agencies. Foreign assistance allocations to the Department of State’s Bureau of Counterterrorism (CT Bureau) fund counterterrorism-related programs that the bureau oversees. These allocations increased from fiscal years 2011 to 2012 as the CT Bureau transitioned from an office to a bureau. These allocations decreased thereafter from $154 million in fiscal year 2012 to $110 million in fiscal year 2014, as shown in figure 8. The majority of these allocations are from the Nonproliferation, Antiterrorism, Demining, and Related Programs account, which funds all six counterterrorism-related programs listed in the figure. These programs support a variety of activities including antiterrorism training and equipment, building foreign partner capacity to counter violent extremism, counterterrorism engagement with foreign partners, and anti-money laundering and counterterrorism finance training. Allocations from the Economic Support Fund support those Countering Violent Extremism and Counterterrorism Engagement program activities that do not involve law enforcement entities. For fiscal year 2015, the CT Bureau requested $104.4 million in allocations to fund the six programs.
The Bureau of Counterterrorism receives funds from two sources to support its core operations: the Diplomatic and Consular Programs and the Worldwide Security Programs accounts. The base operations of the bureau cover all travel, the majority of contracts, supplies, staffing costs, telephone, information technology, and printing and equipment. Figure 9 shows the bureau’s total obligations for its overall operations since fiscal year 2012. These obligations increased from fiscal year 2012 to 2013 as the bureau was being established. The base operations obligations then decreased from about $7.7 million in fiscal year 2013 to about $4.8 million in fiscal year 2014. The Bureau of Counterterrorism’s overall budget is made up of the following components:

- Base Operations covers all travel, the majority of contracts, supplies, staffing costs, telephone, information technology, and printing and equipment for the bureau.

- Regional Security Initiative covers the travel for the bureau’s six field coordinators based at U.S. embassies as well as two to four regional conferences per year that bring together personnel from the embassies from various regions to discuss counterterrorism policy and strategy for that region.

- Foreign Emergency Support Team supports the Operations Directorate’s activities, including its travel, equipment, and post support costs.

- Technical Support Working Group supports the Operations Directorate’s activities, including its travel and contribution to the Department of Defense.

- The Worldwide Security Program account covers the costs of the contracts related to staff and other direct costs supporting the Counterterrorism Preparedness Program.

According to the Department of State’s (State) Bureau of Counterterrorism (CT Bureau), it has used a number of ways to assess its resource and workforce planning needs.
CT Bureau officials reported that the bureau uses State’s Domestic Staffing Model to establish human resource demand for its workforce and to make staffing decisions across the bureau. According to the Office of Resource Management and Analysis in State’s Bureau of Human Resources, bureau managers can use the data in the model to make various decisions on current levels of work and the personnel resources performing the work. For example, the baseline could provide insight into the level of effort being expended on each function, and managers could assess whether their practices are consistent with bureau priorities. The Domestic Staffing Model contains grade and skill level information for personnel performing functions in each bureau, which allows the model to predict what type of personnel resources would be required to support the expected workload increases. Moreover, this in turn could provide valuable information for recruitment, succession planning, and training purposes. However, the most recent data collection for the Domestic Staffing Model was conducted in spring 2011, when the CT Bureau was the Office of the Coordinator for Counterterrorism; thus, the model does not reflect the current CT Bureau’s organizational structure. According to State officials, the information for the CT Bureau will be updated in the Domestic Staffing Model by the end of 2015. CT Bureau officials also reported that the bureau looks at its resource needs during the annual planning and budgeting process, which entails CT Bureau directors analyzing or assessing the workload of the staff in their respective areas and providing detailed justification for each full-time equivalent staff position requested.
All requests and justifications for additional full-time equivalent staff positions are vetted with the Principal Deputy Coordinator for Counterterrorism and, if approved, presented to the Coordinator for Counterterrorism for approval as part of the overall bureau budget request, according to the CT Bureau. Once updated, the Domestic Staffing Model, along with the annual planning and budgeting process, would be a reasonable approach to workforce planning and would be consistent with best practices. Best practices for effective strategic workforce planning should, among other things, address the following key principles: determine the critical skills and competencies that will be needed to achieve current and future programmatic results; develop strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies; and monitor and evaluate the progress toward its human capital goals. In addition to the contact named above, Jason Bair (Assistant Director), Andrea Riba Miller (Analyst-in-Charge), Esther Toledo, Ashley Alley, David Dayton, Martin de Alteriis, and Laurani Singh made key contributions to this report. Tina Cheng, Steven Lozano, and Sarah Veale provided technical assistance.

Terrorism and violent extremism continue to pose a global threat, and combating them remains a top priority for the U.S. government. State leads and coordinates U.S. efforts to counter terrorism abroad. State's Office of the Coordinator for Counterterrorism was elevated to bureau status in 2012 with the aim of enhancing State's ability to counter violent extremism, build partner counterterrorism capacity, and improve coordination. GAO was asked to review the effects of this change and the new bureau's efforts.
This report examines (1) how the bureau's staffing resources have changed since 2011, (2) the extent to which the bureau has assessed its performance since 2011, and (3) the extent to which the bureau's coordination with U.S. government entities on selected programs is in line with key collaboration practices. To address these objectives, GAO reviewed and analyzed State and other U.S. government agency information and interviewed U.S. government officials in Washington, D.C.

The Department of State's (State) Bureau of Counterterrorism has had an annual increase in authorized full-time equivalent (FTE) positions since fiscal year 2011 and has recently undertaken efforts to reduce a persistent staffing gap. The bureau's authorized FTEs increased from 66 in fiscal year 2011 to 96 in fiscal year 2015, and over the same period, FTE vacancies ranged from 17 to 23 percent. The vacancies included both staff and management positions. Bureau officials said they postponed filling some positions until the Coordinator for Counterterrorism had sufficient time to assess the bureau's needs and priorities. A senior Bureau of Counterterrorism official testified before Congress in June 2015 that the bureau was making progress and that it had 11 vacancies. However, GAO has not been able to verify that 4 of the reportedly filled positions have been filled because State did not provide sufficient documentation.

While the bureau has undertaken efforts to assess its progress, it has not yet evaluated its priority Countering Violent Extremism (CVE) program and has not established time frames for addressing recommendations from program evaluations. Specifically, the bureau established indicators and targets for its foreign assistance–related goals and reported results achieved toward each indicator. The bureau has also completed four evaluations covering three of its six programs that resulted in 60 recommendations.
The bureau reported having implemented about half of the recommendations (28 of 60) as of June 2015 but has not established time frames for addressing the remaining recommendations. Without specific time frames, it will be difficult for the bureau to ensure timely implementation of programmatic improvements. In addition, despite identifying its CVE program as a priority and acknowledging the benefit of evaluating it, the bureau has postponed evaluating it each fiscal year since 2012. The bureau's coordination on two programs GAO reviewed, CVE and Counterterrorism Finance, generally reflects key practices for effective collaboration. For example, GAO identified efforts to define outcomes and accountability, bridge organizational cultures, and establish written guidance and agreements—all key practices of effective collaboration.

GAO recommends that the Secretary of State take steps to (1) ensure that CVE program efforts abroad are evaluated and (2) establish time frames for addressing recommendations from program evaluations. State concurred with both of GAO's recommendations. State indicated that it was currently assessing which programs would benefit from a third-party evaluation and that it would commit to setting a timetable for reviewing each recommendation by a third-party evaluator.
As computer technology has advanced, both government and private entities have become increasingly dependent on computerized information systems to carry out operations and to process, maintain, and report essential information. Public and private organizations rely on computer systems to transmit sensitive and proprietary information, develop and maintain intellectual capital, conduct operations, process business transactions, transfer funds, and deliver services. In addition, the Internet has grown increasingly important to American business and consumers, serving as a medium for hundreds of billions of dollars of commerce each year. Consequently, ineffective information security controls can result in significant risks, including loss or theft of resources, such as money and intellectual property; inappropriate access to and disclosure, modification, or destruction of sensitive information; use of computer resources for unauthorized purposes or to launch attacks on other computer systems; damage to networks and equipment; loss of business due to lack of customer confidence; and increased costs from remediation.

Cyber-based threats are evolving and growing and arise from a wide array of sources. These sources include business competitors, corrupt employees, criminal groups, hackers, and foreign nations engaged in espionage and information warfare. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary gain or political advantage, among others. Table 1 shows common sources of cyber threats. These sources of cyber threats make use of various techniques, or exploits, to adversely affect an organization’s computers, software, or networks, or to intercept or steal valuable or sensitive information. Table 2 provides descriptions of common types of cyber exploits.
Cyberspace—where much business activity and the development of new ideas often take place—amplifies these threats by making it possible for malicious actors to quickly steal and transfer massive quantities of data while remaining anonymous and difficult to detect. For example, cyber attackers do not need to be physically close to their victims, technology allows attacks to easily cross state and national borders, attacks can be carried out at high speed and directed at a number of victims simultaneously, and cyber attackers can more easily remain anonymous. Moreover, the use of these and other techniques is becoming more sophisticated, with attackers using multiple or “blended” approaches that combine two or more techniques. Using such techniques, threat actors may target individuals, resulting in loss of privacy or identity theft; businesses, resulting in the compromise of proprietary information or intellectual property; critical infrastructures, resulting in their disruption or destruction; or government agencies, resulting in the loss of sensitive information and damage to economic and national security.

Reports of cyber incidents affecting both public and private institutions are widespread. The U.S. Computer Emergency Readiness Team (US-CERT) receives computer security incident reports from federal agencies, state and local governments, commercial enterprises, U.S. citizens, and international computer security incident response teams. In its fiscal year 2011 report to Congress on implementation of the Federal Information Security Management Act of 2002, the Office of Management and Budget reported that US-CERT received over 100,000 total incident reports in fiscal year 2011. Over half of these (about 55,000) were phishing exploits; other categories of incidents included virus/Trojan horse/worm/logic bombs; malicious websites; policy violations; equipment theft or loss; suspicious network activity; attempted access; and social engineering.
Private sector organizations have experienced a wide range of incidents involving data loss or theft, economic loss, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following examples from news media and other public sources illustrate that a broad array of information and assets remain at risk. In March 2012, it was reported that a security breach at Global Payments, a firm that processed payments for Visa and Mastercard, could compromise the credit- and debit-card information of millions of Americans. Subsequent to the reported breach, the company’s stock fell more than 9 percent before trading in its stock was halted. Visa also removed the company from its list of approved processors. In March 2012, it was reported that Blue Cross Blue Shield of Tennessee paid out a settlement of $1.5 million to the U.S. Department of Health and Human Services arising from potential violations stemming from the theft of 57 unencrypted computer hard drives that contained protected health information of over 1 million individuals. In April 2011, Sony disclosed that it suffered a massive breach in its video game online network that led to the theft of personal information, including the names, addresses, and possibly credit card data belonging to 77 million user accounts. In February 2011, media reports stated that computer hackers had broken into and stolen proprietary information worth millions of dollars from the networks of six U.S. and European energy companies. A retailer reported in May 2011 that it had suffered a breach of its customers’ card data. The company discovered tampering with the personal identification number (PIN) pads at its checkout lanes in stores across 20 states. 
In mid-2009, a research chemist with DuPont Corporation reportedly downloaded proprietary information to a personal e-mail account and thumb drive with the intention of transferring this information to Peking University in China and also sought Chinese government funding to commercialize research related to the information he had stolen. Between 2008 and 2009, a chemist with Valspar Corporation reportedly used access to an internal computer network to download secret formulas for paints and coatings, reportedly intending to take this proprietary information to a new job with a paint company in Shanghai, China. In December 2006, a product engineer with Ford Motor Company reportedly copied approximately 4,000 Ford documents onto an external hard drive in order to acquire a job with a Chinese automotive company.

These incidents illustrate the serious impact that cyber threats can have on, among other things, the security of sensitive personal and financial information and proprietary information and intellectual property. While these effects can be difficult to quantify monetarily, they can include any of the following:

For consumers or private citizens: identity theft or compromise of personal and economic information and costs associated with lower-quality counterfeit or pirated goods.

For business: lost sales, lost brand value or damage to public image, cost of intellectual property protection, and decreased incentive to invest in research and development.

For the economy as a whole: lower economic growth due to reduced incentives to innovate and lost revenue from declining U.S. trade with countries that have weak IP rights regimes.

The prevalence of cyber threats and the risks they pose illustrate the need for security controls and other actions that can reduce organizations’ vulnerability to such attacks.
As we have reported, there are a number of cybersecurity technologies that can be used to better protect systems from cyber attacks, including access control technologies, system integrity technologies, cryptography, audit and monitoring tools, and configuration management and assurance technologies. In prior reports, we have made hundreds of recommendations to federal agencies to better protect their systems and cyber-reliant critical infrastructures. Table 3 summarizes some of the common cybersecurity technologies, categorized by the type of security control they help to implement. In addition, the use of an overall cybersecurity framework can assist in the selection of technologies to protect an organization against cyber attacks. Such a framework includes determining the business requirements for security; performing risk assessments; establishing a security policy; implementing a cybersecurity solution that includes people, process, and technology to mitigate identified security risks; and continuously monitoring and managing security. Risk assessments, which are central to this framework, help organizations determine which assets are most at risk and to identify countermeasures to mitigate those risks. Risk assessment is based on a consideration of threats and vulnerabilities that could be exploited to inflict damage. Even with such a framework, there often are competing demands for cybersecurity investments. For example, for some companies, mitigating physical risks may be more important than mitigating cyber risks. Further, investing in cybersecurity technologies needs to make business sense. It is also important to bear in mind the limitations of some cybersecurity technologies and to be aware that their capabilities should not be overstated. Technologies do not work in isolation. Cybersecurity solutions make use of people, process, and technology. Cybersecurity technology must work within an overall security process and be used by trained personnel. 
We have also emphasized the importance of public-private partnerships for sharing information and implementing effective cybercrime prevention strategies. Similarly, the Office of the National Counterintelligence Executive has identified a series of “best practices in data protection strategies and due diligence for corporations.” These include a data protection strategy; insider threat programs and awareness; effective data management; network security, auditing, and monitoring; and contingency planning. (See Office of the National Counterintelligence Executive, Foreign Spies Stealing U.S. Economic Secrets in Cyberspace.)

Multiple federal agencies undertake a wide range of activities in support of IP rights. Some of these agencies are the Departments of Commerce (including the U.S. Patent and Trademark Office), State, Justice (including the FBI), Health and Human Services, and Homeland Security; the U.S. Trade Representative; the U.S. Copyright Office; and the U.S. International Trade Commission. In many cases, IP-related efforts represent a small part of the agencies’ much broader missions.

The Department of Justice’s (DOJ) U.S. attorneys’ offices, Criminal Division, and the FBI investigate and prosecute federal IP crimes. DOJ established the Computer Hacking and Intellectual Property program, which consists of specially trained assistant U.S. attorneys to pursue IP cases. Each of the 93 U.S. attorneys’ offices throughout the country has assistant U.S. attorneys designated as Computer Hacking and Intellectual Property coordinators, who are available to work on IP cases. In addition, DOJ has created Computer Hacking and Intellectual Property units in 25 U.S. attorneys’ offices with histories of large IP caseloads. DOJ’s Computer Crime and Intellectual Property Section—based in Washington, D.C.—consists of prosecutors devoted to enforcing computer crime and IP laws.
Computer Crime and Intellectual Property Section attorneys prosecute cases, assist prosecutors and other investigative agents in the field, and help develop and implement an overall criminal enforcement strategy. The FBI’s Cyber Division oversees the bureau’s IP enforcement efforts, though not all of its IP investigations are cyber-related.

Over the years, Congress and the administration have created interagency mechanisms to coordinate federal IP law enforcement efforts. These include the National Intellectual Property Law Enforcement Coordination Council (NIPLECC), created in 1999 to coordinate U.S. law enforcement efforts to protect and enforce IP rights in the United States and abroad, and the Strategy for Targeting Organized Piracy initiative, created by the President in 2004 to target cross-border trade in tangible goods and strengthen U.S. government and industry IP enforcement action. In December 2004, Congress passed legislation to enhance NIPLECC’s mandate and created the position of the Coordinator for International Intellectual Property Enforcement, located within the Department of Commerce, to lead NIPLECC. In November 2006, we reported that NIPLECC continued to face persistent difficulties, creating doubts about its ability to carry out its mandate. We also noted that while the Strategy for Targeting Organized Piracy had brought attention and energy to IP efforts within the U.S. government, it had limited usefulness as a tool to prioritize, guide, implement, and monitor the combined efforts of multiple agencies. In 2008, Congress passed the Prioritizing Resources and Organization for Intellectual Property Act (PRO-IP Act), which, among other things, created the position of the Intellectual Property Enforcement Coordinator (IPEC) to serve within the Executive Office of the President.
The duties of the coordinator outlined in the act include specific efforts to enhance interagency coordination, such as the development of a comprehensive joint strategic plan. The act also required the Attorney General to devote additional resources to IP enforcement and undertake other IP-enforcement-related efforts. In October 2010, we noted that DOJ and FBI officials and Office of the IPEC staff reported taking many actions to implement the requirements of the PRO-IP Act. Moreover, the IPEC coordinated with other federal entities to deliver the 2010 Joint Strategic Plan on Intellectual Property Enforcement to Congress and the public. We reported that the plan addressed the content requirements of the act, but that enhancements were needed, such as identifying responsible departments and entities for all action items and estimates of resources needed to carry out the plan’s priorities. Accordingly, we recommended that the IPEC take steps to ensure that future strategic plans address these elements. IPEC staff generally concurred with our findings and recommendations.

In summary, the ongoing efforts to steal U.S. companies’ intellectual property and other sensitive information are exacerbated by the ever-increasing prevalence and sophistication of cyber-threats facing the nation. Recently reported incidents show that such actions can have serious impact not only on individual businesses, but on private citizens and the economy as a whole. While techniques exist to reduce vulnerabilities to cyber-based threats, these require strategic planning by affected entities. Moreover, effective coordination among federal agencies responsible for protecting IP and defending against cyber-threats, as well as effective public-private partnerships, are essential elements of any nationwide effort to protect America’s businesses and economic security.

Chairman Meehan, Ranking Member Higgins, and Members of the Subcommittee, this concludes my statement.
I would be happy to answer any questions you have at this time. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Other key contributors to this statement include Michael Gilmore and Anjalique Lawrence (Assistant Directors), Bradley Becker, Kush Malhotra, and Lee A. McCracken.

Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012.

IT Supply Chain: National Security-Related Agencies Need to Better Address Risks. GAO-12-361. Washington, D.C.: March 23, 2012.

Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011.

Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.

Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011.

Intellectual Property: Agencies Progress in Implementing Recent Legislation, but Enhancements Could Improve Future Plans. GAO-11-39. Washington, D.C.: October 13, 2010.

Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010.

Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010.

Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010.

Intellectual Property: Observations on Efforts to Quantify the Economic Effects of Counterfeit and Pirated Goods. GAO-10-423. Washington, D.C.: April 12, 2010.
Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010.

Intellectual Property: Enhancements to Coordinating U.S. Enforcement Efforts. GAO-10-219T. Washington, D.C.: December 9, 2009.

National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009.

Intellectual Property: Federal Enforcement Has Generally Increased, but Assessing Performance Could Strengthen Law Enforcement Efforts. GAO-08-157. Washington, D.C.: March 11, 2008.

Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007.

Intellectual Property: Strategy for Targeting Organized Piracy (STOP) Requires Changes for Long-term Success. GAO-07-74. Washington, D.C.: November 8, 2006.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The threat of economic espionage, the theft of U.S. proprietary information, intellectual property (IP), or technology by foreign companies, governments, or other actors, has grown. Moreover, dependence on networked information technology (IT) systems has increased the reach and potential impact of this threat by making it possible for hostile actors to quickly steal massive amounts of information while remaining anonymous and difficult to detect. To address this threat, federal agencies have a key role to play in law enforcement, deterrence, and information sharing.
Consistent with this threat, GAO has designated federal information security as a governmentwide high-risk area since 1997 and in 2003 expanded it to include protecting systems and assets vital to the nation (referred to as critical infrastructures). GAO was asked to testify on the cyber aspects of economic espionage. Accordingly, this statement discusses (1) cyber threats facing the nation’s systems, (2) reported cyber incidents and their impacts, (3) security controls and other techniques available for reducing risk, and (4) the responsibilities of key federal entities in support of protecting IP. To do this, GAO relied on previously published work in this area, as well as reviews of reports from other federal agencies, media reports, and other publicly available sources.

The nation faces an evolving array of cyber-based threats arising from a variety of sources. These sources include criminal groups, hackers, terrorists, organization insiders, and foreign nations engaged in crime, political activism, or espionage and information warfare. These threat sources vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include monetary gain or political advantage, among others. Moreover, potential threat actors have a variety of attack techniques at their disposal, which can adversely affect an organization’s computers or networks and be used to intercept or steal valuable information. The magnitude of the threat is compounded by the ever-increasing sophistication of cyber attack techniques, such as attacks that may combine multiple techniques. Using these techniques, threat actors may target individuals and businesses, resulting in, among other things, loss of sensitive personal or proprietary information. These concerns are highlighted by reports of cyber incidents that have had serious effects on consumers and businesses.
These include the compromise of individuals’ sensitive personal data such as credit- and debit-card information and the theft of businesses’ IP and other proprietary information. While difficult to quantify monetarily, the loss of such information can result in identity theft; lower-quality counterfeit goods; lost sales or brand value to businesses; and lower overall economic growth and declining international trade.

To protect against these threats, a variety of security controls and other techniques are available. These include technical controls such as those that manage access to systems, ensure system integrity, and encrypt sensitive data. But they also include risk management and strategic planning that organizations undertake to improve their overall security posture and reduce their exposure to risk. Further, effective public-private partnerships are a key element for, among other things, sharing information about threats.

Multiple federal agencies undertake a wide range of activities in support of IP rights. Some of these agencies include the Departments of Commerce, Justice, and Homeland Security, among others. For example, components within the Justice Department and the Federal Bureau of Investigation are dedicated to fighting computer-based threats to IP. In addition, both Congress and the Administration have established interagency mechanisms for better coordinating the protection of IP. Ensuring effective coordination will be critical for better protecting the economic security of America’s businesses. In prior reports, GAO has made hundreds of recommendations to better protect federal systems, critical infrastructures, and intellectual property.
Nurse recruitment and retention is essential for VHA to carry out its mission to provide quality care that improves the health and well-being of veterans. In its 2014 Interim Workforce and Succession Strategic Plan, VHA identified nurses as the second most mission-critical occupation for recruitment and retention; only physicians ranked higher. As the demand for health care services increases, effective nurse recruitment and retention is increasingly important for VHA to ensure an adequate and qualified workforce. In the last 5 years, the number of nurses providing care to veterans has increased, and VHA expects it will continue to increase because of the expected increased demand for services. In FY 2014, VHA employed more than 85,000 nurses who provided both direct and indirect care to patients through its health care system. The number of nurses providing direct patient care increased from about 72,000 to about 82,000—approximately a 14 percent increase—from FY 2010 through FY 2014, while the number of unique patients served increased from about 6.0 million to about 6.6 million—approximately a 10 percent increase—during this same time period. VHA projects that approximately 40,000 new RNs will be needed through FY 2018 to maintain adequate staffing levels, including replacing retired nurses, to meet veterans’ needs. (See app. I for the number of nurses providing direct and indirect care at VA medical centers from FY 2010 through FY 2014.)

In addition to the need for more nurses due to an increasing number of veterans, VHA anticipates that changes in veteran demographics, including an aging population, will increase the need for nurses to provide more complex types of services to care for veterans. In its 2014 Interim Workforce and Succession Strategic Plan, VHA reported that after 2015, the largest segment of the veteran population will be between 65 and 84 years of age.
Also, the number of women veterans receiving care through VHA has nearly doubled since 2004, requiring changes to the type of care provided and corresponding skills needed. VHA estimates that veteran usage of primary care, surgical specialty care, and mental health care will each increase by more than 20 percent over the next 10 years. The nurse skill mix—the proportion of each type of nurse (NPs, RNs, LPNs, and NAs) of the total nursing staff in a particular unit or medical center—is an important component of VHA nurse staffing, as the level of education and training for each nurse position determines the types of services that can be provided. (See table 1 for VHA nurse positions, responsibilities, and educational requirements.) For example, intensive care units require higher intensity nursing, and may have a skill mix that is primarily composed of RNs compared to other types of units that may provide less complex care, such as outpatient clinics. In the last 5 fiscal years, RNs comprised the largest percentage of nurses within VHA, and were approximately 64 percent of the nurse workforce in FY 2014. NPs comprised the smallest percentage over the same period. (See fig. 1.) For the first time, in FY 2015, VA began collecting data on the number of nurse hires and vacancies at each of its medical centers. For FY 2015, as of June, VA medical centers hired approximately 8,600 nurses; approximately 5,100 (59 percent) were RNs, and approximately 430 were NPs (5 percent), reflecting VHA’s need for nurses with advanced skills and education. Despite these new hires, VHA estimated that there were about 17,000 vacancies across VA medical centers as of June 2015, with about 12,100 (71 percent) for RN positions. (See app. I for the number of nurse hires and losses at VA medical centers for FY 2015, as of June.) The average national nurse turnover rate for VHA from FY 2010 through FY 2014 was 7.6 percent. 
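The rounded percentages cited in the two preceding paragraphs are simple ratios of the report's approximate counts. The short sketch below recomputes them; the helper names and whole-percent rounding are illustrative choices, not the report's methodology, and the inputs are the report's rounded figures, so the results are likewise approximate.

```python
def pct_increase(old: float, new: float) -> int:
    """Percentage increase from old to new, to the nearest whole percent."""
    return round((new - old) / old * 100)

def pct_share(part: float, whole: float) -> int:
    """part as a percentage of whole, to the nearest whole percent."""
    return round(part / whole * 100)

# Direct-care nurses grew from ~72,000 to ~82,000 (FY 2010 through FY 2014),
# while unique patients served grew from ~6.0 million to ~6.6 million.
print(pct_increase(72_000, 82_000))   # 14 percent
print(pct_increase(6.0e6, 6.6e6))     # 10 percent

# Of ~8,600 FY 2015 hires, ~5,100 were RNs and ~430 were NPs;
# ~12,100 of the ~17,000 estimated vacancies were RN positions.
print(pct_share(5_100, 8_600))        # 59 percent
print(pct_share(430, 8_600))          # 5 percent
print(pct_share(12_100, 17_000))      # 71 percent
```

Each result matches the corresponding percentage in the text, confirming the figures are internally consistent.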
The turnover rates for NPs and RNs increased over this same period and, in FY 2014, were 9.1 percent and 7.8 percent, respectively. VHA reported high projected losses for nurses, such as from retirement, in the next few fiscal years. In 2014, for example, VHA reported that by FY 2019, approximately 20 percent of its nurses will be eligible for retirement. Retirement and career advancement through opportunities elsewhere were the top two reasons why nurses reportedly left VHA. In addition, according to findings from VHA’s 2015 Workforce Planning Report, approximately 12 percent of all nurses that left VHA in FY 2012 did so in their first year of employment. (See app. I for annual nurse turnover rates by position type for FY 2010 through FY 2014.)

VA medical centers are responsible for recruiting and retaining nurses in their respective facilities, with VHA providing support to assist them. Specifically, VHA has developed initiatives that medical centers may offer to help with the recruitment and retention of nurses. VHA also provides guidance and policies to its medical centers on the process of recruiting and hiring nurses and on the initiatives medical centers may use to help with recruitment and retention. Additionally, VHA provides marketing services and tools to medical centers, including marketing campaigns that advertise the benefits of working at VHA and recruitment brochures that medical centers can use at local career fairs. Nurse recruitment begins with advertising and publicizing available positions to encourage potential candidates to apply through various channels, including local publications, job fairs, and USAjobs.gov. Once medical centers recruit, interview, and select a nursing candidate, that nursing candidate goes through a process known as onboarding and credentialing.
Most medical centers employ nurse recruiters, who are responsible for managing the administrative components of the hiring process, as well as various aspects of nurse recruitment and retention. The nurse recruiter position varies among medical centers. Some medical centers assign the nurse recruiter to the medical center’s clinical nursing services office, and these nurse recruiters are typically RNs. Other medical centers assign nurse recruiters to the medical center’s human resources office, and these nurse recruiters may not have clinical backgrounds. VHA has multiple system-wide initiatives to recruit and retain its nurse workforce, but some VA medical centers face challenges in offering them to nurses and with recruitment and retention more broadly. We found that VHA has eight key initiatives that medical centers may offer to help them recruit and retain nurses. (See table 2.) VHA’s initiatives focus primarily on providing (1) education and training, and (2) financial benefits and incentives. (See app. II for VHA expenditures for and nurse participation in key recruitment and retention initiatives from FY 2010 through FY 2014.) With the exception of the mandatory RN Transition to Practice initiative, VA medical centers generally have discretion to offer any of VHA’s initiatives to nurses, including the discretion to submit requests for proposals for any of the initiatives that require them. The four VA medical centers in our review varied in the number of initiatives they offered from FY 2010 through FY 2014. (See table 3.) For example, one of the medical centers in our review offered three of the four education and training initiatives—RN Transition to Practice, VA Nursing Academic Partnerships, and VA Learning Opportunities Residency. This medical center also offered the Post-Baccalaureate Nurse Residency—which, beginning in FY 2015, is part of the VA Nursing Academic Partnerships—and developed curricula to move participants through the initiatives. 
This medical center also offered all four of the financial initiatives—recruitment, retention, and relocation incentives; the Education Debt Reduction Program; the Employee Incentive Scholarship Program; and flexible work schedules. The medical center ceased offering recruitment, retention, and relocation incentives in 2013; according to medical center officials, VHA introduced new employee performance criteria that officials felt were too difficult for employees to achieve and that made it too difficult for medical centers to justify retention incentives. Officials from all four medical centers reported offering flexible work schedules to give nurses options for maintaining work-life balance, such as compressed schedules (e.g., 10-hour shifts, 4 days a week). While VA medical centers generally have discretion to offer any of VHA’s initiatives, all medical centers that employed RNs with less than 1 year of nursing experience were required to offer the RN Transition to Practice initiative. However, officials from two medical centers in our review reported not offering the initiative at all or not offering it across all 5 fiscal years. One medical center offered the RN Transition to Practice initiative for 1 year, beginning in 2012, but officials subsequently decided not to hire newly graduated nurses because of the extensive orientation and training they required. According to officials, after one of its current LPNs returned to school to become an RN, this medical center coordinated with another VA medical center in the region for this new RN to participate in that medical center’s RN Transition to Practice curriculum. Officials from the second medical center told us that it offered a 16-week program designed to help new nurses acclimate to VA but did not offer VHA’s 12-month RN Transition to Practice initiative because they did not believe it was required. 
In addition to offering VHA’s initiatives, three of the four medical centers in our review developed local recruitment and retention initiatives. Two medical centers developed initiatives to employ and train student nurses; the medical centers’ initiatives were similar to the VA Learning Opportunities Residency. Officials from one of these medical centers told us that the medical center developed a local initiative because the nursing schools in the region offered associate degrees only, whereas VHA’s initiative requires medical centers to partner with schools of nursing with baccalaureate degree programs. The other medical center offered the VA Learning Opportunities Residency, as well as its own student nurse employment and training initiative. Officials from a third medical center in our review told us that the medical center offered a 16-week RN Transition to Practice initiative to train new RN graduates; these RNs are hired on a temporary basis and converted to full-time employees when RN vacancies open. Officials from three of the four medical centers in our review reported that VHA’s initiatives helped improve their ability to recruit and retain nurses, as shown in the following examples: Officials from one medical center reported that they hired 9 of the 10 nurses who participated in the VHA Post-Baccalaureate Nurse Residency as full-time nurses in academic year 2012-2013, the first year the medical center offered the initiative. The medical center retained 7 of these 9 nurses as of the end of the following academic year, 2013-2014. Officials from another medical center that offered the Education Debt Reduction Program reported that, of the six nurses who began the program since 2010, five completed the 5-year service agreement and, as of April 2015, remained employees of the medical center. 
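The cohort figures in these examples can be restated as simple retention percentages. The helper below is only an illustration of that arithmetic using the counts reported above; it is not a VHA or GAO tool.

```python
# Restates the reported cohort counts as retention percentages.
# The helper function is illustrative only, not a VHA or GAO metric.

def retention_pct(retained, cohort):
    """Share of a cohort retained, as a percentage rounded to one decimal."""
    return round(100 * retained / cohort, 1)

# Post-Baccalaureate Nurse Residency, academic year 2012-2013:
# 9 of 10 participants hired; 7 of those 9 retained a year later.
print(retention_pct(9, 10))  # 90.0
print(retention_pct(7, 9))   # 77.8

# Education Debt Reduction Program: 5 of 6 nurses completed the
# 5-year service agreement and remained employed as of April 2015.
print(retention_pct(5, 6))   # 83.3
```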
Officials from one medical center that offered the Employee Incentive Scholarship Program reported that 23 nurses completed the program over the past 10 years and that, as of February 2015, 21 of those nurses had remained employees of the medical center. Despite these successes, officials from three of the four medical centers in our review reported challenges with offering VHA’s initiatives specifically, and with recruiting and retaining nurses more broadly, both of which limited the initiatives’ usefulness. These challenges—lack of sufficient administrative support, competition with private sector medical facilities for qualified and skilled nurses, the rural location of the medical center, and employee dissatisfaction—may affect medical centers’ ability to effectively and efficiently recruit and retain nurses. Lack of sufficient administrative support. Officials from one medical center reported challenges in efficiently offering some of the initiatives due to the lack of sufficient administrative support. Specifically, medical center officials reported not having sufficient human resources and clerical staff to process in a timely manner the paperwork associated with specific VHA recruitment and retention initiatives, such as the Employee Incentive Scholarship Program. Competition with the private sector. Officials from two medical centers reported challenges in recruiting and retaining nurses because of competition with private hospitals in the area. Officials from one medical center told us that they face significant competition from local hospitals, as there are multiple private boutique and specialty hospitals in their area. Officials stated that competing with these hospitals, especially for entry-level nurses, is difficult because the hospitals offer generous signing bonuses. 
Officials from another medical center told us that the high cost of living and lower nursing salaries compared to the salaries offered by competing medical facilities in the area negatively affects the medical center’s ability to successfully recruit and retain nurses, specifically RNs and NPs. Officials from this medical center told us that they do not have sufficient funds, such as funds from VHA’s Education Debt Reduction Program, to offer nurses financial incentives to make up for the large difference in salaries. In addition, while the Choice Act increased the maximum repayment amount for each recipient of the Education Debt Reduction Program from $60,000 to $120,000, VHA officials told us that VHA did not increase the medical center’s annual funding allocation for the program to account for that increase. In FY 2014, this medical center had turnover rates of 10 percent or higher for NPs, RNs, and LPNs, above the national average of 7.9 percent for all nurses. Rural location. Officials from one medical center that has community outpatient clinics located in rural areas reported challenges recruiting qualified nurses with the requisite experience to work in critical care or other specialized units such as mental health. Officials from another medical center located in a rural area reported that, while the medical center receives high interest in nurse employment generally from the community and has a ready applicant pool for some nurses, it also faces challenges in recruiting nurses with advanced degrees or advanced training and expertise to work in the emergency department or intensive care unit because of its rural location. Employee dissatisfaction. Officials from one medical center and its union reported high levels of nurse dissatisfaction with medical center leadership as a result of recent investigations, including by VA’s OIG, examining access to care issues in the facility. 
This dissatisfaction has negatively affected the medical center’s ability to retain nurses, according to officials from this medical center. In FY 2014, for example, this medical center had a 12 percent turnover rate for NPs and close to a 30 percent turnover rate for NAs. With some nurses on administrative leave and high nurse turnover, officials stated that nurses are stepping into positions temporarily and are being asked to work additional or longer shifts. Officials stated that the medical center’s units are inadequately staffed to care for the medical center’s current patient load, which they believe is affecting access and the quality of care provided to veterans. In addition to challenges identified by the medical centers in our review, VHA also identified a challenge specific to the RN Transition to Practice initiative. Officials from the Office of Nursing Services told us that, when VHA began to require medical centers to offer the RN Transition to Practice initiative in November 2011, VHA did not provide specific funding to medical centers to do so and relied on medical centers to determine how to fund the initiative, which is financially and staff-resource intensive. According to VHA officials, there have been two unintended consequences of requiring medical centers to offer this initiative without VHA funding. First, some medical centers are deciding to hire experienced RNs only, who would not be eligible for the initiative, rather than hiring new RNs because of the financial burden associated with the initiative. Second, some medical centers in rural locations have found it difficult to offer the initiative because of a lack of available instructors qualified to provide the required training. VHA conducts limited monitoring of VA medical centers to ensure they are in compliance with its key nurse recruitment and retention initiatives. 
Consistent with federal internal control standards, monitoring should be ongoing in the course of normal program operations and provide reasonable assurance of compliance with applicable laws and regulations. VHA’s Office of Academic Affiliations has a system in place for conducting site visits to the medical centers that offer the VA Nursing Academic Partnerships initiative. Office of Academic Affiliations officials reported that the site visits occur at least once per year to gauge a medical center’s adherence to the residency’s policies and contractual requirements. In addition to providing consulting services during these site visits to all medical centers that offer this initiative, these officials also told us that site visit reports are specifically generated for medical centers that are offering the initiative for the first time, and these reports are provided to the nursing school and medical center leadership. Officials told us that they stopped three medical centers from offering the VA Nursing Academic Partnerships initiative when it was in the pilot phase due to non-compliance with program policies. VHA Healthcare Talent Management officials told us that although they conducted site visits in the past to medical centers that offered the Education Debt Reduction Program, they are currently not conducting site visits. Officials reported that these site visits were in response to a medical center reporting difficulty implementing the initiatives the office manages, and were a method of comprehensively assessing individual medical centers’ compliance with policies or guidance, as well as being consultative in nature. A Healthcare Talent Management official reported that the office lacked sufficient staff to conduct any site visits in FY 2015 and that additional staff have been hired, which will enable the office to resume site visits in FY 2016. 
In addition, although VHA required VA medical centers, as of November 2011, to offer VHA’s RN Transition to Practice initiative to RNs with 1 year or less of experience, the Office of Nursing Services does not have a process in place to determine if all medical centers are in compliance. We found, for example, that one medical center in our review that employed RNs with less than 1 year of experience had not offered the RN Transition to Practice initiative; officials from this medical center stated that they thought the initiative was recommended and not required. Officials from the Office of Nursing Services told us that, when the RN Transition to Practice initiative became a requirement in November 2011, there was no specific funding provided to medical centers to offer it. Because of this lack of funding, officials said that it has been difficult to provide oversight of this initiative. With limited monitoring taking place as part of its oversight, VHA lacks assurance that its medical centers are complying with the recruitment and retention initiatives’ policies and requirements, and that any problems can be identified and resolved in a timely and appropriate manner. Although three VA medical centers in our review reported that VHA’s key recruitment and retention initiatives for nurses have been helpful, VHA has conducted limited evaluations to determine any needed training resources or to determine the initiatives’ effectiveness system-wide and whether any changes are needed. This lack of evaluation may affect VHA’s ability to improve the initiatives and ultimately medical centers’ ability to recruit and retain nurses. Consistent with federal internal control standards, measuring performance allows organizations to track the progress they are making towards program goals and objectives, and provides managers important information on which to make management decisions and resolve any problems or program weaknesses. 
According to VHA officials, there are processes in place to determine if problems exist with several of its recruitment and retention initiatives. First, in FY 2015, VHA’s Healthcare Talent Management conducted, for the first time, a survey of medical centers as part of the data collection process for VHA’s Interim Workforce and Succession Strategic Plan. The purpose of the survey was to collect information on workforce priorities in the field and to gauge the barriers medical centers face in offering the three recruitment and retention initiatives managed by Healthcare Talent Management. The survey responses provided feedback on some of the barriers that medical centers faced with offering the initiatives, such as an application process for the Education Debt Reduction Program that was not user friendly. Healthcare Talent Management officials said they plan to use these survey results to make changes to the initiatives the office manages, and the office plans to continue including questions regarding workforce planning priorities in future surveys. Second, VHA’s Office of Nursing Services is currently conducting a formal evaluation of the RN Transition to Practice initiative. According to an official, the purpose of the evaluation is to gather information on any successes that medical centers have experienced with offering the initiative. As part of the data collection process, the evaluation team has started interviewing program coordinators at selected medical centers and will analyze available participant survey data. In addition, the evaluation team plans to survey all medical centers to gauge their compliance with the requirement that all medical centers with RNs with 1 year or less of experience offer the initiative. According to officials, the initiative is set to expire in 2016, and VHA will use the information from the evaluation to make decisions and set goals regarding the program moving forward. 
Lastly, the Office of Academic Affiliations uses various tools to assess nurse residents’ skill competency and satisfaction with the initiatives it manages. For example, it uses an assessment tool to measure nurses’ progress toward the development of core clinical competencies at set intervals throughout their participation in the VA Nursing Academic Partnerships, specifically the Post-Baccalaureate Nurse Residency. The Office of Academic Affiliations also uses a survey to gauge participating students’ satisfaction with its training programs and residencies, including the VA Nursing Academic Partnerships - Graduate Education initiative, on topics such as the learning and working environments, as well as clinical faculty skills. However, VHA has not conducted any assessments of the adequacy of training resources for nurse recruiters. In particular, there are substantial differences in the availability of training resources for nurse recruiters, who can play a key role in medical centers offering VHA’s nurse recruitment and retention initiatives to nurses, according to officials from VHA and representatives of a national nursing organization. According to a VHA official, there is currently no face-to-face training provided by VHA specifically for nurse recruiters, but there is regular training available to those assigned to a human resources office as part of training available to all human resources staff. Representatives of a national nursing organization reported that the clinical nurse recruiters at VA medical centers often feel overwhelmed and unprepared in the position because of a lack of training and human resources-related information, which may have resulted in turnover in that position. VHA officials told us that these differences in training for different types of nurse recruiters have existed for years, but no review of the training provided to nurse recruiters has been conducted. 
Further, VHA officials told us there are no current plans to assess the differences in the training and the effect that they have on the effectiveness of nurse recruiters. VHA officials reported that the barrier to conducting this type of assessment was resources: both a lack of funding and a lack of staff to conduct the assessment. Furthermore, VHA has not conducted any evaluations of the overall effectiveness of the key initiatives in meeting VHA’s system-wide nurse recruitment and retention goals. In its 2014 Interim Workforce and Succession Strategic Plan, VHA reported that its plan included recruiting highly skilled employees in mission-critical occupations, which include nurses, who are able to function at the top of the competency level, as well as retaining these employees as VHA develops a pipeline of qualified nurses that will take on more senior roles. In addition, VHA reported that it is challenged with ensuring it has the appropriate workforce to meet current and future needs that result from shortages and competition for certain health care positions, such as nurses. For example, 42 percent of VHA’s senior leadership, which includes senior-level nurses, is eligible for retirement in 2015, and this percentage will increase over the next 7 years. The strategic plan noted that VHA has several initiatives, such as the Education Debt Reduction Program, to address some of its recruitment challenges, but it does not discuss the effectiveness of these initiatives in meeting recruitment goals. VA’s annual report to Congress presents statistical information on some of VHA’s recruitment and retention initiatives, such as the number of nurses that received financial incentives in FY 2014 and the amount of financial incentives paid during that time, but does not provide information on the effectiveness of those initiatives in the recruitment and retention of nurses. 
VHA officials reported that they hold regular and ad hoc meetings for all offices that manage VHA’s nurse recruitment and retention initiatives to discuss a variety of topics, such as coordination and effectiveness. For example, the Office of Academic Affiliations holds ad hoc meetings with the Office of Nursing Services and Healthcare Talent Management to coordinate their initiatives related to recruitment and retention. In addition, Healthcare Talent Management holds quarterly meetings with the Office of Academic Affiliations and the Office of Nursing Services to share data, coordinate resources, and offer support for the other offices’ programs. Although these offices may meet to discuss the management of the initiatives, VHA officials reported no current plans to evaluate the overall effectiveness of the initiatives in meeting strategic goals. A VHA official noted that the lack of evaluations of the overall effectiveness of VHA’s initiatives is a gap in the organization’s oversight. This official said that the recruitment and retention initiatives for nurses are offered at the local medical center level and that VHA’s role has primarily been to provide consultative services to those facilities. VHA officials noted that some data are regularly maintained at the national level and that, although they are able to gather limited data on the initiatives from the medical centers, VHA needs to develop a process to evaluate its initiatives to provide better support. Oversight that includes evaluations of individual initiatives, if conducted, could provide VHA with data to identify any resource needs, such as training or administrative needs, and difficulties that medical centers are experiencing in offering the initiatives, such as the lack of adequate administrative support reported to us by medical centers in our review. 
A system-wide evaluation could help ensure that VHA’s recruitment and retention initiatives are effective in meeting departmental goals and that resources are effectively allocated across all VA medical centers. Evaluation results could also be useful if communicated to relevant stakeholders, such as medical centers, to inform them of any compliance issues or any operational changes that may be needed. Under federal internal control standards, relevant program information and guidance are needed throughout an agency to achieve all of its objectives, and should be communicated to management and others within the organization in a reliable form and within a time frame that enables them to carry out their organizational responsibilities, such as the implementation of a program or policy. Adequate numbers of qualified nurses are essential for VHA to meet its mission of providing quality and timely health care for veterans. As the number of veterans seeking health care increases and the demographics of that population continue to change, VHA faces challenges ensuring it has the appropriate nurse workforce needed to provide care, including more complex, specialized services. In addition, the Choice Act required VHA to add additional clinical staff, including nurses, to its workforce to increase access to care for veterans. VHA has a number of key initiatives to help medical centers recruit and retain nurses; however, challenges, including competition with the private sector for qualified and skilled nurses and the lack of sufficient administrative support, may limit their effectiveness. Furthermore, VHA’s limited oversight of its key nurse recruitment and retention initiatives hinders its ability to assess the effectiveness of these initiatives and make any needed adjustments to help ensure its nurse workforce is keeping pace with the health care needs of veterans. 
Because of its limited monitoring, VHA lacks assurance that its medical centers are offering recruitment and retention initiatives in accordance with the policies and guidance that it has developed. Further, limited evaluations of medical centers offering VHA’s initiatives have meant that VHA is unable to systematically identify problems or needed program changes to ensure that the initiatives are being offered efficiently and effectively, including determining whether medical centers have sufficient training resources to support its nurse recruitment and retention initiatives. Finally, without system-wide evaluations of its collective initiatives, VHA is unable to determine to what extent its nurse recruitment and retention initiatives are effective in meeting VHA policies and Choice Act provisions, or ultimately, whether VHA’s initiatives are sufficient to meet veterans’ health care needs. To help ensure the effective recruitment and retention of nurses across VA medical centers, we recommend the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following three actions: 1. Develop a periodic reporting process to help monitor VA medical center compliance with the policies and procedures for each of its key recruitment and retention initiatives; 2. Evaluate the adequacy of training resources provided to all nurse recruiters at VA medical centers to ensure that they have the tools and information to perform their duties efficiently and effectively; and 3. Conduct a system-wide evaluation of VHA’s key nurse recruitment and retention initiatives to determine the overall effectiveness of these initiatives, including any needed improvements, and communicate results and information in a timely manner to relevant stakeholders. We provided a draft of this report to VA for comment. In its written comments, reproduced in appendix III, VA generally agreed with our conclusions and concurred with our recommendations. 
In its comments, VA also provided information on workgroups it was planning to establish, as well as its plans for implementing each recommendation, with an estimated completion date of October 2017. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix II: Selected Characteristics of Veterans Health Administration’s (VHA) Key Nurse Recruitment and Retention Initiatives

[Table: annual VHA expenditures in dollars, with the number of participating nurses in parentheses, for each key education and training initiative (e.g., RN Transition to Practice; VA Nursing Academic Partnerships – Graduate Education) and each financial benefits and incentives initiative (e.g., recruitment, retention, and relocation incentives; Education Debt Reduction Program), FY 2010 through FY 2014. The table’s rows and columns could not be reliably reconstructed from the extracted text.]

In addition to the contact named above, Janina Austin, 
Assistant Director; Jennie Apter; Shana R. Deitch; Jacquelyn Hamilton; Kelli A. Jones; Vikki Porter; and Jessica L. Preston made key contributions to this report.

GAO and others have highlighted the need for an adequate and qualified nurse workforce to provide quality and timely care to veterans. VHA faces challenges such as increased competition for skilled clinicians in hard-to-fill occupations such as nurses. As GAO has previously reported, recruitment and retention are particularly difficult for nurses with advanced professional skills, knowledge, and experience, which is critical given veterans' needs for more complex specialized services. GAO was asked to provide information on the recruitment and retention of nurses within VHA. This report reviews (1) the initiatives VHA has to recruit and retain its nurse workforce and (2) the extent to which VHA oversees its nurse recruitment and retention initiatives. GAO reviewed documents and interviewed officials from VHA, four VA medical centers selected to reflect variation in factors such as nurse turnover, and regional offices for these medical centers. GAO used federal internal control standards to evaluate VHA's oversight. GAO also interviewed selected stakeholder organizations. The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) has multiple system-wide initiatives to recruit and retain its nurse workforce, but three of the four VA medical centers in GAO's review faced challenges offering them. VHA identified a number of key initiatives it offers to help medical centers recruit and retain nurses, which focus primarily on providing (1) education and training, and (2) financial benefits and incentives. VA medical centers generally have discretion in offering these initiatives. 
The four medical centers in GAO's review varied in the number of initiatives they offered, and three of these medical centers developed local recruitment and retention initiatives in addition to those offered by VHA. GAO also found that, while three of the four medical centers reported improvements in their ability to recruit and retain nurses through their offering of VHA's initiatives, they also reported challenges. The challenges included a lack of sufficient administrative support for medical centers, competition with private sector medical facilities, a reduced pool of nurses with advanced training in rural locations, and employee dissatisfaction. VHA's oversight of its key system-wide nurse recruitment and retention initiatives is limited. Specifically, GAO found that VHA conducts limited monitoring of medical centers' compliance with its initiatives. For example, in the past, VHA conducted site visits in response to a medical center reporting difficulty with implementation of one of its initiatives, and to assess compliance with program policies, but it is no longer conducting these visits. Consistent with federal internal control standards, monitoring should be ongoing and should identify performance gaps in a policy or procedure. With limited monitoring, VHA lacks assurance that its medical centers are complying with its nurse recruitment and retention initiatives, and that any problems are identified and resolved in a timely and appropriate manner. In addition, VHA has not conducted evaluations of the training resources provided to nurse recruiters at VA medical centers or of the overall effectiveness of the initiatives in meeting its nurse recruitment and retention goals, or whether any changes are needed. Consistent with federal internal control standards, measuring performance tracks progress towards program goals and objectives, and provides important information to make management decisions and resolve any problems or program weaknesses. 
For example, GAO found that VHA does not know whether medical centers have sufficient training to support its nurse recruitment and retention initiatives. In particular, there is currently no face-to-face training provided by VHA specifically for nurse recruiters, but there is regular training available to those assigned to a human resources office as part of training available to all human resources staff. Representatives from a national nursing organization reported that clinical nurse recruiters at VA medical centers often feel more unprepared for the position than those assigned to human resources offices, but no evaluation of this disparity or its effects has occurred. Without evaluations of its collective system-wide initiatives, VHA is unable to determine to what extent its nurse recruitment and retention initiatives are effective in meeting VHA policies and the Veterans Access, Choice, and Accountability Act provisions, or ultimately whether VHA has an adequate and qualified nurse workforce at its medical centers that is sufficient to meet veterans' health care needs. GAO recommends VA (1) develop a process to help monitor medical centers' compliance with its key nurse recruitment and retention initiatives; (2) evaluate the adequacy of training resources provided to nurse recruiters; and (3) conduct a system-wide evaluation of its key nurse recruitment and retention initiatives. VA concurred with the recommendations. |
Since January 2011, DHS has continued to update and strengthen its strategy for how the department plans to address our high-risk designation and resolve its management challenges. In January 2011, DHS provided us with its initial Integrated Strategy for High Risk Management, which summarized the department’s preliminary plans for addressing the high-risk area. The January 2011 strategy, which DHS later updated in June 2011 and December 2011, was generally responsive to the actions and outcomes we identified for the department to address this high-risk area. Specifically, in our March 2011 written response to DHS’s January 2011 update, we stated that the strategy generally identified multiple, specific actions and target completion time frames consistent with the outcomes we identified; designated senior officials to be responsible for implementing most actions; and included scorecards to depict, at a high level, the department’s views of its progress in addressing each high-risk area and a framework for monitoring implementation of corrective actions through, among other things, quarterly meetings between DHS and us. However, the January 2011 update generally did not discuss the root causes of problems. Further, while the strategy identified whether DHS believed it had the resources available to implement planned actions, it did not identify what the specific resource needs were or what additional resources may be needed, making it difficult to assess the extent to which DHS has the capacity to implement those actions. In June 2011, DHS updated its Integrated Strategy for High Risk Management. The update demonstrated the department’s continued leadership commitment to address the high-risk designation and represented continued progress. For example: DHS identified 10 root causes that cut across the four management functions and management integration. 
By identifying these root causes, the department better positioned itself to determine corrective actions for addressing the underlying problems that have affected its management implementation efforts, and to assess the extent to which progress made in implementing the corrective actions has mitigated those underlying problems. DHS organized its corrective actions into 16 key management initiatives (e.g., financial management controls, IT program governance, and procurement staffing model) to address its management challenges and the 31 actions and outcomes we identified. Identifying key management initiatives should help DHS prioritize its efforts and resources for addressing its root causes and management challenges, and provide a useful framework for monitoring the department’s implementation of the initiatives and associated corrective actions. However, elements of the update could be strengthened or clarified to better address our high-risk criteria and the actions and outcomes we previously identified, including (1) better defining the root causes of its management problems; (2) clarifying the resources available to implement corrective actions; (3) consistently reporting the progress of its corrective actions; and (4) more clearly and consistently reporting the progress of its key management initiatives. DHS provided its most recent update to its strategy in December 2011. Overall, we believe that the December update positions the department to address its management challenges and the implementation and transformation high-risk area. 
For example: DHS updated its initiatives—removing two initiatives from the management integration area and adding four new initiatives, including human resources information technology, management health assessment, strategic sourcing, and acquisition workforce development; DHS included, for the first time, ratings of the department’s progress addressing the 31 high-risk outcomes; and DHS enhanced its reporting and rating methodology for its key management initiatives. Specifically, DHS replaced a color-coded (green, yellow, or red) rating system used in previous updates with a new system for self-reporting progress. DHS now measures and reports its progress addressing the five criteria for removal from high risk in two ways. One way uses standard indicators for measuring progress and a pie graph for reporting such progress across all of its key management initiatives against the first four criteria—leadership commitment, capacity, corrective action plans, and monitoring. The second way uses specific performance measures unique to each initiative for measuring progress and a fuel-type gauge for reporting on the fifth criterion—demonstrated progress. According to DHS, the revised methodology, among other things, results in a more objective view of each initiative’s progress. However, the December 2011 update could be strengthened or clarified to better enable DHS and GAO to assess the department’s progress, in the following ways: More clearly and consistently report the resources available to implement corrective actions. DHS identified whether it had sufficient resources to implement most of the corrective actions. However, as we also reported to DHS regarding the January and June 2011 strategies, for many corrective actions DHS did not provide information on what the specific resource needs are or what additional resources may be needed to implement the corrective actions.
The absence of resource information makes it difficult to fully assess the extent to which DHS has the capacity to implement these actions, particularly within the time frames identified for the corrective actions. Consistently report on corrective actions. DHS provided information on the department’s rationale for eliminating and adding key management initiatives, but has not consistently provided such information for the corrective actions it established for each initiative. For example, the December strategy contained three new corrective actions for the IT program-governance initiative that were not in the June 2011 strategy, but did not include three corrective actions that had been in the June 2011 strategy. The December strategy did not consistently explain the department’s rationale for eliminating or adding corrective actions from the June strategy, such as whether the corrective actions were already completed, or if the corrective actions were no longer appropriate or feasible. Without consistently providing information on the basis for DHS’s decision to add or remove corrective actions, it is difficult for DHS and us to track the status and progress of the department’s efforts to fully implement its management initiatives. Establish measures and report on progress for all initiatives. DHS established a total of 58 measures to track its demonstrated progress in implementing the 18 initiatives included in the December 2011 strategy. While these measures provide additional insight into DHS’s self-reported progress and represent an important improvement from the June 2011 strategy, DHS has not yet established measures for one of its initiatives—the new management health assessment initiative—and did not report on its progress for more than 40 percent (24 of the 58) of the measures in the December 2011 strategy.
Without establishing measures and consistently reporting on their progress, neither DHS nor we can fully assess the department’s progress in implementing its initiatives. Stabilize its methodology for measuring progress. We believe that the enhanced methodology DHS established for assessing its progress in implementing its initiatives generally allows for a more objective assessment. However, the evolving nature of DHS’s methodology, which the department revised in the June 2011 strategy and again in the December strategy, makes it difficult to effectively monitor the department’s progress over time. By strengthening these four aspects, we believe the December 2011 strategy, if implemented and sustained, provides a path for DHS to address our high-risk designation. We will continue to closely monitor and assess DHS’s progress in addressing the high-risk designation and the department’s overall transformation efforts as part of our work for the 2013 high-risk update, which we plan to issue in January 2013. DHS has made progress addressing management challenges and achieving high-risk outcomes in some key areas. The Secretary and Deputy Secretary of Homeland Security, and other senior officials, have demonstrated commitment and top leadership support to address the department’s management challenges. As the following examples illustrate, DHS is making progress achieving the long-term goal of enhancing its management capabilities and building a more integrated department. In June 2011, we reported that, per departmental acquisition guidance, DHS’s Science and Technology directorate reviewed and approved test and evaluation documents and plans for programs undergoing testing, and conducted independent assessments for the programs that completed operational testing.
In October 2011, to enhance the department’s ability to oversee major acquisition programs, DHS realigned the acquisition management functions previously performed by two divisions within the Office of Chief Procurement Officer to establish the Office of Program Accountability and Risk Management (PARM). PARM, which is responsible for program governance and acquisition policy, serves as the Management Directorate’s executive office for program execution and works with DHS leadership to assess the health of major acquisitions and investments. To help with this effort, PARM is developing a database, known as the Decision Support Tool, intended to improve the flow of information from component program offices to the Management Directorate to support its governance efforts. DHS also included a new management initiative in its December 2011 update (strategic sourcing) to increase savings and improve acquisition efficiency by consolidating contracts departmentwide for the same kinds of products and services, and reported awarding 14 strategically sourced contracts in fiscal year 2011. We currently have ongoing work related to both of these areas that we will report on later this year. In February 2012, we reported that the DHS Chief Information Officer (CIO) and Chief Human Capital Officer were coordinating to streamline and consolidate the department’s human resources investments. Specifically, in 2010 and 2011, the DHS CIO conducted program and portfolio reviews of hundreds of IT investments and systems. DHS evaluated portfolios of investments within its components to avoid investing in systems that are duplicative or overlapping, and to identify and leverage investments across the department. 
DHS also consolidated (1) 6 personnel security–related systems into its departmentwide Integrated Security Management System—with an additional personnel security system planned for consolidation in 2012, and (2) two components’ portals into the Homeland Security Information Network, with plans to consolidate 12 additional portals before 2014. DHS has reduced the number of material weaknesses in internal controls from 18 at the department’s inception in 2003 to 5 in fiscal year 2011. In addition, in fiscal year 2010 DHS committed to the goal of receiving a qualified audit opinion on its consolidated balance sheet in fiscal year 2011 by, for example, remediating financial management issues at the U.S. Coast Guard (USCG). In fiscal year 2011, DHS achieved this goal by moving from a disclaimer of opinion to a qualified audit opinion on its balance sheet and statement of custodial activity for the first time since the department’s creation. In its December 2011 strategy, DHS reported plans to expand the audit to all financial statements in fiscal year 2012. DHS believes this will identify additional areas for corrective action and help it to obtain a clean audit opinion on all financial statements by September 2013, although there is no clear plan for how full auditability will be achieved. In February 2012, we reported that DHS consolidated five time-and-attendance systems into a departmentwide time-and-attendance system and plans to incorporate an additional component by June 2012. This consolidation effort is part of DHS’s broader human resources IT initiative. This initiative is intended to, among other things, (1) support the development and implementation of consistent and consolidated human resources IT systems across DHS, and (2) strengthen and unify the department’s ability to collect and share human resource information.
We also reported in February 2012 that DHS had initiated a Senior Executive Service Candidate Development Program in May 2011 to build its senior leadership pipeline within the department—consolidating what had been four individual leadership programs into a single DHS-wide program—and lowered its senior leadership vacancy rates from a peak of 25 percent in 2006 to 10 percent at the end of fiscal year 2011. In February 2011, we reported that the department put in place common policies, procedures, and systems within individual management functions, such as human capital, that help to integrate its component agencies. DHS has also demonstrated top leadership commitment by identifying roles and responsibilities at the departmental level for the key management initiatives it has included in the December 2011 strategy. Additionally, DHS has promoted accountability for management integration among department and component management chiefs by, among other things, having the department chiefs provide written objectives that explicitly reflect priorities and milestones for that management function as well as aligning the component chiefs’ individual performance plans to the department’s goals and objectives. In its December 2011 strategy, DHS presented detailed plans to address a number of management challenges. However, in many instances, DHS has considerable work ahead to fully implement these plans and address these challenges. Our prior work has identified challenges related to acquisition oversight, cost growth, and schedule delays, including departmental concerns about the accuracy of cost estimates for some of DHS’s major programs.
For example, in June 2010 we reported that over half of the programs we reviewed awarded contracts to initiate acquisition activities without component or department approval of documents essential to planning acquisitions, such as mission need statements outlining the specific functional capabilities required to accomplish DHS’s mission and objectives; operational requirements; and acquisition program baselines. Additionally, we reported that only a small number of DHS’s major acquisitions had validated cost estimates. Further, DHS reported in its December 2011 strategy that senior executives are not confident enough in the data to use the Decision Support Tool developed by PARM to help make acquisition decisions. However, DHS’s plans to improve the quality of the data in this database are limited. At this time, PARM only plans to check the data quality in preparation for key milestone meetings in the acquisition process. This could significantly diminish the Decision Support Tool’s value because users cannot confidently identify and take action to address problems meeting cost or schedule goals prior to program review meetings. DHS continues to face challenges in managing its IT acquisitions, ensuring proper implementation and departmentwide coordination, and implementing information security controls. For example, as we reported in 2011, DHS faces challenges fully defining key system investment and acquisition management policies and procedures for IT. Moreover, the extent to which DHS implemented these investment and acquisition management policies and practices in major IT programs has been inconsistent. We also reported that major IT acquisition programs were not subjected to executive-level acquisition and investment management reviews. As a result, major programs aimed at delivering important mission capabilities had not lived up to their capability, benefit, cost, and schedule expectations. 
DHS is currently pilot testing a new approach for overseeing and managing its IT acquisitions. We are currently reviewing this new governance approach and expect to report the results of our work later this year. Further, we previously reported on the need for federal agencies, including DHS, to improve implementation of information security controls, such as those for configuring desktop computers and wireless communication devices. DHS reports that, as of December 2011, it mostly addressed IT security. However, the DHS Office of Inspector General continues to report a material weakness in this area and identifies information security as a major management challenge facing the department. Due to material weaknesses in internal controls over financial reporting, DHS was unable to provide assurance that internal controls over financial reporting were operating effectively as of September 30, 2011. According to DHS, due to existing internal control weaknesses and focus on corrective actions, the audit opinion on internal controls over financial reporting will likely remain a disclaimer in fiscal year 2012. DHS also faces challenges in modernizing its financial systems. We previously reported that DHS twice attempted to implement an integrated departmentwide financial management system, but had not been able to consolidate its disparate systems. Specifically, in June 2007, we reported that DHS ended its Electronic Managing Enterprise Resources for Government Effectiveness and Efficiency effort after determining that the resulting financial management systems would not provide the expected system functionality and performance. In December 2009, we reported that the Transformation and Systems Consolidation program had been significantly delayed by bid protests and related litigation. In March 2011, DHS ended this program and reported that moving forward it would consider alternatives to meet revised requirements. 
In 2011, DHS decided to change its strategy for financial system modernization. Rather than implement a departmentwide integrated financial management system solution, DHS opted for a decentralized approach to financial management systems modernization at the component level. Specifically, DHS reported in its December 2011 strategy that it plans to replace financial management systems at the three components it has identified as most in need: the Federal Emergency Management Agency (FEMA), USCG, and Immigration and Customs Enforcement (ICE). As of February 2012, DHS officials stated that they first planned to modernize FEMA’s system, which would start using a federal shared service provider at the beginning of fiscal year 2015. DHS officials told us they had not yet identified the specific approach or necessary resources and time frames for implementing new systems at USCG and ICE. It is not clear whether DHS’s new, decentralized approach to financial system modernization will ensure that components’ financial management systems can generate reliable, useful, timely information for day-to-day decision making; enhance the department’s ability to comprehensively view financial information across DHS; and comply with related federal requirements at DHS and its components. We will continue to monitor DHS’s actions in this area. DHS continues to face challenges implementing some of its key human capital initiatives and functions. For example, the DHS CIO’s September 2011 assessment of the human resources IT program identified two risks that could have adverse effects on the cost and schedule of the program. First, if the program is unable to meet its established baseline schedules, there is a high probability of program breach and potential loss of funding due to lack of prioritization.
Second, if a thorough understanding of existing legacy applications and processes across the DHS components is not achieved, the new, consolidated system will not adequately replace existing functionality nor provide the stable operational functionality needed from the program. DHS has also struggled with low job satisfaction among its employees since its inception. For the 2011 Federal Employee Viewpoint Survey, DHS scored below the governmentwide average on the Office of Personnel Management’s Job Satisfaction Index and ranked 31st of 33 federal agencies on employee satisfaction, according to the Partnership for Public Service’s analysis of the survey results. At the subcommittee’s request, we currently have work underway evaluating the effectiveness of DHS’s plans and efforts to address its employee morale issues and expect to report our findings later this year. Further, in June 2011, DHS reported that it was developing component operational plans to implement its departmentwide workforce strategy and align the component plans with the goals, measures, and objectives of the strategy. However, in its December 2011 strategy, DHS reported that it had not finished providing feedback to components on their fiscal year 2011 plans. DHS needs to continue to demonstrate sustainable progress in integrating its management functions within and across the department and its components and take additional actions to further and more effectively integrate the department. Specifically, in its January 2011 high-risk strategy, DHS described plans to establish an Integrated Investment Life Cycle Model (IILCM) for managing investments across its components and management functions; strengthening integration within and across those functions; and ensuring mission needs drive investment decisions. 
This framework seeks to enhance DHS resource decision making and oversight by creating new department-level councils to identify priorities and capability gaps, revising how DHS components and lines of business manage acquisition programs, and developing a common framework for monitoring and assessing implementation of investment decisions. DHS reported in December 2011 that the IILCM initiative had made little progress since January 2011, though the department planned to begin using the IILCM by the end of September 2012. The department also indicated it had not determined the resource needs to accomplish any of the eight associated corrective actions it had identified for this initiative. While DHS has made progress, the department still faces considerable challenges. Going forward, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes. We will continue to monitor and assess DHS’s implementation and transformation efforts through our ongoing and planned work, including the 2013 high-risk update that we expect to issue in early 2013. Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Maria Strudwick, Assistant Director; Scott Behen, analyst-in-charge; Michael Laforge, Anjalique Lawrence, Gary Mountjoy, Sabine Paul, Nathan Tranquilli, and Katherine Trimble. Other contributors include David Alexander, Katherine Davis, Jan Montgomery, and Tomas Ramirez, Jr.
Key contributors for the previous work that this testimony is based on are listed within each individual product. Information Technology: Departments of Defense and Energy Need to Address Potentially Duplicative Investments. GAO-12-241. Washington, D.C.: February 17, 2012. DHS Human Capital: Senior Leadership Vacancy Rates Generally Declined, but Components’ Rates Varied. GAO-12-264. Washington, D.C.: February 10, 2012. Department of Homeland Security: Additional Actions Needed to Strengthen Strategic Planning and Management Functions. GAO-12-382T. Washington, D.C.: February 3, 2012. Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010. Department of Homeland Security: Assessments of Selected Complex Acquisitions. GAO-10-588SP. Washington, D.C.: June 30, 2010. Information Security: Agencies Need to Implement Federal Desktop Core Configuration Requirements. GAO-10-202. Washington, D.C.: March 12, 2010. Financial Management Systems: DHS Faces Challenges to Successfully Consolidating Its Existing Disparate Systems. GAO-10-76. Washington, D.C.: December 4, 2009. Department of Homeland Security: Actions Taken Toward Management Integration, but a Comprehensive Strategy Is Still Needed. GAO-10-131. Washington, D.C.: November 20, 2009. Homeland Security: Despite Progress, DHS Continues to Be Challenged in Managing Its Multi-Billion Dollar Annual Investment in Large-Scale Information Technology Systems. GAO-09-1002T. Washington, D.C.: September 15, 2009. Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: November 18, 2008.
Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: April 22, 2008. Homeland Security: Departmentwide Integrated Financial Management Systems Remain a Challenge. GAO-07-536. Washington, D.C.: June 21, 2007. Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity, version 1.1. GAO-04-394G. Washington, D.C.: March 2004. High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since 2003, GAO has designated the implementation and transformation of DHS as high risk because, among other things, DHS had to combine 22 agencies while ensuring no serious consequences for U.S. national and economic security. This high-risk area includes challenges in DHS's management functions (financial management, human capital, IT, and acquisitions); the effect of those challenges on implementing DHS's missions; and integrating the functions. In November 2000, GAO published criteria for removing areas from its high-risk list. In September 2010, GAO identified 31 actions and outcomes critical to addressing this high-risk area. This testimony addresses DHS's progress in (1) developing a strategy for addressing its high-risk designation and (2) achieving outcomes critical to addressing this high-risk area.
This statement is based on GAO products issued from June 2007 through February 2012, including selected updates. It also includes preliminary observations from GAO's ongoing work reviewing DHS's IT governance, for which GAO reviewed documents on IT governance and interviewed officials. The Department of Homeland Security (DHS) has updated and strengthened its strategy for how it plans to address GAO's high-risk designation and resolve the department's management challenges. In January 2011, DHS provided GAO with its Integrated Strategy for High Risk Management, which summarized the department's preliminary plans for addressing the high-risk area. GAO found that this strategy, which was later updated in June and December 2011, was generally responsive to the actions and outcomes needed to address GAO's high-risk designation. For example, the January 2011 strategy generally identified multiple, specific actions and target completion time frames consistent with the outcomes GAO identified. However, the strategy did not address the root causes of problems, among other things. In its June 2011 strategy, DHS, among other things, identified 10 root causes that cut across the management areas and their integration. GAO identified ways the strategy could be strengthened, including consistently reporting the progress of its initiatives and corrective actions. In its most recent update, DHS better positioned itself to address its management challenges. For example, for the first time, DHS included ratings of the department's progress addressing its high-risk outcomes. However, GAO believes that DHS could more consistently report on available resources and corrective actions, establish measures and report on progress made for all initiatives, and stabilize its methodology for measuring progress. These changes, if implemented and sustained, provide a path for DHS to address GAO's high-risk designation.
DHS has made progress, but has considerable work ahead to achieve actions and outcomes critical to addressing this high-risk area. Among other accomplishments, DHS realigned its acquisition management functions within a new office to assess the health of major acquisitions and investments; conducted program and portfolio reviews of hundreds of information technology (IT) investments; and reduced the number of material weaknesses in internal controls. DHS also demonstrated top leadership commitment by identifying roles and responsibilities for its key management initiatives. However, DHS has more work ahead to fully implement its plans and address its management challenges. For example, in June 2010 GAO reported that over half of the programs reviewed awarded contracts to initiate acquisition activities without component or department approval of essential planning documents. In addition, DHS faces challenges fully defining key system investment and acquisition management policies and procedures. Further, as of September 30, 2011, due to material weaknesses in internal controls over financial reporting, DHS was unable to provide assurance that these internal controls were operating effectively. In September 2011 we reported that DHS also continues to face challenges implementing some key human capital initiatives, such as its workforce strategy. DHS also needs to continue to demonstrate sustainable progress in integrating its management functions within and across the department and its components, including making progress with its model for managing investments across components and management functions. GAO will continue to assess DHS's efforts to address its high-risk designation and will report its findings on the department's progress in the high-risk update that it expects to issue in early 2013. This testimony contains no new recommendations.
GAO has made over 100 recommendations to DHS since 2003 to strengthen the department's management and integration efforts. DHS has implemented many of these recommendations and is in the process of implementing others.
Through the impartial and independent investigation of citizens’ complaints, federal ombudsmen help agencies be more responsive to the public, including people who believe that their concerns have not been dealt with fully or fairly through normal channels. Ombudsmen may recommend ways to resolve individual complaints or more systemic problems, and may help to informally resolve disagreements between the agency and the public. While there are no federal requirements or standards specific to the operation of federal ombudsman offices, the Administrative Conference of the United States recommended in 1990 that the President and the Congress support federal agency initiatives to create and fund an external ombudsman in agencies with significant interaction with the public. In addition, several professional organizations have published relevant standards of practice for ombudsmen. Both the recommendations of the Administrative Conference of the United States and the standards of practice adopted by various ombudsman associations incorporate the core principles of independence, impartiality (neutrality), and confidentiality. For example, the American Bar Association’s (ABA) standards define these characteristics as follows: Independence—An ombudsman must be and appear to be free from interference in the legitimate performance of duties and independent from control, limitation, or penalty by an officer of the appointing entity or a person who may be the subject of a complaint or inquiry. Impartiality—An ombudsman must conduct inquiries and investigations in an impartial manner, free from initial bias and conflicts of interest. Confidentiality—An ombudsman must not disclose and must not be required to disclose any information provided in confidence, except to address an imminent risk of serious harm. Records pertaining to a complaint, inquiry, or investigation must be confidential and not subject to disclosure outside the ombudsman’s office.
Relevant professional standards contain a variety of criteria for assessing an ombudsman’s independence, but in most instances, the underlying theme is that an ombudsman should have both actual and apparent independence from persons who may be the subject of a complaint or inquiry. According to ABA guidelines, for example, a key indicator of independence is whether anyone subject to the ombudsman’s jurisdiction can (1) control or limit the ombudsman’s performance of assigned duties, (2) eliminate the office, (3) remove the ombudsman for other than cause, or (4) reduce the office’s budget or resources for retaliatory purposes. Other factors identified in the ABA guidelines on independence include a budget funded at a level sufficient to carry out the ombudsman’s responsibilities; the ability to spend funds independent of any approving authority; and the power to appoint, supervise, and remove staff. The Ombudsman Association’s standards of practice define independence as functioning independent of line management; they advocate that the ombudsman report to the highest authority in the organization. According to the ABA’s recommended standards, “the ombudsman’s structural independence is the foundation upon which the ombudsman’s impartiality is built.” One aspect of the core principle of impartiality is fairness. According to an article published by the U.S. Ombudsman Association on the essential characteristics of an ombudsman, an ombudsman should provide any agency or person being criticized an opportunity to (1) know the nature of the criticism before it is made public and (2) provide a written response that will be published in whole or in summary in the ombudsman’s final report. In addition to the core principles, some associations also stress the need for accountability and a credible review process. Accountability is generally defined in terms of the publication of periodic reports that summarize the ombudsman’s findings and activities. 
Having a credible review process generally entails having the authority and the means, such as access to agency officials and records, to conduct an effective investigation. The ABA recommends that an ombudsman issue and publish periodic reports summarizing the findings and activities of the office to ensure its accountability to the public. Similarly, recommendations by the Administrative Conference of the United States regarding federal ombudsmen state that they should be required to submit periodic reports summarizing their activities, recommendations, and the relevant agency’s responses. Federal agencies face legal and practical constraints in implementing some aspects of these standards because the standards were not designed primarily with federal agency ombudsmen in mind. However, ombudsmen at the federal agencies we reviewed for our 2001 report reflected aspects of the standards. We examined the ombudsman function at four federal agencies in addition to EPA and found that three of them—the Federal Deposit Insurance Corporation, the Food and Drug Administration, and the Internal Revenue Service—had an independent office of the ombudsman that reported to the highest level in the agency, thus giving the ombudsmen structural independence. In addition, the ombudsmen at these three agencies had functional independence, including the authority to hire, supervise, discipline, and terminate their staff, consistent with the authority granted to other offices within their agencies. They also had control over their budget resources. The exception was the ombudsman at the Agency for Toxic Substances and Disease Registry, who did not have a separate office with staff or a separate budget. This ombudsman reported to the Assistant Administrator of the agency instead of the agency head. 
In our July 2001 report, we recommended, among other things, that EPA modify its organizational structure so that the function would be located outside of the Office of Solid Waste and Emergency Response, whose activities the national ombudsman was charged with reviewing. EPA addresses this recommendation through its placement of the national ombudsman within the OIG, where the national ombudsman will report to a newly-created position of Assistant Inspector General for Congressional and Public Liaison. OIG officials also told us that locating the national ombudsman function within the OIG offers the prospect of additional resources and enhanced investigative capability. According to the officials, the national ombudsman will likely have a small permanent staff but will also be able to access OIG staff members with expertise in specific subject matters, such as hazardous waste or water pollution, on an as-needed basis. Further, OIG officials anticipate that the ombudsman will adopt many of the office’s existing recordkeeping and reporting practices, which could help address the concerns we noted in our report about accountability and fairness to the parties subject to an ombudsman investigation. Despite these aspects of EPA’s reorganization, several issues merit further consideration. First and foremost is the question of intent in establishing an ombudsman function. The term “ombudsman,” as defined within the ombudsman community, carries with it certain expectations. The role of an ombudsman typically includes program operating responsibilities, such as helping to informally resolve program-related issues and mediating disagreements between the agency and the public. Assigning these responsibilities to an office within the OIG would conflict with statutory restrictions on the Inspector General’s activities. 
Specifically, the Inspector General Act, as amended, prohibits an agency from transferring any function, power, or duty involving program responsibilities to its OIG. However, if EPA omits these responsibilities from the position within the OIG, then it will not have established an “ombudsman” as the function is defined within the ombudsman community. In our April 2001 report, we noted that some federal experts in dispute resolution were concerned that among the growing number of federal ombudsman offices there are some individuals or activities described as “ombuds” or “ombuds offices” that do not generally conform to the standards of practice for ombudsmen. A related issue is that ombudsmen generally serve as a key focal point for interaction between the government, or a particular government agency, and the general public. By placing the national ombudsman function within its OIG, EPA appears to be altering the relationship between the function and the individuals that make inquiries or complaints. Ombudsmen typically see their role as being responsive to the public, without being an advocate. However, EPA’s reorganization signals a subtle change in emphasis: OIG officials see the ombudsman function as a source of information regarding the types of issues that the OIG should be investigating. Similarly, rather than issue reports to complainants, OIG officials expect that the national ombudsman’s reports will be addressed to the EPA Administrator, consistent with the reporting procedures for other OIG offices. The officials told us that their procedures for the national ombudsman function, which are still being developed, could provide for sending a copy of the final report or a summary of the investigation to the original complainant along with a separate cover letter when the report is issued to the Administrator. 
Based on the preliminary information available from EPA, the reorganization raises other issues regarding the consistency of the agency’s ombudsman function with relevant professional standards. For example, under EPA’s reorganization, the national ombudsman will not be able to exercise independent control over budget and staff resources, even within the general constraints that are faced by federal agencies. According to OIG officials, the national ombudsman will have input into the hiring, assignment, and supervision of staff, but overall authority for staff resources and the budget allocation rests with the Assistant Inspector General for Congressional and Public Liaison. OIG officials pointed out that the issue our July 2001 report raised about control over budget and staff resources was closely linked to the ombudsman’s placement within the Office of Solid Waste and Emergency Response. The officials believe that once the national ombudsman function was relocated to the OIG, the inability to control resources became much less significant as an obstacle to operational independence. They maintain that although the ombudsman is not an independent entity within the OIG, the position is independent by virtue of the OIG’s independence. Despite the OIG’s argument, we note that the national ombudsman will also lack authority to independently select and prioritize cases that warrant investigation. According to EPA, the Inspector General has the overall responsibility for the work performed by the OIG, and no single staff member—including the ombudsman—has the authority to select and prioritize his or her own caseload independent of all other needs. Decisions on whether complaints warrant a more detailed review will be made by the Assistant Inspector General for Congressional and Public Liaison in consultation with the national ombudsman and staff. 
EPA officials are currently reviewing the case files obtained from the former ombudsman, in part to determine the anticipated workload and an appropriate allocation of resources. According to OIG officials, the national ombudsman will have access to other OIG resources as needed, but EPA has not yet defined how decisions will be made regarding the assignment of these resources. Under the ABA guidelines, one measure of independence is a budget funded at a level sufficient to carry out the ombudsman’s responsibilities. However, if both the ombudsman’s budget and workload are outside his or her control, then the ombudsman would be unable to assure that the resources for implementing the function are adequate. Ombudsmen at other federal agencies must live within a budget and are subject to the same spending constraints as other offices within their agencies, but they can set their own priorities and decide how their funds will be spent. EPA has also not yet fully defined the role of its regional ombudsmen or the nature of their relationship with the national ombudsman in the OIG. EPA officials told us that the relationship between the national and regional ombudsmen is a “work in progress” and that the OIG will be developing procedures for when and how interactions will occur. Depending on how EPA ultimately defines the role of its regional ombudsmen, their continued lack of independence could remain an issue. In our July 2001 report, we concluded that the other duties assigned to the regional ombudsmen—primarily line management positions within the Superfund program—hamper their independence. Among other things, we cited guidance from The Ombudsman Association, which states that an ombudsman should serve “no additional role within an organization” because holding another position would compromise the ombudsman’s neutrality. 
According to our discussions with officials from the Office of Solid Waste and Emergency Response and the OIG, the investigative aspects of the ombudsman function will be assigned to the OIG, but it appears that the regional ombudsmen will respond to inquiries and have a role in informally resolving issues between the agency and the public before they escalate into complaints about how EPA operates. For the time being, EPA officials expect the regional ombudsmen to retain their line management positions. Finally, including the national ombudsman function within the Office of the Inspector General raises concerns about the effect on the OIG, even if EPA defines the ombudsman’s role in a way that avoids conflict with the Inspector General Act. By having the ombudsman function as a part of the OIG, the Inspector General could no longer independently audit and investigate that function, as is the case at other federal agencies where the ombudsman function and the OIG are separate entities. As we noted in a June 2001 report on certain activities of the OIG at the Department of Housing and Urban Development, under applicable government auditing standards the OIG cannot independently and impartially audit and investigate activities it is directly involved in. A related issue concerns situations in which the national ombudsman receives an inquiry or complaint about a matter that has already been investigated by the OIG. For example, OIG reports are typically transmitted to the Administrator after a review by the Inspector General. A process that requires the Inspector General to review an ombudsman-prepared report that is critical of, or could be construed as reflecting negatively on, previous OIG work could pose a conflict for the Inspector General. OIG officials are currently working on detailed procedures for the national ombudsman function, including criteria for opening, prioritizing, and closing cases, and will have to address this issue as part of their effort.
In conclusion, Mr. Chairman, we believe that several issues need to be considered in EPA’s reorganization of its ombudsman function. The first is perhaps the most fundamental—that is, the need to clarify the intent. We look forward to working with Members of the Subcommittee as you consider the best way of resolving these issues.

The Environmental Protection Agency’s (EPA) hazardous waste ombudsman was first established within the Office of Solid Waste and Emergency Response as a result of the 1984 amendments to the Resource Conservation and Recovery Act. Over time, EPA expanded the national ombudsman’s jurisdiction to include Superfund and other hazardous waste programs managed by the Office of Solid Waste and Emergency Response, and, by March 1996, EPA had designated ombudsmen in each of its 10 regional offices. Although the national ombudsman’s activities ranged from providing information to investigating the merits of complaints, in recent years, the ombudsman played an increasingly prominent role through his investigations of citizen complaints. Pending legislation would reauthorize an office of the ombudsman within EPA. In November 2001, the EPA Administrator announced that the national ombudsman would be relocated from the Office of Solid Waste and Emergency Response to the Office of Inspector General (OIG) and would address concerns across the spectrum of EPA programs. Although there are no federal requirements or standards specific to the operation of ombudsman offices, several professional organizations have published standards of practice relevant to ombudsmen who deal with inquiries from the public. If EPA intends to have an ombudsman function that is consistent with the way the position is typically defined in the ombudsman community, placing the national ombudsman within the OIG does not achieve that objective. The national ombudsman, as the position is currently envisioned, still will not be able to exercise independent control over the budget and staff resources needed to implement the function. Prior to the reorganization, the national ombudsman could independently determine which cases to pursue; however, according to EPA, the Inspector General has the overall responsibility for the work performed by the Office, and no single staff member has the authority to select and prioritize his or her own caseload independent of all other needs. Finally, placing the ombudsman in the OIG could also affect the activities of the Inspector General.
The Servicemembers Civil Relief Act (SCRA), enacted in 2003 as a modernized version of the Soldiers’ and Sailors’ Civil Relief Act of 1940, provides protections for servicemembers to help them meet the unique circumstances they face when serving their country. SCRA protects active-duty servicemembers in the event that their military service prevents them from meeting financial obligations. The act provides mortgage-related protections, including prohibiting foreclosures on active-duty servicemembers’ homes without court orders. In addition, the act provides protections for other types of debt, such as credit cards and vehicle and student loans. During periods of active military service, the act caps interest rates and fees at 6 percent for debt incurred prior to active duty. The interest rate cap covers full-time members of the Army, Navy, Air Force, Marine Corps, and Coast Guard, as well as Reservists, National Guard members, commissioned National Oceanic and Atmospheric Administration officers, and commissioned Public Health Service officers, while they are on active duty. For the purpose of the 6 percent interest rate cap, “interest” is defined to include service charges, renewal charges, fees, or any other charges with respect to an obligation or liability. Any such interest above the 6 percent rate—whether or not it has yet accrued—is permanently forgiven, and the monthly payment amount going forward must be reduced by the amount of interest forgiven allocable to the period for which payment is made. The interest rate cap has applied to private student loans since 1942 under a provision of the Soldiers’ and Sailors’ Civil Relief Act of 1940, and to federally owned and guaranteed loans since the 2008 reauthorization and amendment of the Higher Education Act of 1965 (HEA).
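The forgiveness and payment-reduction mechanics described above can be illustrated for a simple loan. The sketch below is a simplified assumption, not the statutory computation that servicers must perform: it treats one month's interest as balance × annual rate ÷ 12, and the balance, rate, and payment figures are hypothetical.

```python
# Illustrative sketch of the SCRA 6 percent cap for one month of a loan.
# The monthly-interest formula and all figures are simplifying assumptions;
# actual servicer computations follow the statute and the loan's terms.

SCRA_CAP = 0.06  # annual cap on "interest" (including fees) during active duty

def capped_month(balance, annual_rate, scheduled_payment):
    """Return (reduced_payment, forgiven_interest) for one month.

    Interest above 6 percent is permanently forgiven, and the monthly
    payment is reduced by the forgiven amount allocable to that month.
    """
    if annual_rate <= SCRA_CAP:
        return scheduled_payment, 0.0
    forgiven = balance * (annual_rate - SCRA_CAP) / 12
    return scheduled_payment - forgiven, forgiven

# Hypothetical example: $30,000 balance at 7.9 percent, $363 scheduled payment
payment, forgiven = capped_month(30_000, 0.079, 363.00)
print(round(forgiven, 2))  # interest forgiven this month: 47.5
print(round(payment, 2))   # reduced payment: 315.5
```

At a 7.9 percent rate, the 1.9 points above the cap on a $30,000 balance amount to $47.50 of forgiven interest for the month, and the payment drops by the same amount.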
During 2008-2016, interest rates for student loans owned by Education ranged between 3.4 and 7.9 percent, with rates above 6 percent in every year for at least some of these loans, while the most common interest rate for loans guaranteed, but not owned, by Education was 6.8 percent, ranging from 5.6 to 8.5 percent. Available data for private student loans show that fixed interest rates charged by 9 major lenders ranged from about 3 to 19 percent, with an average starting interest rate of 7.8 percent (see appendix I for additional information on student loan interest rates). Student loans play a crucial role in ensuring access to higher education for millions of students, and three types of student loans are or have been available through governmental and private sources: federal loans, commercial Federal Family Education Loans (FFEL), and private loans (see table 1). Student loan servicers, which may be banks that service loans they make or nonbanks under contract, handle billing and repayments, inform borrowers about repayment options, and respond to customer inquiries. As of December 31, 2015, Education had contracts with 10 companies to service federal student loans, and all of these companies also service commercial FFEL loans. No new FFEL loans have been made since 2010, but there is about $350.2 billion in outstanding FFEL loans. FFEL loans were originated by private or state entities, but Education has owned some of these loans since 2008; these are referred to as federal loans in this report. Those still owned by private and state entities are referred to as commercial FFEL loans in this report. The 10 companies that serviced federal loans as of December 2015 also each service some private student loans, which are student loans that are not owned or guaranteed by Education.
The SCRA interest rate cap applies to federal, commercial FFEL, and private student loans as long as at least one party to the loan is a servicemember with qualifying military service. Historically, as provided in SCRA, servicemembers had to contact their student loan servicer(s) and provide written notice and proof of an active-duty start date. The loan servicer would apply the cap to any loans that (a) were disbursed prior to active military service and (b) had an interest rate above 6 percent. However, Education has since implemented new contract requirements for federal student loan servicers to automatically identify eligible servicemembers by matching their borrower files against DOD’s SCRA website to obtain information on active military service. The match identifies servicemembers who are starting active duty, in order for servicers to apply the cap, and also those who come off active duty, in order for servicers to remove the cap. Since 2014, student loan servicers have had to match their files for borrowers with federal student loans against DOD’s SCRA website on a monthly basis to determine which borrowers have military service that qualifies them for the cap for federal student loans, and for what periods of time (we will refer to this process as the automatic eligibility check in this report). Education also issued regulations requiring the servicers of commercial FFEL loans to implement the automatic eligibility check as of July 1, 2016. As a result, written notice from servicemembers for the cap is no longer needed for federal or commercial FFEL loans, although servicemembers can still apply in writing. The automatic eligibility check is not required for private student loans (see table 2). The SCRA website can be used by anyone who inputs a borrower’s identifying information to match DOD’s information on a servicemember’s periods of active military service.
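In simplified form, the monthly matching process described above might look like the sketch below. The Borrower record, the monthly_check function, and the stubbed on_duty set are hypothetical stand-ins for a servicer's borrower files and the results of a DOD SCRA website batch match; they are not an actual servicer system or DOD interface.

```python
# Simplified, hypothetical sketch of a servicer's monthly automatic
# eligibility check. The on_duty set stands in for the results of a DOD
# SCRA website batch match; all names and fields are illustrative.

from dataclasses import dataclass

@dataclass
class Borrower:
    name: str
    borrower_id: str
    rate: float           # annual interest rate on the loan
    capped: bool = False  # whether the 6 percent cap is currently applied

def monthly_check(borrowers, on_duty):
    """Apply the cap to newly eligible borrowers and remove it from those
    no longer on active duty; return the actions a servicer would log."""
    actions = []
    for b in borrowers:
        if b.borrower_id in on_duty and not b.capped and b.rate > 0.06:
            b.capped = True
            actions.append((b.name, "apply cap; notify borrower; retain match record"))
        elif b.borrower_id not in on_duty and b.capped:
            b.capped = False
            actions.append((b.name, "remove cap"))
    return actions

borrowers = [Borrower("A", "111", rate=0.079),
             Borrower("B", "222", rate=0.055),
             Borrower("C", "333", rate=0.068, capped=True)]
print(monthly_check(borrowers, on_duty={"111"}))
```

In this sketch, borrower A (on active duty, rate above 6 percent) gets the cap applied, borrower B is on neither list (rate already at or below 6 percent and not on duty), and borrower C, who has come off active duty, has the cap removed.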
Users can submit one name for a match, or do batch matches for up to 250,000 names at one time, and the website should provide results within 24 hours, according to DOD. Servicers also have to notify, within 30 days, each servicemember who receives the cap and retain a record of the SCRA website match in the servicemember’s file (see fig. 1).

DOD, Education, CFPB, and DOJ each play a role in overseeing servicemembers’ student loans. DOD, Education, and DOJ also have roles with respect to SCRA’s interest rate cap for servicemembers’ student loans. Under SCRA, the DOD secretaries of each military branch have primary responsibility for ensuring that servicemembers receive notification of and information on their SCRA benefits, including for student loans. In addition, DOD maintains the SCRA website, which is considered the official source of servicemembers’ active-duty status for complying with SCRA. Education’s Office of Federal Student Aid administers federal student aid programs, including loans, and oversees the performance of contracted student loan servicers that handle billing and other administrative tasks. It also has some responsibility for monitoring commercial FFEL student loan servicers since the loans are federally guaranteed. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) established CFPB and provided it with authority to supervise certain student loan lenders and servicers with respect to the enumerated federal consumer financial laws as defined by the Dodd-Frank Act. Specific to student loans, CFPB also collects borrower complaints about private student loans and has an internal focus, through its Ombudsman and Office of Servicemember Affairs, on issues affecting private student loan borrowers, including servicemembers and their families.
However, because SCRA is not included in the enumerated federal consumer financial laws transferred to CFPB for oversight, the bureau does not have specific oversight of SCRA compliance for private student loan lenders and servicers. The four federal financial regulators for banks—Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, National Credit Union Administration, and the Office of the Comptroller of the Currency—oversee SCRA compliance in connection with student loan lending and servicing by banks, credit unions, institution-affiliated parties as defined in the Federal Deposit Insurance Act, and in certain instances third-party service providers under the Bank Service Company Act. DOJ has the specific authority to enforce SCRA and accepts referrals from other federal agencies, including CFPB, for possible SCRA violations. The number of servicemembers with federal student loans who received the SCRA interest rate cap increased dramatically when Education required servicers of these loans to use an automated process to identify qualifying servicemembers and apply the cap to their eligible loans. In particular, Education in 2014 required that federal loan servicers use the SCRA website each month to identify borrowers with qualifying periods of active military service and automatically apply the rate cap to their eligible federal loans, rather than requiring servicemembers to provide written notice of eligibility. When some federal loan servicers began using the SCRA website, they identified servicemembers with active military service start dates as early as 2008, when the SCRA rate cap first applied to federal loans. These servicers then applied the cap to servicemembers’ eligible federal loans retroactively for those qualifying periods of service. As a result, some servicemembers were eligible to receive the rate cap for months going forward and also retroactively for months in which they were previously eligible. 
Recently, in response to findings from a report issued by Education’s Office of Inspector General, the agency required federal loan servicers to identify all servicemembers with qualifying military service back to August 2008, including those that were not identified in the initial website checks, and automatically apply the interest rate cap retroactively to their eligible federal loans by December 2016. Data from the federal loan servicers show that the number of servicemembers who received the SCRA interest rate cap on their federal student loans for December 2015 was dramatically higher than for October 2008, with the increase driven by federal loan servicers using the SCRA website to identify qualifying servicemembers and automatically apply the cap to their eligible federal loans (see fig. 2). (For additional information on servicemembers who received the cap, see appendix II.) The sidebar illustrates the potential financial impact of the interest rate cap on an individual servicemember’s federal student loan. Some of the servicers had sought to implement the automatic website checks earlier. For instance, in June 2011, four trade associations representing student loan servicers jointly requested permission from Education to conduct SCRA website eligibility checks. Education denied the request, stating that the act specifically required servicemembers to apply in writing for the rate cap. Then DOJ, in its 2014 settlement with a federal loan servicer for potential violations, interpreted SCRA as allowing the servicer to use the SCRA website to confirm borrowers’ eligibility and to ascertain active-duty begin and end dates. Education officials said because SCRA is under DOJ’s enforcement jurisdiction, they deferred to DOJ’s decision to allow the servicers to use the SCRA website. 
The number of servicemembers with commercial FFEL loans who received the SCRA interest rate cap also increased after most federal student loan servicers began using the SCRA website to automatically identify borrowers with active military service qualifying them for the rate cap and apply it to their eligible commercial FFEL loans. Loan servicers were not required to use the website checks for commercial FFEL loans until July 1, 2016, and could instead require written notice from servicemembers until that date. But Education authorized and encouraged servicers to use the automatic website checks for commercial FFEL loans when it required them for federal student loans in 2014. When some federal loan servicers began using the SCRA website for borrowers with commercial FFEL loans, they identified servicemembers with active military service start dates as early as 2008, when the SCRA rate cap first applied to commercial FFEL loans, and applied the cap retroactively for those qualifying periods of service. As a result, some servicemembers were eligible to receive the rate cap on a commercial FFEL loan for months going forward and also retroactively for months in which they were previously eligible. According to data from the federal loan servicers, who serviced at least 77 percent of the outstanding balance of commercial FFEL loans as of December 31, 2015, the number of servicemembers who received the SCRA interest rate cap on their commercial FFEL loans for December 2015 was much higher than the number who received the cap for October 2008, with the increase mostly driven by federal loan servicers using the SCRA website to identify qualifying servicemembers and automatically apply the cap to their eligible commercial FFEL loans (see fig. 3). (For additional information on servicemembers who received the cap, see appendix II.) Data were not readily available on the remaining share of the commercial FFEL market serviced by non-federal loan servicers. 
In addition, the number of outstanding commercial FFEL loans decreased from 2008 to 2015, primarily due to the termination of Education’s authority to make or insure FFEL loans, effective July 1, 2010. Although the automatic eligibility check for the SCRA interest rate cap is not required for private student loans, the number of servicemembers who received the cap on at least one of their private loans increased for 6 lenders who have voluntarily implemented automatic SCRA website checks. The number of servicemembers who received the cap on at least one of their private student loans more than doubled, from 14,970 to 33,309, from September 2014 to March 2016, according to data from 6 lenders who together accounted for 65 percent of the private student loan market. According to the 6 lenders, the servicers they use for their private student loans all began voluntarily using the SCRA website to check for eligible servicemembers as of June 2015. Because limited data are available on loan servicing for the private student loan market not covered by the 6 lenders (about 35 percent), it is unclear how many servicemembers are receiving the cap for any private loans handled by the other servicers.

Consumer information provided to servicemembers about the SCRA interest rate cap is sometimes inaccurate. Under SCRA, the military branch secretaries have primary responsibility for ensuring that servicemembers receive pertinent information on their SCRA benefits. However, a key source of information for servicemembers, DOD’s Military OneSource website, incorrectly states that the SCRA cap does not apply to commercial FFEL student loans incurred prior to active military service. Further, DOD provided us with 8 documents it uses to inform servicemembers, military legal aid attorneys, officials who examine financial institutions, and others about servicemembers’ SCRA benefits.
Our review of these documents, prepared by various offices and branches within DOD, found that all contained significant inaccuracies. For example, 6 of the 8 documents specify that the servicemember must provide written notice to obtain the cap and provide military orders, and 3 of those 6 say that the request must be made within 180 days of the end of military service—requirements that no longer apply to federal or commercial FFEL loans. In addition, 2 of the 8 documents specifically state that the SCRA rate cap does not apply to student loans. Inaccurate information could be especially problematic for servicemembers with private student loans because, unlike for federal and commercial FFEL loans, servicers of private student loans are not required to automatically identify and apply the cap to borrowers’ eligible loans. In acknowledging these inaccuracies, DOD officials told us that the department needs to work across functional areas and military installations to ensure that DOD information on the SCRA rate cap is correct and uniform for all servicemembers. DOD recently combined many of its financial readiness initiatives under one office, which officials said they expect to result in more accurate and consistent SCRA materials across military installations, but the effect of this consolidation is not yet known. Federal internal control standards state that agencies should externally communicate the information necessary to achieve their objectives, but inaccurate information about the SCRA rate cap not only falls short of those standards, it may also prevent some servicemembers who have private student loans from receiving a benefit for which they are eligible. There is a risk that not all servicemembers who are eligible for the SCRA interest rate cap are receiving it on their private student loans for several reasons. 
First, the automatic eligibility check is not required for private loans, so the extent to which the rate cap is being applied to servicemembers’ private student loans is unknown. The private student loan market comprised approximately $102 billion in student loans, about 7.6 percent of the total student loan market, as of March 31, 2016. The 6 lenders of private student loans for whom we received data are voluntarily identifying eligible servicemembers and providing the rate cap through the automatic eligibility check. Our analysis of data from these 6 lenders showed that the number of borrowers receiving the SCRA rate cap on private student loans more than doubled once these servicers started using the automated process. These lenders’ loans represent about 65 percent of the private student loan market, or about $66 billion as of March 2016. Information on the remaining 35 percent of the private student loan market, or about $36 billion as of March 2016, was not available with regard to whether the lenders or servicers use the automated process. Second, representatives from the four veteran service organizations we spoke with told us that the SCRA interest rate cap is not well publicized and that many servicemembers are not aware of the benefit. One representative said that while implementation of the automatic eligibility checks has been relatively smooth for federal and commercial FFEL loans, it has caused confusion because of the different requirement to obtain the same rate cap for private loans. Although SCRA states that during active military service the interest rate of a servicemember’s debt shall not exceed 6 percent, those servicemembers with private student loans are at risk of not receiving the cap if they do not know how to obtain it and their loan servicer does not automatically apply it. There are also indications that private student loans may be more susceptible to SCRA rate cap violations than other types of loans. 
In DOJ’s 2014 settlement with one of the federal student loan servicers, nearly three-fourths of the settled borrower claims—about $45 million—went to borrowers with private student loans, while just one-fourth went to servicemembers with federal loans or commercial FFEL loans. In 2015, Education, the CFPB, and the Department of the Treasury issued a joint policy stating that common student loan servicing functions should be consistent for all student loans. However, without a requirement that loan servicers automatically identify eligible servicemembers and apply the interest rate cap for private student loans, servicemembers with private loans may not be receiving the interest rate cap for which they are eligible in a manner consistent with those who have federal and commercial FFEL loans. DOJ has proposed changes to SCRA’s statutory language to explicitly allow a servicemember’s oral rather than written notice of eligibility to obtain the interest rate cap. Additionally, the proposal would eliminate the provision that servicemembers provide the creditor a copy of their military orders. While these changes, if enacted, would remove the burden on servicemembers with private loans of having to provide written notice of their military status, servicemembers would still need to be aware of the cap and orally provide notice of their status to their loan servicer. Alternatively, requiring the automatic eligibility check for private student loans would better ensure that all servicemembers with private student loans receive a benefit for which they are eligible and that the interest rate cap is applied consistently across all types of student loans. Education, as part of its oversight of federal student loan servicers, uses a variety of mechanisms to oversee how servicers apply the SCRA interest rate cap for federal student loans and commercial FFEL loans held by eligible servicemembers.
However, Education does not systematically track SCRA-specific borrower complaints. Education has established several mechanisms to oversee whether federal student loan servicers are properly applying and removing the rate cap for federal and commercial FFEL student loans: Reviews of federal loan servicers: In February 2015, Education indicated it created a loan servicer monitoring group of 10 monitors and 1 supervisor to oversee servicing activity for federal student loans. This group reviews borrower accounts, and implementation of the SCRA rate cap is one of its core functions. The first set of reviews in 2015 found that 332 of the 335 borrower accounts reviewed had correctly processed rate caps under the automatic eligibility check, according to the review reports. Servicers did not correctly process 3 accounts: in 2 cases borrowers were not notified that their rate had been reduced, and in 1 case the rate reduction was applied retroactively to a date prior to when the loan was eligible for the cap. The second set of reviews was completed in April 2016, and the review reports show that all 300 reviewed borrower accounts had the rate cap properly applied, had correct beginning and end dates (if applicable), and had borrowers who were notified about application of the cap. Data reporting: Federal student loan servicers provide Education with monthly reports of all federal student loan borrowers who fall into a “Service Members category,” which includes servicemembers who are SCRA eligible. In addition, servicers report weekly to Education the date they applied the rate cap for each servicemember who is receiving the cap, the date the cap was ended, and the date the servicemember was notified of the cap. Education is using this information to track the number of borrowers who have military service qualifying them for the rate cap. 
Technical assistance for loan servicers: In early 2015, when working with loan servicers to implement the automatic eligibility check for federal loans, Education hosted conference calls with the servicers and reached out to DOD for clarification when needed. Representatives of 7 of the 10 federal student loan servicers said such calls were useful, and 2 noted that they also have a liaison at Education whom they can contact about questions or challenges. According to Education officials, the agency is continuing to provide technical assistance on the website match, such as working on updated guidance for servicers to help resolve non-matches due to marital or hyphenated last names. In addition, Education officials explained that staff listen to a sample of incoming and outgoing calls between servicers and servicemember borrowers to ensure accuracy and completeness of provided information. Feedback is provided to servicers through meetings and a written report, and servicers make required changes or provide employee training to correct deficiencies, Education officials said. Agency-wide oversight board: Education has an internal board composed of staff from different units and groups within Education’s Office of Federal Student Aid (e.g., Chief Financial Officer, Program Compliance Staff) that has met periodically since 2012 to coordinate loan servicer monitoring activities. The agenda for the October 2014 monthly meeting included implementation of the automatic eligibility check for federal loans. Program compliance reviews: Education included implementation of the revised automatic eligibility check in the annual compliance reviews of the federal student loan servicers scheduled for 2015 or 2016.
In the three review reports provided to GAO, one servicer did not properly apply the SCRA rate cap for a reservist who had been called to active duty military service. No other SCRA-related issues were identified for these three servicers in the reports. In addition to these efforts, Education tracks and resolves borrower complaints about federal and commercial FFEL student loans that it receives directly, but does not track the number of SCRA-specific complaints. Education officials said they generally receive complaints from borrowers who believe their issue was not satisfactorily resolved by a servicer or who are reluctant to submit a complaint to a servicer. Until July 1, 2016, Education recorded borrower complaint information through a dozen different systems within the agency, including the Ombudsman’s complaint tracking system and the Office of Program Compliance’s Complaint Tracking System. While borrowers could contact Education with SCRA-related complaints, officials said the agency did not specifically track the number of SCRA complaints it received across all its complaint systems. In March 2015, the President signed a Student Aid Bill of Rights, which required Education, by July 1, 2016, to develop and implement a simple process for borrowers to file complaints and for Education to track their resolution. The agency completed design of the new system and started using it in July 2016. While the system is designed to generate more robust, standardized information on borrower complaints, Education officials said it does not systematically track SCRA-related complaints. Under the new system, when complainants enter their contact information online, they can choose to self-identify as “a servicemember or veteran” using a check box, as appropriate, but this is an optional data field.
There are also two options in a drop-down menu categorizing complaint type that Education officials said are appropriate for SCRA-related complaints: “Military and Veteran Benefits” and “Loan Interest Rate,” but neither is specific to SCRA complaints. In addition, there is a blank text box in which complainants could opt to specify that their issue is SCRA related. Education officials said they could determine the volume of SCRA complaints by running a report on complaints with “SCRA” in the text box. However, this approach relies on borrowers to enter SCRA in the text box, and not all servicemembers may know how to refer to the rate cap benefit by its actual name or acronym. While Education has taken steps to improve and streamline how it tracks borrower complaints, without a way to systematically identify SCRA-related complaints, Education will not be able to evaluate the effectiveness of its efforts to oversee these benefits or to use this information to improve service to borrowers who are servicemembers. For example, the agency will not know if the frequency of SCRA complaints for commercial FFEL loans decreased as servicers implemented the automatic eligibility check for these loans. In addition, federal internal control standards state that an agency’s management should analyze and discuss information related to achieving agency goals. One of Education’s strategic objectives is to provide superior service and information to borrowers, and not having information on the extent of SCRA-related complaints can hinder the agency’s ability to provide this level of customer service. In addition, Education has identified having a unified borrower complaint tracking system as a key mechanism to enhance oversight and customer service. However, Education will not be able to fully respond to servicemembers’ SCRA-related issues without knowing the details or frequency of these types of complaints.
The current roles of CFPB, the four federal financial regulators, and DOJ with regard to SCRA compliance result in an oversight gap because no agency is currently authorized to routinely oversee SCRA compliance for private student loans made or serviced by nonbank lenders and servicers (see fig. 4). By routine oversight, we are referring to onsite reviews that specifically look for instances of noncompliance with SCRA by nonbank private student loan lenders and servicers. Nonbank lenders include institutions of higher education and private companies that are not banks. According to CFPB, the nonbank student loan servicers include the seven largest student loan servicers. While the proportion of private student loans that are made or serviced by nonbanks is unknown, as of March 2016, the total private student loan market had an outstanding balance of about $102 billion, and available data show that interest rates on such loans can range from about 3 to 19 percent. While Education routinely monitors the application of the SCRA cap for federal and commercial FFEL loans, it does not do so for any private loans serviced by these companies. As part of its oversight responsibility, CFPB collects borrowers’ student loan complaints and reports on issues facing private student loan borrowers, including military borrowers, through its Ombudsman and Office of Servicemember Affairs. CFPB also oversees the student loan lending and servicing operations of large banks and their affiliates (large bank lenders and servicers), nonbanks that make private student loans (nonbank lenders), and certain large nonbank student loan servicers (nonbank servicers) for compliance with certain federal consumer financial laws, through onsite examinations.
However, because SCRA is not one of the federal consumer financial laws whose oversight was transferred to CFPB, the bureau does not have the authority to routinely oversee SCRA compliance by student loan lenders and servicers, and does not inspect SCRA compliance during its onsite reviews. The four federal financial regulators—the Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, National Credit Union Administration, and Office of the Comptroller of the Currency—have broad authority to review the banking activities of regulated entities—insured depository institutions, credit unions, and their affiliates—including compliance with certain applicable laws, such as SCRA. According to agency officials, they also have enforcement authority over institution-affiliated parties and, in certain instances, over third-party service providers, including reviewing compliance with the SCRA rate cap for student loans. However, the regulators do not have the authority to routinely oversee whether nonbank student loan lenders or servicers comply with the SCRA rate cap for private student loans. DOJ has specific authority to enforce SCRA based on referrals of possible violations, but does not conduct onsite reviews of student loan lenders or servicers to check for SCRA compliance, including lenders and servicers of private student loans. According to DOJ officials, they ask servicemembers to first go through DOD’s military legal assistance system to resolve SCRA complaints, including those concerning private student loans. If the military system is unable to resolve the complaint, it is referred to DOJ, which may open a formal investigation and/or file a lawsuit. However, referrals from DOD would only capture cases in which a servicemember knew about the rate cap for private student loans and believed that it had been erroneously denied.
Currently, because of the statutorily defined roles of each agency, no agency can routinely oversee SCRA compliance by certain nonbank private student loan lenders and servicers by, for example, conducting onsite file reviews. Because of this, there is a risk that SCRA violations for private student loans will go undetected, potentially leaving some servicemembers without a benefit for which they are eligible. While CFPB has written agreements in place with the four federal financial regulators and with DOJ for interagency coordination related to oversight, these agreements do not provide for routine SCRA oversight for private loans connected with nonbank lenders and servicers. This may be because none of these agencies currently has the authority to routinely oversee SCRA compliance at nonbank private student loan lenders and servicers. Because CFPB, the federal financial regulators, and DOJ each have a role in the oversight of either private student loans or the SCRA rate cap, any of these agencies could potentially be in a position to assume responsibility for the oversight of SCRA compliance by nonbank private student loan lenders and servicers. However, additional statutory changes may be required to give these, or any other, agencies the authority to conduct routine oversight of SCRA compliance at nonbank private student loan lenders and servicers. CFPB and DOJ could communicate this need to Congress. In prior work, we developed a framework for crafting and assessing an effective financial regulatory system, which stated that such systems should be appropriately comprehensive and cover all activities that pose risks to consumer protection, including closing any gaps in oversight.
The lack of routine oversight of nonbank private student loan lenders and servicers related to SCRA compliance increases the risk that some servicemembers will not receive a benefit for which they are eligible, even though SCRA was designed to provide financial protection for all active-duty servicemembers. The number of servicemembers receiving the SCRA interest rate cap for their federal and commercial FFEL student loans has greatly increased since student loan servicers began using an automatic eligibility check to identify those who are eligible. Nonetheless, some servicemembers continue to face challenges in obtaining the cap. For example, they may receive inaccurate SCRA information from DOD. In addition, if servicemembers have private student loans, their loan servicer is not required to use the automatic eligibility check to identify them. As a result, servicemembers are at risk of not always getting the rate cap benefit for which they are eligible. If DOJ were to update its proposed SCRA changes by requiring private loan servicers to use the automatic eligibility check to identify eligible borrowers, this could lead to legislative action that would provide consistent treatment for all eligible servicemembers regardless of type of student loan. Education’s new borrower complaint system simplifies the complaint process for consumers, but lacks the ability to track SCRA complaints systematically. Without a systematic way to track complaints about the rate cap, Education will not be certain whether servicemembers continue to experience problems, making it difficult for Education to meet its strategic goal of providing superior service. Oversight of compliance with SCRA is also dispersed across multiple agencies, each of which has limitations on its authority. Consequently, there is no routine oversight of SCRA compliance for nonbank private student loan lenders and servicers.
The resulting lack of routine oversight for nonbank lenders and servicers increases the risk that SCRA violations for private student loans will go undetected and that some servicemembers may not benefit from the rate cap for which they are eligible. 1. To help ensure quality information is conveyed to servicemembers about how the Servicemembers Civil Relief Act (SCRA) interest rate cap applies to student loans, we recommend the Secretary of Defense direct the secretaries of each service branch, and work with other secretaries as appropriate, to ensure that all information about the SCRA interest rate cap for student loans is accurate when provided to servicemembers and to those who work with servicemembers to help them obtain SCRA benefits, including information contained in outreach materials. 2. To ensure that all eligible servicemembers with student loans receive the SCRA interest rate cap, we recommend the Attorney General direct the Department of Justice to consider modifying its proposed changes to SCRA to require use of the automatic eligibility check for private student loans. 3. To enhance customer service, we recommend the Secretary of Education direct the Office of Federal Student Aid to identify ways to modify the data collected in its unified borrower complaint system to allow the agency to more precisely identify and analyze complaints specifically about the SCRA interest rate cap. 4. To better ensure that servicemembers with private student loans benefit from the SCRA interest rate cap, we recommend that the Director of the Consumer Financial Protection Bureau and the Attorney General of the Department of Justice coordinate with each other, and with the four federal financial regulators, as appropriate, to determine the best way to ensure routine oversight of SCRA compliance for all nonbank private student loan lenders and servicers. 
If CFPB and DOJ determine that additional statutory authority is needed to facilitate such oversight, CFPB and DOJ should develop a legislative proposal for Congress. We provided a draft copy of this report to DOD, Education, CFPB, DOJ, and the four federal financial regulators for review and comment. We also provided relevant report sections to the federal student loan servicers and to the private-sector company representing the 6 private lenders for technical comments. DOD’s comments are reproduced in appendix III, Education’s in appendix IV, CFPB’s in appendix V, and DOJ’s in appendix VI. Of the four federal financial regulators, the National Credit Union Administration provided formal comments, which are reproduced in appendix VII, while the other three regulators provided technical comments. We incorporated technical comments in the report as appropriate. In its written comments, DOD disagreed with our recommendation, saying it was unnecessary because the department was already providing accurate information about the SCRA interest rate cap for student loans. DOD said that information provided in 6 of the 8 documents GAO reviewed is accurately based on statute, whereas Education’s updated requirement to automatically apply the cap is based on policy, which could change in the future. Moreover, the automated process applies only to federal and commercial FFEL student loans, in contrast to other types of debt. DOD said that providing information based on statute rather than policy would cause less confusion and was a better approach than what we recommend. With regard to Education’s automated process, we note that Education formalized this approach through federal regulations that became effective as of July 2016, which legally require servicers to use this process for all federal and commercial FFEL loans.
In addition, DOD said it was unable to verify whether DOD’s Military OneSource website still inaccurately states that the SCRA rate cap does not apply to commercial FFEL loans. However, when we searched the website using the term “SCRA,” we still found this inaccuracy as of October 26, 2016. DOD said it would look into a means of verifying website information, but in the meantime, it is satisfied that its training provides correct information. Given that Military OneSource is a key source of information for servicemembers and that, as we note in this report, 2 of the 8 documents DOD provided state that the SCRA rate cap does not apply to student loans, we continue to believe that servicemembers are not always receiving accurate and up-to-date information about how the SCRA rate cap applies to student loans. In its written comments, Education said it is committed to accurately tracking the types of complaints it receives. While Education believes its new feedback system and monitoring efforts allow it to identify SCRA-related issues by conducting keyword searches using a variety of terms, the agency said that it will respond to GAO’s recommendation by creating a complaint subcategory specifically for SCRA under the “Military and Veterans Benefits” category. Education added that it had not previously created a specific complaint category for SCRA given that military members may be unfamiliar with the term. We share Education’s concern about ensuring complaints are captured regardless of whether the servicemember has knowledge of the SCRA term. However, as we point out in this report, one of the challenges with Education’s keyword search process is that it may miss relevant loan complaints in which SCRA is not specifically mentioned. In implementing our recommendation by creating a subcategory for SCRA complaints, we encourage Education to use simple language to describe what an SCRA complaint is, rather than rely on the servicemember’s knowledge of the acronym.
We believe doing so would help servicemembers note concerns related to their student loan interest charges and active duty status, and allow Education to appropriately capture this information and resolve any issues. In its written comments, CFPB did not specifically agree or disagree with our recommendation to provide oversight of SCRA compliance among nonbank private lenders and servicers, but acknowledged that it shares GAO’s interest in maximizing SCRA protections. CFPB said that while it will continue its strong collaboration with relevant federal agencies, existing interagency agreements could not be used to coordinate oversight of SCRA compliance because these agreements would not create the statutory authority CFPB would need to oversee SCRA. Based on CFPB’s comments, we modified the recommendation by removing the suggestion that oversight could be accomplished by updating interagency coordination agreements. Although CFPB does not currently have authority over nonbank lenders and servicers, the bureau said it has other tools at its disposal, including sharing information with other agencies about SCRA-related complaints or possible violations. We believe, however, that referring complaints does not take the place of regular oversight and that CFPB and DOJ can take additional steps toward that end. We therefore encourage both agencies to work together to determine the best way to ensure routine oversight of SCRA compliance for all nonbank private student loan lenders and servicers, and to consider developing a legislative proposal seeking additional statutory authority, as needed, should they determine that step to be necessary. In its written comments, DOJ neither agreed nor disagreed with our second recommendation and agreed with our fourth recommendation. 
With respect to requiring use of the automatic eligibility check for private student loans, DOJ said that its current package of proposed legislative changes provides benefits to servicemembers with all kinds of loans, including private student loans. Rather than requiring servicemembers to submit written notice and a copy of military orders, they would need only give oral or written notice of eligibility for the cap to their creditors. However, as we stated in the report, servicemembers with private student loans would still need to be aware of the rate cap in order to give notice, whether written or oral. Therefore, we encourage DOJ to consider updating its current proposal to require use of the automatic eligibility check by all student loan lenders and servicers. Not only would this ensure that servicemembers with private student loans receive a benefit for which they are eligible, but also that the interest rate cap is applied consistently across all types of student loans. With respect to coordinating oversight with CFPB and the four federal financial regulators, as appropriate, DOJ agreed with this recommendation and believes the agency is in full compliance; therefore, it believes the recommendation should be closed. DOJ said the agency already coordinates extensively with CFPB and the financial regulators concerning SCRA compliance, through such mechanisms as referrals from CFPB for any SCRA-related violations and access to its consumer complaint database, and that it will continue to build upon them. While these mechanisms are commendable, we believe they do not constitute routine oversight of nonbank private student loan lenders and servicers who are not affiliated with a depository institution. Therefore, we cannot close the recommendation as DOJ suggests. We believe that additional interagency coordination, including working with CFPB to seek additional statutory authority, as needed, is necessary to ensure routine oversight of SCRA compliance.
In its written comments on our fourth recommendation, the National Credit Union Administration (NCUA), one of the four federal financial regulators, agreed that additional interagency coordination around SCRA compliance for private student loans would be useful, especially for certain third-party service providers who may lend money for or service private student loans. NCUA noted that it does not currently have authority to provide routine oversight of SCRA compliance by these entities. In its technical comments, the Federal Deposit Insurance Corporation proposed that GAO delete the fourth recommendation. In its view, oversight of nonbank private student loan lenders and servicers cannot be accomplished by updating existing interagency agreements because the agency lacks statutory authority to oversee these entities. As noted earlier, we modified the recommendation accordingly to address a similar comment received from CFPB. We underscore the need for CFPB and DOJ to work together to ensure the routine oversight of SCRA compliance for all nonbank student loan lenders and servicers, which they may determine requires them to seek additional statutory authority, to ensure that all eligible servicemembers receive the interest rate cap for their private student loans. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Education, the Director of the Consumer Financial Protection Bureau, the Attorney General of the United States, the Chairman of the Federal Deposit Insurance Corporation, the Chair of the Board of Governors of the Federal Reserve System, the Chairman of the National Credit Union Administration, the Comptroller of the Currency, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. This report examines (1) how many servicemembers have received the Servicemembers Civil Relief Act (SCRA) interest rate cap for their student loans, (2) what challenges servicemembers face in obtaining the cap, and (3) to what extent federal agencies oversee implementation of the cap. We used multiple methodologies to conduct this study. To obtain information on servicemembers receiving the SCRA cap, we used data from the 10 federal student loan servicers, the Department of Education (Education), and 6 private student lenders. To identify challenges servicemembers face obtaining the cap and assess the extent to which federal agencies oversee implementation of the rate cap, we interviewed officials and reviewed documentation from the Department of Defense (DOD), Education, the Consumer Financial Protection Bureau (CFPB), and the Department of Justice (DOJ). We also interviewed representatives of the 10 student loan servicers that are contracted by Education to service federal student loans, of four advocacy groups that represent servicemembers, and of one trade group representing servicers. In addition, we reviewed relevant federal laws and regulations. We used the above methodologies and a review of federal internal control standards and standards for assessing financial regulatory systems, as well as Education’s strategic objectives, to determine whether the information provided to servicemembers about the SCRA rate cap is accurate, whether the challenges faced by servicemembers are different for those with private student loans, whether Education tracks SCRA-specific complaints, and whether there are any gaps in the oversight of the SCRA interest rate cap for student loans.
We conducted this audit from May 2015 through November 2016, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To describe how many servicemembers received the SCRA interest rate cap on their student loans, we analyzed record-level data from the 10 federal student loan servicers—the companies contracted to service loans owned by Education—on borrowers with federal loans or commercial Federal Family Education Loans (FFEL) for the period October 2008 (the beginning of the first full fiscal year that SCRA was applied to student loans owned by Education) through December 2015 (the most recent available data). The 10 federal loan servicers as of December 31, 2015, were: Cornerstone, EdFinancial, Granite State, Great Lakes, Missouri Higher Education Loan Authority, Navient, Nelnet, Oklahoma Student Loan Authority Servicing, Pennsylvania Higher Education Assistance Agency/FedLoan, and the Vermont Student Assistance Corporation Federal Loans. Specifically, for borrowers who received the cap on at least one of their loans, we analyzed data provided by the 10 loan servicers to determine (1) whether the cap was applied on the basis of a written notification of eligibility from the servicemember or on the automatic eligibility check; (2) the type of loan (i.e., federal or commercial FFEL); (3) the duration of the cap; (4) loan interest rates prior to borrowers receiving the cap; and (5) loan disbursement amounts. We also used these analyses to examine how the SCRA interest rate cap may affect total loan costs for servicemembers. 
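The effect of the cap on total loan costs can be approximated with a simple-interest calculation. The sketch below is an illustration only, with hypothetical figures; actual servicer accrual and capitalization conventions differ and were not part of this calculation.

```python
# Hypothetical estimate of interest avoided under the SCRA cap, using
# simple daily accrual; actual servicer accrual and capitalization
# conventions differ, and the figures here are illustrative only.
def interest_saved(balance: float, original_rate_pct: float,
                   days_capped: int, cap_pct: float = 6.0) -> float:
    """Dollars of interest avoided while the rate is capped."""
    excess = max(original_rate_pct - cap_pct, 0.0) / 100.0
    return balance * excess * (days_capped / 365.0)

# A $30,000 balance at 9 percent, capped for one year of active duty:
# 3 percentage points of excess interest on $30,000 is about $900.
print(round(interest_saved(30_000, 9.0, 365), 2))  # 900.0
print(interest_saved(30_000, 5.0, 365))            # 0.0 (rate below cap)
```

The second call shows why the cap matters only for higher-rate loans: when the original rate is already at or below 6 percent, the estimated savings are zero.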
We also reviewed the interest rates for federal, commercial FFEL, and private student loans from 2008 to 2016 to confirm that interest rates exceeded 6 percent for each type of loan during at least some of the period covered by our review (see tables 3 and 4). We reviewed aggregate data from Education’s National Student Loan Data System on the number of borrowers with federal loans who were receiving the SCRA cap as of October 1, 2014, October 1, 2015, and December 31, 2015, and compared these data to the data we received from the 10 federal loan servicers. In addition, we reviewed aggregate data on the total number of federal and commercial FFEL loans open each year, from fiscal year 2009 through the first quarter of fiscal year 2016, and the outstanding balance of commercial FFEL loans as of December 31, 2015, to describe the size of the commercial FFEL loan market. We also reviewed information provided by Education on loan transfer rates between servicers to determine the extent to which loan transfers occur. Because some loans were transferred between servicers during our review’s time frame, our numbers may include duplicate counts for borrowers who had the cap applied by more than one servicer. According to data from Education’s National Student Loan Data System, 2 percent of federal loans were transferred in fiscal year 2009 and 5 percent were transferred in fiscal year 2015. In addition, 12 percent of commercial FFEL loans were transferred between servicers in fiscal year 2009 and 3 percent were transferred in fiscal year 2015. With regard to private student loans, we reviewed aggregate data from 6 private student lenders, which together accounted for 65 percent of outstanding private student loan debt as of March 31, 2016, via MeasureOne, a private-sector company representing the lenders. The 6 lenders are: Citizens Bank, N.A.; Discover Bank; Navient; PNC Bank, N.A.; Sallie Mae Bank; and Wells Fargo Bank, N.A.
Specifically, we reviewed aggregate data on the number of private student loans, borrowers, and outstanding loan balances serviced by these servicers overall, and for those receiving the SCRA cap, as of September 30, 2014, September 30, 2015, December 31, 2015, and March 31, 2016 (the most recent available data). Because 2014 was when Education required that federal loan servicers use the automatic eligibility checks for federal student loans on a monthly basis, we collected data from the end of fiscal years 2014 and 2015, and for each quarter after the end of 2015, from the private loan servicers to see if the number of borrowers receiving the cap for private loans was also increasing over this period. We also asked these servicers whether they implemented the automatic eligibility check for the private loans they service over this time frame, and if so, when they first began using it and how often they use it to identify eligible servicemembers. We determined that data from each of these sources were sufficiently reliable for the purposes of this report by electronically testing the data for missing data, outliers, and errors; by reviewing existing information about the data and the systems that produced them; and by interviewing knowledgeable loan servicer and Education officials, as appropriate. In the case of the private student loan data, we sent data reliability questions for the 6 lenders to MeasureOne, which forwarded them to the lenders. MeasureOne then collected the lenders' written responses and forwarded them to us. To learn how the 10 federal student loan servicers identify and work with eligible servicemembers to apply the interest rate cap and about any related challenges, we conducted semi-structured interviews between October 2015 and January 2016 with representatives of the federal student loan servicers, who together serviced all federal student loans as of December 31, 2015.
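The electronic testing for missing data, outliers, and errors described above can be sketched as simple record-level checks. The field names, sample values, and plausibility thresholds below are assumptions for illustration, not the servicers' actual data layout or GAO's actual test criteria.

```python
# Hypothetical borrower records; field names and values are illustrative.
records = [
    {"borrower_id": "A1", "rate_before_cap": 0.068, "cap_applied": True},
    {"borrower_id": "A2", "rate_before_cap": None,  "cap_applied": True},
    {"borrower_id": "A3", "rate_before_cap": 0.45,  "cap_applied": True},
]

# Missing-data check: pre-cap interest rates that were never reported.
missing = [r for r in records if r["rate_before_cap"] is None]

# Outlier check: reported rates outside an assumed plausible range
# (here, between 0 and 30 percent) are flagged for follow-up.
outliers = [r for r in records
            if r["rate_before_cap"] is not None
            and not (0.0 < r["rate_before_cap"] <= 0.30)]
```

Records flagged by such checks would then be resolved with the data provider, consistent with the follow-up interviews described above.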
We also interviewed an official with the National Council of Higher Education Resources—one of four trade associations whose membership includes student loan servicing companies—about its 2011 attempt to obtain permission from Education to use the SCRA website to obtain active-duty dates. This group also participates in the Student Loan Ombudsman's Caucus. To identify challenges servicemembers face obtaining the cap, we spoke with officials from DOD, Education, and CFPB, and with representatives of four advocacy groups, selected to represent both currently active servicemembers and veterans based on suggestions from agency officials and experts in this area: the American Legion, Military Officers Association of America, Student Veterans of America, and Military.com. We also interviewed Education, CFPB, and DOJ officials to clarify the role of each agency with respect to overseeing implementation of the SCRA interest rate cap for student loans. To determine how the federal loan servicers are expected to apply the cap for eligible borrowers and how Education, CFPB, and DOJ oversee implementation of the SCRA interest rate cap, we reviewed relevant federal laws and regulations, and policies, procedures, and guidance for the cap, as well as Education's contracts with and monitoring plans for the federal student loan servicers. To learn about the challenges servicemembers face in obtaining the SCRA cap, we reviewed complaints submitted to CFPB by servicemembers and reports from CFPB that discuss such challenges. To identify possible challenges servicemembers encounter concerning the cap, we also reviewed relevant agency publications and websites, including materials used by DOD and Education to inform servicemembers about the cap.
Finally, we reviewed documentation, such as memoranda of understanding, related to interagency coordination concerning the cap, including the Joint Statement of Principles on Student Loan Servicing issued jointly by Education, CFPB, and the Department of the Treasury in September 2015.

In addition to the contact mentioned above, the following staff members made significant contributions to this report: Meeta Engle (Assistant Director), Jennifer McDonald (Analyst-in-Charge), Jeffrey G. Miller, and Jill Yost. Additional assistance was provided by Susan Aschoff, James Bennett, Mark Bird, Deborah Bland, Jessica Botsford, Brian Egger, Lawrence Evans, Brenda Farrell, Cody Goebel, Cynthia Grant, Kirsten Lauber, Sheila McCoy, John Mingus, Barbara Steel-Lowney, Michelle St. Pierre, Nicholas L. Weeks, Adam Wendel, and Rebecca Woiwode.

Federal Student Loans: Education Could Improve Direct Loan Program Customer Service and Oversight. GAO-16-523. Washington, D.C.: May 16, 2016. Financial Regulation: Complex and Fragmented Structure Could Be Streamlined to Improve Effectiveness. GAO-16-175. Washington, D.C.: February 25, 2016. Federal Student Loans: Key Weaknesses Limit Education's Management of Contractors. GAO-16-196T. Washington, D.C.: November 18, 2015. Federal Student Loans: Education Could Do More to Help Ensure Borrowers Are Aware of Repayment and Forgiveness Options. GAO-15-663. Washington, D.C.: August 25, 2015. Higher Education: Better Management of Federal Grant and Loan Forgiveness Programs for Teachers Needed to Improve Participant Outcomes. GAO-15-314. Washington, D.C.: February 24, 2015. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
Federal Student Loans: Better Oversight Could Improve Defaulted Loan Rehabilitation. GAO-14-256. Washington, D.C.: March 6, 2014. Servicemembers Civil Relief Act: Information on Mortgage Protections and Related Education Efforts. GAO-14-221. Washington, D.C.: January 28, 2014. Mortgage Foreclosures: Regulatory Oversight of Compliance with Servicemembers Civil Relief Act Has Been Limited. GAO-12-700. Washington, D.C.: July 17, 2012. Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System. GAO-09-216. Washington, D.C.: January 8, 2009.

The Servicemembers Civil Relief Act (SCRA) helps servicemembers financially by capping interest rates on student loans during active duty. As of May 2016, about 1.3 million servicemembers were on active duty. The number of active duty servicemembers with student loans is unknown, as is the number eligible for the rate cap who may not have received it. GAO was asked to review implementation of the rate cap for servicemembers' student loans. This report examines: (1) the number of servicemembers who received the cap for student loans, (2) challenges that servicemembers face in doing so, and (3) the extent to which federal agencies oversee implementation of the cap. GAO analyzed data from 2008 through 2015 from the 10 federal student loan servicers; reviewed relevant federal laws, regulations, policies, and training materials; and interviewed representatives of the servicers and servicemember advocacy groups, and officials from DOD, Education, the CFPB, and DOJ. SCRA provides servicemembers with an interest rate cap of 6 percent on student loans while they are on active duty.
The number of servicemembers with federal student loans who received this rate cap increased as a result of the Department of Education (Education) requiring federal loan servicers to regularly use the Department of Defense's (DOD) SCRA website to identify eligible servicemembers and automatically apply the rate cap without requiring servicemembers to provide written notice of active duty (see figure). Using the automated process, some federal loan servicers identified borrowers who had been eligible for the rate cap as far back as 2008, when the SCRA rate cap first applied to federal loans, and retroactively applied the cap. Servicemembers can face challenges obtaining the SCRA rate cap when they do not receive accurate information. Federal internal control standards state that agencies should externally communicate the information necessary to achieve their objectives. However, some servicemembers eligible for the cap may not receive it because information used by DOD to inform them about the cap is inaccurate: for example, some DOD information states that the rate cap does not apply to student loans. In addition, because the automated process to identify eligible servicemembers is not required for private student loans, servicemembers with private loans may be particularly at risk of not receiving the accurate information needed to obtain the cap themselves. While Education monitors the application of the SCRA cap for federally owned or guaranteed student loans, there is a gap in oversight for private student loans. The Consumer Financial Protection Bureau (CFPB), four federal financial regulators of banks, and the Department of Justice (DOJ) each oversee aspects of private student loans or SCRA, but none has the authority to routinely oversee SCRA compliance at nonbank entities that handle private student loans. These nonbank entities include institutions of higher education and private companies.
The resulting gap in oversight of SCRA compliance for these nonbank entities that make or service private student loans increases the risk that servicemembers will not receive a benefit for which they are eligible. GAO is making four recommendations, including that DOD improve the accuracy of SCRA information on student loans, and that CFPB and DOJ collaborate to ensure routine oversight of nonbank lenders and servicers, and seek additional authority, if needed. DOD disagreed and said it already provides accurate information. DOJ agreed and CFPB did not specifically agree, but said that all eligible servicemembers should receive the cap. GAO maintains that DOD's outreach materials are not always accurate and that routine oversight is necessary for nonbank lenders and servicers.
The federal government has provided health insurance benefits to its employees through FEHBP since 1960. The Congress established FEHBP primarily to help the government compete with private-sector employers in attracting and retaining talented and qualified workers. All active and retired federal workers and their dependents are eligible to enroll in FEHBP plans, and about 86 percent of eligible workers and retirees participate in the program. As of July 2002, FEHBP provided health insurance coverage to about 8.3 million individuals, including 2.2 million active workers, 1.9 million retirees, and an estimated 4.2 million of their dependents. The government pays a portion of each enrollee's health insurance benefit premium cost. Currently, as set by statute, the government pays 72 percent of the weighted average premium of all health benefit plans participating in FEHBP, but no more than 75 percent of any plan's premium. The premiums are intended to cover enrollees' health care costs, plans' expenses, reserves, and OPM's administrative costs. Total FEHBP health insurance premiums paid by the government and enrollees were about $22 billion in 2001. The legislative history of the FEHBP statute indicates that the Congress wanted enrollees to exercise choice among various plan types and, by using their own judgment, select health plans that best meet their specific needs. The FEHBP statute authorizes OPM to contract with FFS plans, which include the Blue Cross and Blue Shield (BCBS) service benefit plan and plans sponsored by federal employee and postal organizations (such as those for the Foreign Service and rural letter carriers), and with comprehensive medical plans (commonly known as HMOs), thereby providing choice to enrollees.
Some plans offer two levels of benefits, which provide enrollees with more options, and some plans also offer a point-of-service (POS) option that provides an enrollee a choice of using the plan’s health care providers or, by paying a higher fee, selecting providers outside of the plan’s provider network. By statute, OPM is responsible for negotiating contracts with the FFS plans and HMOs each year. Under this authority, OPM can negotiate these contracts without regard to competitive bidding requirements. Those plans meeting the minimum requirements specified in the statute and regulations may participate in the program and their contracts may be automatically renewed each year. However, plans can choose to terminate their contracts with OPM at the end of the contract period, and under certain circumstances OPM has the authority to terminate contracts. As part of its contracting responsibility, OPM negotiates benefits and premiums with each plan. In April of each year, OPM sends a letter to all approved and participating FFS plans and HMOs—its annual “call letter”— to solicit proposed benefit and premium changes for the next year, which are due by the end of May. The statute does not define a specific benefit package that must be offered but indicates the core health care services that plans must cover. Each plan therefore proposes its own benefit package in response to the call letter. In addition, the plans propose the premiums for these benefits, which must be provided for two levels of coverage—self-only and self and family. As a result, each plan’s benefit package and premiums can differ. OPM attempts to complete its negotiations by August so that brochures describing the plans’ benefits and premiums can be ready for the FEHBP open season that begins in November and lasts about a month. FEHBP’s brochures, which OPM approves each year, facilitate enrollee plan comparisons and selections. 
During each open season, federal workers and retirees are free to switch to other plans for the next calendar year, regardless of any preexisting health conditions. Thus, enrollees can determine which plans best meet their needs. OPM data show that in 2000 and 2001 less than 5 percent of enrollees switched plans. Thirteen FFS plans participated in FEHBP in 2002. Overall, about 70 percent of federal employees and retirees who participate in FEHBP were enrolled in FFS plans. Enrollees in these plans can choose their own physicians and hospitals, and the plan reimburses the provider or the enrollee for the cost of each covered service up to a stated limit. In addition, 11 of the 13 FFS plans had preferred provider organization (PPO) networks, and by using providers in these networks, enrollees pay less in cost sharing than they would using non-PPO providers. The FEHBP statute establishes the rate-setting process for FFS plan premiums. FFS plans are experience rated—that is, the premiums are to be updated each year based on past claims experience and benefit adjustments. As a result, premiums are designed to cover the cost of all claims filed for enrollees as well as plan profit and administrative costs and, therefore, will differ for each FFS plan. In 2002, all active federal workers and retirees could enroll in the BCBS service benefit plan and in six of the FFS employee organization plans. (See table 1.) The remaining six FFS organization plans were available only to members of the sponsoring organizations. In 2002, 170 HMOs, located in local markets throughout the country, participated in FEHBP and accounted for about 30 percent of FEHBP enrollees. HMO enrollees must generally use a plan's provider network to obtain services. OPM has established in regulations the rate-setting process for HMOs participating in FEHBP.
For most HMOs, OPM bases the FEHBP premium rate on the rates paid to the HMO by the two other employer-sponsored groups with the most similarly sized enrollments in that community. This ensures that FEHBP obtains a rate that is at least comparable to the lower of the rates paid by two other similarly sized groups, with adjustments to account for differences in the demographic characteristics of FEHBP enrollees and the benefits provided. The number of HMOs available to federal workers and retirees depends on the area where they live or work. In 2002, 11 states had no HMOs participating in FEHBP and, in the other states and the District of Columbia, the median number of HMOs available to federal enrollees was two. Some local markets had higher HMO participation. For example, the Washington, D.C., area and southern California had at least four HMOs in which federal workers and retirees could enroll in 2002. A few plans accounted for the largest share of FEHBP enrollment. The largest plan—the BCBS service benefit plan—had about half of the 2002 enrollment. The three largest plans, including BCBS, were all FFS plans and accounted for almost two-thirds of FEHBP enrollment. About two-thirds of the 183 participating FFS plans and HMOs enrolled fewer than 5,000 active federal workers and retirees, and slightly less than a third of all plans enrolled fewer than 1,000 in 2002. The three other large purchasers we reviewed varied in the extent to which they provide coverage through HMOs, FFS plans, and PPOs as well as in the number of plans they offer. GM, the largest private-sector purchaser of employer-sponsored health insurance, purchased coverage for about 1.2 million workers, retirees, and their dependents through 81 FFS plans, 31 PPOs, and 136 HMOs in 2002. About 71 percent of the unionized employees and retirees and about 63 percent of the salaried employees and retirees were enrolled in FFS plans and PPOs.
CalPERS purchased coverage in 2002 for about 1.2 million active and retired state and local government public employees and their family members who obtained coverage through nearly 1,100 local government agencies, including schools, and the state of California. About 74 percent of CalPERS enrollees were in 7 HMOs, with the remainder in 2 PPOs and 3 plans covering members of such associations as the association of highway patrolmen in 2002. PBGH, a California employer coalition, purchased HMO coverage through its Negotiating Alliance for 19 large employers. About 350,000 workers, retirees, and dependents were in PBGH's 7 HMOs in 2002. This represented about 70 percent of participants in these employers' plans. Participating employers made their own arrangements for non-HMO coverage, primarily through PPOs, for the remaining employees. From 1991 through 2002, health insurance premiums for FEHBP increased on average 5.9 percent a year compared to 6.4 percent for large employers—those in the Kaiser/HRET survey with 5,000 or more employees—and 5.8 percent for CalPERS. (See fig. 1.) FEHBP average premium increases have exceeded 10 percent beginning in 2001, but higher premium increases were partially offset by some plans reducing benefits—mostly increased enrollee cost sharing—and some enrollees switching to plans with lower premiums. Generally, FEHBP premiums increased at a lower rate than premiums for other large employers and CalPERS during the first half of the last decade, but increased faster during the second half. For example, cumulatively from 1991 to 1996, premiums increased on average about twice as fast for large employers (6.1 percent per year) as for FEHBP (3.2 percent per year). Premiums for CalPERS also increased faster (5.1 percent per year) on average during this period than for FEHBP. During the mid-1990s, the rate of change in premiums was negative for both FEHBP and CalPERS and as a result average premiums declined temporarily.
FEHBP premiums declined on average by about 4 percent in 1995, while CalPERS premiums declined on average from 0.8 to 4 percent per year from 1995 to 1997. Cumulatively from 1997 to 2002, FEHBP average premiums grew about 2 percentage points per year faster than those of CalPERS and large employers—8.6 percent per year compared to 6.5 and 6.7 percent per year, respectively. Much of the difference in premium increases between FEHBP and other major purchasers during this period occurred in 1998 and 1999. OPM attributes much of FEHBP's premium growth in these years to changes made to the reserve balances maintained by FEHBP plans. FEHBP's average premium increase of 13.3 percent in 2002 was similar to increases for other large purchasers, but about 4 percentage points higher than the CalPERS increase. OPM announced in September 2002 that average premiums would increase by 11.1 percent in 2003 for all FEHBP plans. Premiums for FEHBP's FFS plans were expected to increase on average by 10.5 percent, while HMO premiums were expected to rise an average of 13.6 percent. This represents the third straight year of double-digit premium increases for FEHBP, but this increase was less than FEHBP's average increase in 2002, and less than those many other employers anticipate. While 2003 premiums for many large employers were still being negotiated at the time of our work, two employee benefit consulting firms reported preliminary findings from surveys of employee health benefits managers that anticipated overall premium increases of 13 to 15 percent, and average HMO premium increases of 16 percent, for 2003. CalPERS in particular is facing a significant premium increase in 2003. Premiums for CalPERS' HMOs—which enroll the bulk of its participants—were expected to increase an average of 26 percent in 2003. Premiums for CalPERS' two PPOs were expected to increase about 19 and 22 percent.
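The cumulative per-year figures quoted above are compound (geometric) averages over multi-year periods. Given only start- and end-of-period premiums, such a rate can be computed as follows; the numbers in the example are hypothetical, not the survey values.

```python
def avg_annual_growth(start_premium, end_premium, years):
    # Compound (geometric) average annual growth rate over the period:
    # the constant yearly rate that takes start_premium to end_premium.
    return (end_premium / start_premium) ** (1 / years) - 1

# Hypothetical: a premium that doubles over 10 years grew about
# 7.2 percent per year on a compound basis.
rate = avg_annual_growth(100.0, 200.0, 10)
```

Because the rate compounds, a period of double-digit increases (as in 2001-2003) can coexist with a lower average for the decade as a whole.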
FEHBP’s premium increases in recent years would have been higher but for increased cost-sharing requirements for employees and retirees as well as shifts in enrollment to plans with lower premiums. Over the last 6 years, FEHBP plans have been required to cover certain new benefits, but plans have also had some offsetting benefit reductions—mostly increased enrollee cost sharing—thereby resulting in a net benefit reduction. Like many FEHBP and other large employers’ health plans, from 2000 through 2002, three large FFS plans increased or introduced cost-sharing features such as copayments or coinsurance for prescription drugs and physicians as well as deductibles for other services, as the following examples illustrate. BCBS raised its standard option employee copayment for PPO home and physician visits from $12 to $15, and raised its annual deductible from $200 to $250 per individual and from $400 to $500 for families. BCBS also introduced cost sharing for mail-order prescription drugs for Medicare beneficiaries, which the plan had previously waived. The Government Employees Hospital Association, Inc. (GEHA) raised the copayment for a physician office visit from $10 to $15, and raised employee coinsurance for non-PPO providers from 20 percent to 25 percent. In addition, GEHA raised its annual deductible from $250 to $300 per individual and from $500 to $600 for families, and increased the maximum annual out-of-pocket limit from $4,500 to $5,500. Mail Handlers raised the standard option deductible from $200 to $250 per individual, and from $600 to $750 for families. Enrollees who have shifted to plans with lower premiums have also reduced FEHBP’s average premium increases. Specifically, OPM’s actuarial estimates indicate that FEHBP enrollees who switch to plans offering lower premiums have reduced average premium increases about 1 percent per year since 1997. 
For 2003, OPM anticipated that this phenomenon would offset the overall premium increase by about 1.2 percent from what it otherwise would have been. Our analysis shows that, from 1999 to 2002, more than two-thirds of plans with premium increases lower than the median FEHBP premium increase gained enrollment. FEHBP premium increases are related to prior years' increased claims expenditures, which for the three largest FEHBP plans from 1998 to 2000 were in large part driven by increasing expenditures for prescription drugs and hospital outpatient care. Increasing plan payments per drug dispensed accounted for most of the increase in expenditures for drugs, while increasing utilization accounted for the increase in hospital outpatient care expenditures. Our analysis of 1998 to 2000 claims data for FEHBP's three largest plans—all FFS plans—indicates that per-enrollee claims expenditures increased by about 12.6 percent, including increases of about 8.6 percent from 1998 to 1999, and about 3.7 percent from 1999 to 2000. We specifically examined claims expenditures for these three plans because HMOs typically do not track or report claims data to OPM and the three plans we reviewed represented about 90 percent of FFS enrollees and about two-thirds of total FEHBP enrollees. Claims expenditures for prescription drugs and hospital outpatient care accounted for more than 70 percent of the overall increase in per-enrollee claims expenditures for these plans from 1998 through 2000, while hospital inpatient care and physician visits accounted for most of the remainder. Increases in claims for prescription drugs accounted for the largest share (47 percent) of the overall increase in claims expenditures from 1998 to 2000 and increased at the fastest rate during this period—by nearly one-fourth. (See table 2.) The increase in per-enrollee claims expenditures for each of these services represents changes in plan payments per service and utilization for these categories.
Specifically, figure 2 shows that increasing plan payments per service played the larger role in changing claims expenditures for prescription drugs, hospital inpatient care, and physician visits: 66 percent of the $235 increase in expenditures for prescription drugs, 76 percent of the $57 increase for hospital inpatient care, and 93 percent of the $45 increase for physician visits. Utilization increases accounted for all of the increase in expenditures for hospital outpatient care and the remainder of the increases for prescription drugs, hospital inpatient care, and physician visits. Aging FEHBP enrollees and the changing health care market may have contributed to increasing plan payments and utilization. Increased utilization was in part associated with FEHBP's aging enrollee population. OPM actuaries estimate that a 1-year increase in the average age of the FEHBP population translates into almost a 3.3 percent increase in total health costs. From 1998 through 2000, the average age of FEHBP enrollees increased by about half a year, from 61.6 years to 62.1 years. Recently, higher payments have also resulted from providers' negotiations with managed care plans. In the early and mid-1990s, managed care plans were able to extract significant discounts from providers that they included in their networks. However, in recent years studies have indicated that providers have secured higher payments in part due to consolidations—particularly among hospitals in some major metropolitan areas—that may have increased their market power. In addition, there is some evidence in these studies that physicians are demanding and receiving higher fees.
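The split between payment-per-service and utilization discussed above is a standard decomposition of an expenditure change. The sketch below uses one common convention (valuing the price change at end-of-period utilization); the report does not specify the exact method behind figure 2, and the numbers here are hypothetical.

```python
def decompose_expenditure_change(price0, qty0, price1, qty1):
    """Split the change in expenditure (price * quantity) into a
    payment-per-service effect and a utilization effect.

    Convention: the price effect is valued at period-1 utilization and
    the utilization effect at period-0 price, so the two sum exactly
    to the total change. Other index conventions allocate the cross
    term differently.
    """
    total = price1 * qty1 - price0 * qty0
    price_effect = (price1 - price0) * qty1
    utilization_effect = (qty1 - qty0) * price0
    return total, price_effect, utilization_effect

# Hypothetical: payment per service rises from $10 to $12 and services per
# enrollee from 5 to 6; the $22 increase splits into $12 price, $10 utilization.
total, price_fx, util_fx = decompose_expenditure_change(10, 5, 12, 6)
```

In this illustration the price effect accounts for about 55 percent of the increase, the same kind of share the report attributes to plan payments per service for each service category.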
To maximize enrollees’ choices among plans, OPM contracts with all plans meeting minimum standards and allows plans to propose varying benefit designs. In its annual negotiations with the plans, OPM suggests various cost containment strategies for plans to consider as they prepare their benefit and premium proposals, and for 2003 placed more emphasis on encouraging the plans to propose approaches to control cost increases. Other major purchasers, such as CalPERS, PBGH, and GM, adopt different approaches in developing their health benefit offerings such as negotiating based on a standardized benefit package and contracting only with plans with which they reach a satisfactory agreement. As large purchasers face escalating premiums, they continue to look for new ways to help control costs, including offering plans that make enrollees more sensitive to the costs of health care by giving them more control over their health care spending, charging enrollees more when they go to higher cost hospitals, or focusing more attention on managing chronic health care conditions. OPM contracts with all plans meeting certain standards and requirements. As long as plans continue to meet the minimum standards, OPM does not exclude them from the program. Although the statute gives OPM the authority to remove plans from FEHBP under certain circumstances, OPM officials said that they have not recently exercised this authority primarily because they wanted to maximize enrollee choice and minimize enrollee disruption, especially in less populated areas of the country. While FFS plans and HMOs do not have to compete against one another to participate in FEHBP, they do have to compete with other plans to attract enrollees. One way plans compete is by the benefits they offer. 
Since the FEHBP statute does not define a specific benefit package, but rather requires plans to offer a core set of benefits, plans propose the benefits they will offer to remain competitive within their own market areas, whether national or local. Each year, OPM negotiates each plan’s benefits package, ensuring that the costs for any new benefits proposed by the plan are offset by reductions in other benefits. Plans also compete for enrollees based on their premiums. By statute, premiums must “reasonably and equitably” reflect the cost of the benefits provided by the different plan types participating in FEHBP. Premiums for FFS plans are experience rated. Over time, their premiums approximately equal average service expenditures, administrative costs, and profits. If OPM and the plans set premiums too high or too low in one year, OPM makes appropriate adjustments to premiums and reserve balances in subsequent years. To set FEHBP premium rates for the HMOs, OPM relies on the negotiations that these plans conduct with two similarly sized purchasers in each market, requiring FEHBP to receive the lower of the two rates. OPM’s Office of the Inspector General conducts periodic audits to assure the validity of these rates. The government’s method for setting premium contributions provides plans an incentive to price their products competitively since enrollees pay less for lower cost plans and pay the entire cost exceeding the maximum government share. For example, for a plan with a self-only premium of $3,200 per year, the enrollee would pay $800 and the government would pay the other 75 percent ($2,400). For a plan costing $3,400, the enrollee would pay $856 while the government would pay the maximum $2,544. For any plan costing more, the enrollee would have to pay the entire additional cost—a plan costing $3,600, for example, would require a $1,056 annual premium from the enrollee while the government share would remain at $2,544. 
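The contribution rule behind these examples (the government pays 72 percent of the weighted average premium program-wide, but no more than 75 percent of any one plan's premium) can be written directly. The $2,544 maximum government share below is the figure implied by the report's example.

```python
MAX_GOV_SHARE = 2544  # maximum annual government contribution in the example

def enrollee_cost(plan_premium, max_gov_share=MAX_GOV_SHARE):
    # The government pays 75 percent of the plan's premium, capped at the
    # program-wide maximum share; the enrollee pays the remainder.
    gov = min(0.75 * plan_premium, max_gov_share)
    return plan_premium - gov

# Reproduces the examples above: a $3,200 plan costs the enrollee $800,
# a $3,400 plan costs $856, and a $3,600 plan costs $1,056.
```

Because the government share is frozen once the cap binds, every dollar of premium above $3,392 falls entirely on the enrollee, which is the pricing incentive the report describes.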
Few plans have premiums much higher than the amount where the enrollee would receive the maximum government share: Only 19 of the 183 plans in 2002 had premiums more than 10 percent above $3,392 (the premium equivalent to the maximum government share of $2,544), while 97 had premiums at least 10 percent below this amount. Each year, OPM’s “call letter” provides its negotiation objectives and calls for the plans’ new benefit and premium proposals. OPM uses its annual letter to give guidance regarding the goals to be achieved and the types of cost containment efforts plans may want to consider to help contain premium increases. OPM encourages plans to consider implementing cost containment strategies each year as they draft their FEHBP benefit and premium proposals. During negotiations over benefits and premiums, OPM tends to focus its cost containment efforts on plans that submit proposals with the highest premium increases or those that are outliers in some other way. To some degree, OPM relies on the competitive nature of the program to achieve results in that each plan must weigh the potential effect of its benefit offerings and premiums on its market share. Changes in benefits, and any resulting premium changes, can affect a plan’s enrollment, but there is a trade-off since increased benefits may be attractive to potential enrollees while the associated increased premium may deter enrollment. OPM has encouraged plans to consider several strategies to help moderate premium increases. For example, for contract year 1998, OPM encouraged FFS plans to expand and strengthen their existing PPO arrangements by obtaining discounts when cost effective. For that year, it also encouraged all plans to consider proposing a point-of-service (POS) product. OPM’s call letter stated that POS products were an effective way to introduce enrollees to the concept of managed health care. 
For contract years 2001 and 2002, OPM’s call letters encouraged plans to control rising prescription drug costs, including through the use of drug formularies and three-tier drug benefits—that is, lower cost sharing for generic and brand-name drugs on a plan’s formulary than for drugs not included on the formulary. Even more than in past years, OPM’s latest call letter, for contract year 2003, challenged plans to identify ways to reduce premium increases. OPM asked plans to propose innovative ideas to help contain these increases. For 2003, OPM also encouraged plans to consider several specific cost containment strategies, including increasing enrollees’ out-of-pocket costs, reemphasizing the need to manage prescription drug costs, and putting more emphasis on care management for enrollees who have chronic conditions. In addition, the call letter told plans to expect very tough negotiations, a specific direction OPM had not included in past letters. On September 17, 2002, OPM announced that FEHBP premiums would increase by an average of about 11.1 percent for 2003, about 2 percentage points less than in 2002. In addition, OPM officials indicated that, while some individual plans increased or decreased benefits, overall benefit levels would be largely similar to those available in 2002. OPM officials reported that the initial proposals submitted by the plans would have resulted in a 13.4 percent increase for 2003. Following negotiations with OPM on benefits and premiums, the average increase was reduced to 12.4 percent. OPM officials anticipated that the remaining savings from the initial proposals would result from FEHBP enrollees switching to lower cost plans during the open enrollment season. Whereas OPM contracts with all plans meeting minimum standards and negotiates benefit packages that can vary with each plan, other large purchasers we reviewed follow a different approach. CalPERS, GM, and PBGH conduct negotiations based on a standardized benefit package.
At the end of the negotiations, these purchasers can decide not to contract with a plan that does not meet their standards in such areas as cost or quality. Some of these purchasers also reward enrollees by paying more of the premium when enrollees choose plans the purchasers consider to be the best value. Continuing premium increases have caused these and many other large purchasers to search for ways to reduce their premium costs. While many purchasers first look to shift more of the costs to their employees by taking such actions as increasing plan deductibles, some are also exploring new strategies to help contain these increases. The three large purchasers we reviewed rely on a standardized benefit package when conducting negotiations, particularly in negotiations with HMOs. CalPERS standardized benefits and copayments across its HMOs in 1993 to better assess differences in plans’ costs, and GM also negotiates with HMOs using a standardized benefit package. PBGH, in conjunction with other national purchasers, developed an annual request for proposals that it uses for its standardized HMO benefit package. Along with using standardized benefit packages, some large purchasers exclude plans if they cannot negotiate a satisfactory agreement with them. During its negotiations for benefit year 2002, for example, CalPERS rejected bids from all participating HMOs as too high and then allowed them to resubmit revised bids. CalPERS rejected the bids because the proposed increases were twice as high as those of the past 5 years and considerably higher than what CalPERS had expected. CalPERS ultimately dropped 3 of its 10 HMOs at the end of its negotiations that year. For benefit year 2003, CalPERS dropped 2 of the remaining 7 HMOs at the end of its negotiations to help control premium increases and to provide the best value for those premiums. GM reviews and scores HMOs on the basis of quality and cost.
Plans scoring relatively low will either be dropped or be given a year to improve. Like FEHBP, some other large purchasers vary the premiums some employees pay to encourage enrollment in certain plans. For example, as part of its value purchasing strategy, which the company started in 1997, GM evaluates HMOs for quality and value and encourages salaried employees to enroll in those plans it rates as higher value plans. For salaried employees, GM covers a larger share of the premiums for HMOs designated as higher value. GM estimates that it saves about $4.6 million annually by having its salaried employees move into HMOs designated as higher value and that these employees save about $2 million in premiums. Also, PBGH states that it focuses its purchasing efforts on plans it has identified as high quality, and some employers participating in the group support PBGH’s effort by setting their premium contributions to encourage employee enrollment in plans considered to be high value. Over the next several years, analysts predict that double-digit health insurance premium increases will continue. As a result, many large purchasers are searching for ways to slow this growth. Shifting more of the costs to employees is one of the first cost containment strategies employers consider as premium rates escalate. In particular, many of the largest employers have increased deductibles for PPO plans. For example, employer survey data show that the average annual deductible for self-only in-network PPO coverage increased from $175 in 1999 to $310 in 2002, while out-of-network deductibles increased from $272 in 1999 to $529 in 2002. Similarly, very large employers are increasingly using multiple-tier cost sharing for prescription drugs as a cost containment strategy. According to another employer survey, 22 percent of PPOs had a three-tier drug copayment in 2000, but that share increased to 40 percent in 2001.
Some large purchasers, including OPM and those we reviewed, are beginning to explore new strategies to help reduce escalating costs. For example, some are in the early stages of considering “consumer-driven” plans that provide employees with more financial incentives to be sensitive to health care costs and more control over their health care spending decisions. As this concept covers a wide range of possible approaches, there is no single definition. However, all approaches tend to shift more decision-making responsibility regarding health care from employers to employees. For example, they could provide employees with a personal spending account, which the employer would fund at different levels. One plan funds these accounts at $1,000 for an individual or at $2,000 for a family. Employees could use this money to pay medical expenses. If employees spend all the money in their accounts, they would have to spend their own money until a deductible amount—which for one plan was $600 for an individual employee and $1,200 for a family—is met. Then, coverage through an insurance policy purchased by the employer would begin. In some approaches, employees who do not spend all the money in their accounts could carry the money over from year to year. To date, as these plans are so new, few people are enrolled—several studies have estimated that fewer than 1 percent of enrollees with employer-sponsored health insurance are in some form of consumer-driven health plans. Other new strategies that some purchasers are considering include plans that contain provisions to help reduce hospital costs and costs for enrollees with chronic conditions. For example, CalPERS and PBGH are exploring the use of financial incentives for enrollees when choosing from which hospital to receive care. Such plans are now becoming available but represent a very small share of the market.
These plans offer tiered copayments that are lower for enrollees who use hospitals offering the best rates and higher for those who use more expensive hospitals. Another approach attracting attention among many large employers is disease management, which focuses on chronic illnesses, such as asthma, diabetes, and heart disease, that account for a large share of health care expenditures. For example, CalPERS, PBGH, and GM are all actively pursuing disease management programs. Also, in its call letter for contract year 2003, OPM encouraged FEHBP plans to consider using disease management programs. However, according to one employer survey, many purchasers said that disease management programs are too new and that data are not yet available to assess the benefits compared to the costs. We provided a draft of this report to OPM, CalPERS, GM, and PBGH for their review. OPM generally concurred with our study findings, highlighting its negotiating strategy as contributing to average FEHBP premiums for 2003 being below national trends. OPM also indicated that in the coming year it will strengthen its efforts by adding enhanced consumer education to provide enrollees with additional information for making informed choices. CalPERS and GM also concurred with our findings. PBGH, along with OPM and CalPERS, provided technical comments, which we incorporated as appropriate. (App. II contains the full text of OPM’s comments.) As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies to the Director of OPM, other interested parties, and appropriate congressional committees. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-7118 or John Dicken at (202) 512-7043 if you have any additional questions. N.
Rotimi Adebonojo and Joseph Petko were major contributors to this report. To compare premium trends for the Federal Employees Health Benefits Program (FEHBP) and other large purchasers over the last decade, we obtained data from the Office of Personnel Management (OPM), the California Public Employees’ Retirement System (CalPERS), and surveys of private employer-sponsored health benefits conducted by the Kaiser Family Foundation and the Health Research and Educational Trust (Kaiser/HRET). To identify factors driving FEHBP’s recent premium growth, we analyzed several OPM data sources, including summary reports it received from the three largest nationwide plans on enrollees’ health care service utilization and related plan payments for 1998 through 2000. These three plans are all fee-for-service (FFS) plans and accounted for 90 percent of FEHBP enrollment in FFS plans and almost two-thirds of the total FEHBP enrollment. We analyzed expenditure and utilization data for services, including hospital inpatient care, hospital outpatient care, physician visits, prescription drugs, laboratory services, surgery, and mental health and substance abuse for 1998 through 2000 for the three largest plans. These summary data are submitted to OPM by each FFS experience-rated plan, reporting utilization and expenditures incurred by the plan in a calendar year and paid in that calendar year and through the first 9 months of the next calendar year. Because each plan reports its data to OPM slightly differently, we aggregated expenditures and utilization for multiple categories of services, including hospital inpatient, hospital outpatient, prescription drugs, and physician visits—and all other services. We adjusted each plan’s expenditures by enrollment as reported by the plans to OPM to calculate per-enrollee expenditure and utilization, and calculated a payment per unit for each category of service. 
We weighted the expenditure and utilization for the three plans by their respective enrollments for each year from 1998 to 2000. We calculated the increase in per-enrollee claims expenditure attributable to increased plan payments from 1998 through 2000 using the change in plan payments over the 3 years and assuming utilization remained steady at the 1998 level. Similarly, we calculated the increase in per-enrollee claims attributable to increased utilization using the change in utilization from 1998 to 2000 and assuming plan payments were constant at the 2000 level. In addition, using OPM’s data for all FEHBP plans, we compared each plan’s premium and enrollment changes from 1999 through 2002. We could only do this analysis for those plans that participated in FEHBP in each of the comparison years—for example, in both 2001 and 2002. We identified how many plans with premium changes less than and greater than the median premium change gained and lost enrollment. These counts do not include plans that dropped out of FEHBP because we do not know what premium and enrollment changes these plans would have experienced in the following year. We also reviewed the literature and interviewed OPM officials and actuaries at the Hay Group, Hewitt Associates LLC, and William M. Mercer, Inc. To examine the steps OPM takes to control FEHBP costs, we interviewed officials in OPM’s Office of Insurance Programs and Office of the Actuary. To obtain the plans’ perspectives, we interviewed officials at the Blue Cross Blue Shield Association and at Kaiser Permanente—two large plans participating in FEHBP. We also interviewed representatives from two federal employee unions—the American Federation of Government Employees and the National Treasury Employees Union.
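The price/utilization decomposition described above can be expressed as a short function. This is a sketch under the stated assumptions (per-enrollee expenditure equals payment per unit times units per enrollee); the numbers in the usage example are hypothetical, not GAO data.

```python
def decompose_expenditure_growth(price_1998, util_1998, price_2000, util_2000):
    """Split growth in per-enrollee expenditure (price * utilization) into
    a price component (holding utilization at its 1998 level) and a
    utilization component (valued at 2000 prices), as in the method above."""
    total_change = price_2000 * util_2000 - price_1998 * util_1998
    price_effect = (price_2000 - price_1998) * util_1998
    utilization_effect = (util_2000 - util_1998) * price_2000
    # By construction, price_effect + utilization_effect equals total_change.
    return total_change, price_effect, utilization_effect

# Hypothetical per-enrollee figures for one service category:
total, price_fx, util_fx = decompose_expenditure_growth(
    price_1998=50.0, util_1998=8.0,   # $50 per unit, 8 units per enrollee
    price_2000=60.0, util_2000=9.0)   # $60 per unit, 9 units per enrollee
print(total, price_fx, util_fx)       # 140.0 = 80.0 + 60.0
```

Valuing the price change at 1998 utilization and the utilization change at 2000 prices makes the two effects sum exactly to the total change, so no residual interaction term is left unattributed.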
To examine how other large purchasers negotiate health benefits and attempt to control costs, we reviewed the literature and employee benefit surveys; interviewed employee benefit consultants; and interviewed officials of three large purchasers of employer-sponsored health insurance, including CalPERS—the largest public purchaser of health insurance after the federal government, Pacific Business Group on Health (PBGH)—a California-based purchaser representing 19 large employers, and General Motors (GM)—the largest private purchaser of employer- sponsored health benefits. See table 3 for selected characteristics of FEHBP and the other large group purchasers. | Federal employees' health insurance premiums have increased at double-digit rates for 3 consecutive years. GAO was asked to examine how the Federal Employees Health Benefits Program's (FEHBP) premium trends compared to those of other large purchasers of employer-sponsored health insurance, factors contributing to FEHBP's premium growth, and steps the Office of Personnel Management (OPM) takes to help contain premium increases compared to those of other large purchasers. GAO compared FEHBP to the California Public Employees' Retirement System (CalPERS), General Motors, and a large private-employer purchasing coalition in California as well as data from employee benefit surveys. FEHBP's premium trends from 1991 to 2002 were generally in line with other large purchasers--increasing on average about 6 percent annually. OPM announced that average FEHBP premiums would increase about 11 percent in 2003, 2 percentage points less than in 2002 and less than some other large purchasers are expecting. FEHBP enrollees would likely have paid even higher premiums in recent years if not for modest benefits reductions and enrollees who shifted to less expensive plans. Increasing premiums are related to the plans' higher claims expenditures. 
For FEHBP’s three largest plans, about 70 percent of increased claims expenditures from 1998 to 2000 were due to prescription drugs and hospital outpatient care. Most of the increase in drug expenditures was due to higher plan payments per drug, while the increase in hospital outpatient care expenditures was due to higher utilization. OPM relies on enrollee choice, competition among plans, and annual negotiations with participating plans to moderate premium increases. Whereas some large purchasers require plans to offer standardized benefit packages and reject bids from plans not offering satisfactory premiums, OPM contracts with all plans willing to meet minimum standards and allows plans to vary benefits, maximizing enrollees' choices. Each year, OPM suggests cost containment strategies for plans to consider and relies on participating plans to propose benefits and premiums that will be competitive with other participating plans. OPM generally concurred with our findings.
Although little is known about the emerging space support vehicle industry, the U.S. commercial space-launch industry generated $617 million in revenue in 2015 and has experienced significant growth in the past half-decade. FAA reported that its licensed launches have increased 60 percent and industry revenue has increased 471 percent since 2012. We have previously reported that the industry has experienced growth in the number and complexity of launches and growth in demand for space launches. Furthermore, the industry is developing new types of reusable launch vehicles, which could reduce launch costs. Components of the U.S. commercial space transportation industry include: Launch Companies: These companies launch satellites or other payloads into space. Their clients include governments, other companies, and individuals. Currently, the industry is launching non-human payloads using rockets and has yet to launch spaceflight participants. At the same time, some launch companies are developing hybrid launch systems, which contain elements of both aircraft and rocket-powered vehicles. Companies we interviewed plan to use these vehicles—which take off as an aircraft and then launch the spacecraft once reaching a certain altitude—to launch non-human payloads and to transport spaceflight participants to space. For example, WhiteKnightTwo plans to carry Virgin Galactic’s SpaceShipTwo aircraft to an altitude of 50,000 feet, where it will air launch SpaceShipTwo and the participants on board into space. According to representatives of launch companies we interviewed, while most of launch companies’ current activities are focused on launches, they may also use the aircraft component of a hybrid launch system for non-launch activities. For example, Virgin Galactic has also considered using WhiteKnightTwo to carry scientific payloads into conditions that would simulate spaceflight.
Spaceports: Spaceports are FAA-licensed launch or reentry sites used for commercial space launches and reentries that are developed by private companies and/or states. Spaceports can be co-located with federal sites or commercial or general aviation airports. Spaceports generally include launch pads and runways as well as other infrastructure, such as hangar space, and services, such as emergency services, to be used by commercial space companies. As of June 2016, there were 10 FAA-licensed nonfederal launch sites. Space Support Companies: This category includes companies whose business plans focus on training future spaceflight participants. This training simulates conditions encountered in space and can either be accomplished on the ground using pools and centrifuges or in the air using aircraft, hence these companies’ interest in the commercial space support vehicle industry. In addition to training, space support companies offer or plan to offer other services, such as carrying scientific payloads for microgravity experiments and repositioning cargo from one location to another through the air. FAA’s Office of Aviation Safety (AVS) oversees civil aviation activities in the United States. Thus, it regulates aircraft that may be used for space support activities. AVS is responsible for certifying the airworthiness of aircraft, pilots, mechanics, and others whose work affects the safety of those aircraft. When certifying aircraft, AVS inspectors review aircraft engines, propellers, parts, and equipment, including avionics, to provide a reasonable expectation of safety. In addition, AVS is responsible for certifying all operational and maintenance enterprises in domestic civil aviation. In fiscal year 2016, AVS had 7,246 full-time equivalent (FTE) employees and a budget of $1.26 billion.
The Office of Commercial Space Transportation (AST) is the office within FAA responsible for overseeing and coordinating the conduct of commercial launch and reentry operations and issuing and transferring licenses and permits authorizing such activities. Unlike AVS, AST has a dual mandate to (1) protect the public health and safety (people not participating in the launch, i.e., third parties), the safety of property, and the national security and foreign policy interests of the United States during commercial launch and reentry activities and (2) encourage, facilitate, and promote U.S. commercial space transportation. The Commercial Space Launch Amendments Act of 2004 instructed the Department of Transportation (DOT) to promote the continuous improvement of the safety of launch vehicles designed to carry humans. Prior to launch, spaceflight participants must provide written consent to participate. In fiscal year 2016, AST had 92 FTE employees and a budget of $17.8 million. Stakeholders identified spaceflight participant training as one potential use for space support vehicles. FAA regulations require only minimal training for spaceflight participation. Specifically, operators are required to train each spaceflight participant how to respond to emergency situations, such as smoke, fire, and the loss of cabin pressure. However, some companies are interested in providing training beyond the minimal requirements to potential spaceflight participants. Stakeholders we interviewed disagreed on the best way to train future spaceflight participants. Some industry stakeholders (13 of 37) told us that future spaceflight participants will need to receive training in high-performance aircraft.
Specifically, 4 of 9 space support companies, 3 of 13 launch companies, 2 of 5 spaceports, and 4 of 10 other stakeholders argued that it is necessary for customers to fully understand what they will experience and that the only way to replicate this is through training in high-performance jets. Stakeholders explained that factors that can be simulated in high-performance jets but not through other means include the stress of a confined environment and exposure to the physiological and psychological effects of spaceflight. Further, four stakeholders said this training is necessary to become acquainted with the g-forces involved with spaceflight and that space support vehicles are best able to provide this training. NASA officials also stated that familiarization with and experiencing high-g environments while performing time-critical communication is important preparation for spaceflight participants. See figure 2 for examples of space support vehicles, including high-performance jets (on left) and the modified Boeing 727 (on right). In addition, six stakeholders explained that due to the high cost of spaceflight, getting this experience would be important for someone who is considering space tourism. However, as discussed below, this training is not currently available in the United States. On the other hand, 13 stakeholders reported that training can be accomplished through currently allowed means, including standard certified aircraft and ground-based training. Five of 9 space support companies, 5 of 13 launch companies, and 3 of 10 other stakeholders reported that it is critical to expose future spaceflight participants to the conditions they will encounter in space, but that these conditions can be replicated through means other than high-performance jets. Two space support companies said they provide ground-based training through centrifuges, pools, and instruction in space-related topics such as the physiological and psychological effects of space travel.
One launch company reported to us that spaceflight participants will only need to know how they will react to microgravity but will not need to know how to accomplish tasks in microgravity. According to some stakeholders, this acclimation to microgravity can be accomplished through parabolic flight, centrifuges, and pools. One company is currently offering a microgravity experience in a Boeing 727 with an interior that has been modified to accommodate passengers for this activity, and another company has considered entering this market with a similar aircraft. Two of 13 launch companies reported they do not anticipate a high training burden for future customers and thus would probably not utilize aircraft training or centrifuges. The remaining 12 of the 37 stakeholders we interviewed did not comment on this issue. Further, stakeholders representing spaceports have proposed that spaceports are the proper place to host spaceflight participant training. Spaceport stakeholders we interviewed said that spaceports provide runways, launch pads, hangars, and other services, such as emergency response services, for commercial space-transportation companies. As mentioned above, representatives of two of the five spaceports we spoke to thought spaceflight training should be provided in high-performance jets. Two stakeholders we interviewed at spaceports also expressed interest in hosting companies that provide spaceflight participant training and see it as a potential source of new revenue, either now or when the tourism industry evolves and sends customers into space. One spaceport operator we interviewed sees spaceflight participant training as economically beneficial for the communities surrounding spaceports. In addition to spaceflight participant training, some stakeholders we spoke to identified microgravity research as a potential use for space support vehicles.
Microgravity research uses reduced gravity to understand how objects or people will react in reduced-gravity environments (such as orbit). Microgravity can be provided through parabolic flights. NASA officials told us microgravity flights are used to test equipment that will be sent into orbit, including, for example, exercise equipment and 3D printers. According to FAA, one company currently conducts microgravity research using a Boeing 727 with a standard airworthiness certificate. Other companies have proposed using retired military aircraft to fly scientific payloads for researchers. It is difficult to determine the size of the market for the use of space support vehicles for training because we found no publicly available studies on the size of the spaceflight participant training market, and companies we interviewed told us they have not conducted their own market analyses. However, companies within the industry provided a wide range of estimates of the size of a potential training market. Estimates of the training market reported by stakeholders often depend on the size of the overall space tourism market and the training burden launch companies anticipate for their customers. One industry study found that around 8,000 individuals will have the money and inclination to take a space tourism flight by 2022. However, it is not clear how many of these 8,000 individuals would choose to purchase training from training providers, and the number would likely depend on whether launch companies require training for passengers, how intensive the training will be, and whether they will contract for this training or offer it in house. Further, the studies of the tourism market that we identified are dated and may not reflect current industry conditions. According to stakeholders we interviewed, there are no studies available on the research market; however, they said that research is a growing segment of the space support market.
One stakeholder reported that the research market is the most robust commercial space market that currently exists. It is unclear how many aircraft operators are currently supplying aircraft services for research, but five stakeholders we interviewed expressed interest in using their aircraft to carry scientific payloads for researchers. Based on our interviews, the main customers for this service include universities, the government, and private sector organizations. Two of the companies that we interviewed obtained standard aircraft certification from FAA for aircraft that could be classified as space support vehicles. However, as representatives from one company explained, the certification process was lengthy and expensive. One of the companies received certification to operate parabolic flights using a retrofitted Boeing 727. These flights provide a weightlessness experience that could be used for spaceflight participant training (see fig. 3). A company representative told us that the certification process took 18 months and cost millions of dollars. The other company uses a certified aircraft to transport its launch rocket from one place to another. While two companies were able to obtain standard aircraft certification for what could be considered space support vehicles, other stakeholders said that the aircraft certification process may not be economically feasible for companies due to the cost of meeting the requirements. For example, representatives of one company said they have considered acquiring a high-performance jet for spaceflight participant training, but that the market for spaceflight participant training would not support the investment needed to purchase the aircraft and go through the current AVS certification process. In addition, FAA’s standard aircraft-certification process is not well suited for the types of aircraft that space support companies would like to use.
As mentioned earlier, AVS regulates the safety of aircraft by certifying aircraft to provide a reasonable expectation of safety. According to FAA officials, aircraft manufacturers are typically set up to work with FAA on the certification process, which is an ongoing process as the aircraft are designed and built. The process is not designed for single-production aircraft like those launch companies are developing or retired military jets that companies would like to use for spaceflight participant training. Further, if an aircraft with a standard airworthiness certificate is modified or used for a purpose other than its original one, including for space support, FAA regulations for the standard aircraft certification process require documentation of all modifications demonstrating that they comply with applicable regulations. AVS allows certain aircraft to fly under an experimental certificate, but companies are prohibited from operating these aircraft to carry persons or property for compensation or hire—meaning companies cannot receive money for carrying passengers or cargo. Operators can apply for experimental certificates for unique aircraft that have not been approved under the AVS certification process. Experimental certificates can be issued for showing compliance with regulations; exhibition (such as air shows or movie production); air racing; and conducting market surveys. FAA has provided experimental certificates for some vehicles that could be used for space support services. For example, experimental certificates have been issued for aircraft that are part of a hybrid launch vehicle system for testing and further aircraft development.
However, because current regulations do not allow the owners of these experimentally certified vehicles to carry persons or property for compensation, companies are not allowed to use experimentally certified aircraft for spaceflight participant training or to transport cargo on a hybrid launch system. The restriction on using experimentally certified aircraft to carry persons or property for compensation has limited some companies’ ability to operate in the space-support services market. Three stakeholders we interviewed said they would like to operate space support vehicles but are having a difficult time securing funding from investors because of market uncertainty and not knowing whether they will be allowed to operate them. In addition, some of the companies we interviewed have training operations in other countries because they are not allowed to operate specific aircraft in the United States under current laws and regulations. Further, one company we interviewed has a spaceflight participant training program that received a safety approval from AST but does not own a vehicle for the training. Company representatives said that they do not want to invest in a space support vehicle until they know whether they will be allowed to operate it. Allowing companies to use experimentally certified aircraft for compensation or hire would require a regulatory change. Some of the stakeholders we interviewed said that a Letter of Deviation Authority (LODA) from an experimental certificate may be an option for operating for compensation or hire, but FAA officials we interviewed said that LODAs apply only to pilot training and cannot be used for spaceflight participant training. As described previously, AST issues licenses and permits for commercial space launches. This process includes issuing experimental permits for the development of hybrid vehicles that are connected with a launch activity.
When the aircraft component of a hybrid launch vehicle is used in non-launch operations, companies must go through AVS's certification process to obtain an experimental certificate and operate under aviation regulations. When hybrid launch vehicles are being developed for launch activities, companies operate under commercial space regulations, and FAA may issue an experimental permit or a license. Similar to experimental certificates, companies with experimental permits are prohibited from carrying property or human beings for compensation or hire, according to statute. While companies can perform activities such as conducting test flights, they may not use hybrid launch vehicles to receive compensation for carrying persons or property. For example, companies would not be able to receive compensation for carrying a researcher. When issuing experimental permits, AST's process focuses on minimizing risks to ensure the safety of third parties. As discussed below, FAA is developing a report that would help address this issue specifically for hybrid launch vehicles used in non-launch, non-reentry operations. Through our discussions with FAA and interviews with stakeholders, we identified potential options for regulating space support vehicles (see table 1). One of these options is to keep in place the current process using standard airworthiness certificates for regulating aircraft that companies would like to use as space support vehicles. Other options would require statutory and/or regulatory changes to allow for the operation of space support vehicles. Each of these regulatory options raises issues to be considered. When asked what changes, if any, should be made to FAA's current regulatory process to oversee aircraft that might provide support services for the commercial space transportation industry, stakeholders had mixed views. Twenty-five of the 37 stakeholders expressed some opinion about changing the regulatory process. 
While 11 of these 25 stakeholders did not see a problem with the current approach for space support vehicles, 14 of 25 expressed an interest in a change. Eleven of the 25 stakeholders who expressed an opinion said that the current regulatory process under AVS is the best approach for regulating space support vehicles. These stakeholders prefer the current regulatory approach for the following reasons: They said that the AVS certification process could best protect participants and third parties from potential safety risks. They said that the technical expertise for ensuring that aircraft are safe for participants is within AVS. Some said that certain proposed space support activities—such as using retired military jets to provide spaceflight participant training—are not legitimate space activities but are in fact recreational aviation activities and should therefore be regulated along with other aviation activities. Some stakeholders said that the current AVS system is preferable because AST is overburdened and its staff needs to focus on other commercial space activities, such as issuing launch licenses. Some stakeholders would like these activities to remain under AVS to keep them separate from commercial space launch activities, especially in the public eye. For example, three stakeholders we interviewed were worried that a crash involving a space support vehicle might negatively impact the entire commercial space transportation industry. Two stakeholders expressed concern that moving space support vehicles to AST would place these activities under the informed consent regime; accidents might then impact the safety numbers and mar the image, and thus the attractiveness, of the future commercial space tourism industry. Other stakeholders we interviewed believe that AST, rather than AVS, would be the preferable FAA office to provide regulatory oversight. 
Specifically, 6 of the 25 stakeholders that expressed an opinion said that space support vehicles should be regulated under AST, citing the following reasons: Two stakeholders interviewed said that AST staff are familiar with the commercial space transportation industry and are therefore in the best position to determine if a certain vehicle is necessary for commercial space transportation activities. Five stakeholders interviewed said that they are already working with AST for other purposes, such as obtaining launch licenses, and would prefer to continue working with one office to streamline the process. Two of the stakeholders who preferred AST cited the office's statutory informed-consent regime, which, instead of prohibiting certain commercial space transportation activities, ensures that participants are made aware of an activity's potential risks. In addition, some stakeholders were interested in a combined approach. Specifically, 6 of the 13 launch companies that we interviewed said that all space support vehicles should be regulated by AVS, except hybrid launch vehicles, which they would prefer to be regulated by AST. See the discussion below on FAA's proposal regarding regulation of the aircraft portion of a hybrid launch system when it is operating as a space support vehicle. Two stakeholders said that companies should have the choice to work with AVS or AST. Further, the Commercial Spaceflight Federation (CSF) has been discussing with its membership how space support vehicles should be regulated. CSF's discussion focuses primarily on the use of experimental aircraft, such as former military jets, for compensation or hire. CSF representatives indicated that some of its members believe that Congress should direct the Administrator of FAA to authorize spaceflight training flights through rulemaking, but that these flights should remain under AVS. 
However, they noted that if a company has gone through the current difficult certification process for a certain capability, such as using a standard certified aircraft to provide periods of microgravity through a series of parabolic maneuvers, then companies should not need to use an exemption to replicate this service. Further, to help minimize safety risks if space support vehicle flights were to be authorized, these representatives said that such flights should begin and end at a spaceport. In addition, passengers should be notified that the aircraft are not certified as safe under AVS's aircraft rules and are not licensed as space transportation—essentially an informed consent regime. According to CSF representatives, a benefit of this approach is that it should enable new capabilities that cannot exist within the current legal and regulatory framework while helping minimize safety risks. According to FAA officials, the commercial space transportation industry is evolving, and AST and AVS have worked with companies individually to determine how they can legally operate within the current regulatory system. However, FAA officials acknowledge that this issue is potentially growing as more companies try to figure out how to cost-effectively serve what they see as a potential market—supporting commercial space transportation. Federal internal control standards state that conditions affecting an agency, such as FAA, and its environment continually change and that these changing conditions often prompt new risks or changes to existing risks that need to be assessed. FAA has taken steps to assess the licensing and permitting process for hybrid launch vehicles; however, it has not assessed whether space support vehicles are needed to meet the potential research, training, and other needs of the commercial space transportation industry, and whether it should propose changes that would accommodate all aircraft that could be used as space support vehicles. 
FAA officials said that their views on non-launch and non-reentry operations of hybrid launch vehicles will be expressed in their report that was mandated under the U.S. Commercial Space Launch Competitiveness Act. However, this report only focuses on one type of vehicle—hybrid launch vehicles. As we described previously, hybrid launch systems contain elements of both aircraft and rocket-powered vehicles. These vehicles take off horizontally and then launch the spacecraft once they reach a certain altitude. Currently, the portion of a hybrid launch system that can operate like an aircraft is regulated as an aircraft by AVS through experimental certificates when it is not engaged in a launch. While the U.S. Commercial Space Launch Competitiveness Act required FAA to report on approaches for streamlining the licensing and permitting process of non-launch, non-reentry operations of hybrid launch vehicles related to space transportation, it did not require FAA to develop a proposed regulatory framework. Although Congress did not require FAA to develop a regulatory framework for vehicles that could be used in support of space activities, federal internal control standards state that federal agencies should identify, analyze, and respond to significant changes that could impact the internal control system—the mechanism by which an entity's oversight provides reasonable assurance that the agency's objectives will be achieved. Further, they state that conditions affecting an agency, such as FAA, and its environment continually change and that these changing conditions often prompt new risks or changes to existing risks that need to be assessed and addressed. 
While FAA has taken steps to handle the safety issues of these vehicles on an individual basis and is assessing approaches for streamlining the licensing and permitting process of hybrid launch vehicles, it has not examined how its regulatory framework should change, if at all, to address the potential growth and related risks in the use of space support vehicles. Thus, some stakeholders we spoke to are delaying investments in space support vehicles. The U.S. commercial space transportation industry has seen significant development in the past decade. As the industry evolves, companies are considering how to provide additional services to support the industry's needs. While some companies are using certified aircraft to provide space support services such as spaceflight participant training, other companies would like to use vehicles such as retired military jets and hybrid launch vehicles to provide such services. FAA's current regulatory framework applies to aircraft; however, the aircraft that some companies would like to use to provide space support services do not fit into this framework. As stakeholders recognized, a change in regulatory regimes may impact safety and streamline the regulatory process. For example, while FAA's current regulations help ensure passenger safety, they also prevent some companies from providing space support services. While FAA has started considering this issue, especially for hybrid launch systems, it has not determined whether space support vehicles are needed to meet the potential research, training, and other needs of the commercial space industry, nor has it fully examined its current regulations as they relate to space support vehicles and documented the results of such an assessment. Since FAA has not conducted a comprehensive assessment of how space support activities fit under its aviation or commercial space transportation regulatory regimes, officials from some U.S. 
companies told us they are delaying investments in space support vehicles. As a result, it is uncertain if companies will be able to use space support vehicles for potentially useful spaceflight participant training and research services to meet the future needs of the commercial space transportation industry. To respond to changes in the aviation and commercial space transportation industries, we recommend that the Secretary of Transportation direct the FAA Administrator to fully examine and document whether the current regulatory framework is appropriate for aircraft that could be considered space support vehicles, and if not, suggest legislation or develop regulatory changes, or both, as applicable. We provided a draft of this report to the Department of Transportation for review and comment. DOT is not providing comments on the recommendation at this time, but will provide a detailed response to the recommendation within 60 days of the final report's issuance. DOT provided technical comments, which we incorporated into the report, as appropriate. In addition, to verify information, we provided a draft of this report to NASA for review and comment. NASA provided technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the Secretary of Transportation, the Administrator of the FAA, and the Administrator of NASA, as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
In addition to the individual named above, Cathy Colwell (Assistant Director), Stephanie Purcell (Analyst in Charge), Namita Bhatia-Sabharwal, Dave Hooper, Sara Ann Moessbauer, Amy Rosewarne, and Travis Schwartz made key contributions to this report. | As the commercial space transportation industry has grown significantly in the last decade, a related industry has emerged that plans to complement the commercial space industry by using vehicles called space support vehicles to conduct space-related activities, but not launch into space. The U.S. Commercial Space Launch Competitiveness Act of 2015 includes a provision for GAO to review the uses for space support vehicles and services and any barriers to their use. This report addresses stakeholder views on (1) potential uses for space support vehicles, (2) challenges that companies may face when attempting to use these vehicles, and (3) how these vehicles should be regulated. GAO reviewed prior GAO and industry reports, relevant laws and regulations, and interviewed officials on two proposals for regulating space support vehicles. GAO interviewed officials at FAA and the National Aeronautics and Space Administration and 37 legal experts and stakeholders from industry organizations, launch companies, space support companies, and spaceports—identified by agency and industry officials. Company officials GAO interviewed identified potential uses for “space support vehicles”—which include a variety of aircraft from high-performance jets to balloons and the aircraft portion of a hybrid launch system (a vehicle that contains elements of both an aircraft and a rocket-powered launch vehicle)—but the size of the market for these uses is unclear. Company officials said they plan to use space support vehicles to train spaceflight participants and to conduct research in reduced gravity environments. 
For example, some company officials said they would like to use high-performance jets to train future spaceflight participants by exposing them to physiological and psychological effects encountered in spaceflight. Other company officials said they would like to use space support vehicles to research how objects or people react in reduced gravity environments. It is difficult to know the size of the market for spaceflight training and research, as GAO found no studies on these markets. However, stakeholders said they expect interest in research to increase. Some company officials said the Federal Aviation Administration's (FAA) regulatory framework presents a market challenge because companies cannot get FAA approval to use the aircraft they would like to use to carry passengers or cargo for compensation, thus limiting their ability to operate in the market. FAA's Office of Aviation Safety (AVS) regulates aircraft that companies would like to use as space support vehicles by issuing standard and experimental certificates that help ensure safety. While officials from two companies GAO interviewed have received standard aircraft certification for their space support vehicle, others said the standard certification process is lengthy and not designed for the type of vehicles they would like to use, such as unique, single-production aircraft or retired military jets. In addition, FAA regulations do not allow companies to receive compensation for carrying people or property on an aircraft operating under an experimental certificate. As a result, some of the companies GAO interviewed have training operations in other countries where they can receive payment for the activity. Further, FAA's Office of Commercial Space Transportation (AST)—the office that oversees commercial space activities—is authorized to regulate only commercial space activities, such as launches, focusing on the safety of third parties. 
According to FAA officials, a statutory or regulatory change would be needed to allow companies to use space support vehicles that do not meet AVS's standard certification requirements for compensation. Stakeholders GAO interviewed have mixed views on how FAA should regulate space support vehicles; some companies believe the current regulatory approach is appropriate, while others believe the system should be changed in the face of new technology and commercial space development. While FAA has taken steps to assess the licensing and permitting process for hybrid launch vehicles, it has not assessed whether space support vehicles are needed and if it should propose changes that would accommodate all aircraft that could be used as space support vehicles. Thus, some U.S. company officials said they are delaying investments in space support vehicles, and therefore, it is uncertain if they will be able to use them to meet the future needs of the commercial space transportation industry. The Secretary of the Department of Transportation (DOT) should direct the FAA Administrator to fully examine and document whether the FAA's current regulatory framework is appropriate for space support vehicles and, if not, suggest legislative or regulatory changes, or both, as applicable. DOT provided technical comments; however, it did not comment on the recommendation at this time. |
VA deployed the first two of four releases of its long-term system solution by its planned dates, thereby providing improved claims-processing functionality to all regional processing offices, such as the ability to calculate original and amended benefit claims. In addition, the Agile process allowed the department the flexibility to accommodate legislative changes and provide functionality according to business priorities, such as housing rate adjustments. However, key features of the solution were not completed as intended in the second release because the department found some functionality to be more complex than anticipated. Specifically, interfaces to legacy systems and the conversion of data from systems in the interim solution were not completed as intended in the second release. Due to these delays, VA planned to reprioritize what functionality would be included in its third release. Also, for its fourth release, the department had reduced significant planned functionality—veteran self-service capability. While VA intends to provide this capability after the fourth release within the long-term system solution or under a separate initiative, it is unclear what functionality will be delivered in the remaining two releases when it deploys the system in December 2010. In using an Agile approach for this initiative, VA is applying lessons learned and has taken important first steps to effectively manage the IT project by establishing a cross-functional team that involves senior management, governance boards, and key stakeholders. However, the department had not ensured that several key Agile practices were performed. Measurable goals were not developed, and the project progressed without bidirectional traceability in its requirements. 
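Bidirectional traceability of the kind described here can be illustrated with a minimal sketch. All requirement and provision identifiers below are hypothetical, standing in for links between system requirements and the legislation, policies, and business rules from which they derive; they are not drawn from VA's actual artifacts.

```python
# Minimal sketch of a bidirectional traceability check. Every requirement
# should trace backward to a source provision (legislation, policy, or
# business rule), and every provision should trace forward to at least one
# requirement. All identifiers are hypothetical, for illustration only.

provisions = {"CH33-TUITION", "CH33-HOUSING", "CH33-BOOKS"}

# Forward links captured during requirements analysis: requirement -> provision.
requirement_links = {
    "REQ-001": "CH33-TUITION",
    "REQ-002": "CH33-HOUSING",
}

def trace_gaps(requirement_links, provisions):
    """Return requirements lacking a valid source and provisions lacking a requirement."""
    untraced = {r for r, p in requirement_links.items() if p not in provisions}
    uncovered = provisions - set(requirement_links.values())
    return untraced, uncovered

untraced, uncovered = trace_gaps(requirement_links, provisions)
# Here no requirement is untraced, but the book-stipend provision has no
# requirement yet -- exactly the kind of gap bidirectional tracing surfaces.
```

Checking in both directions is the point: forward tracing catches requirements with no legal or policy basis, while backward tracing catches provisions the system would silently fail to implement.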
Additionally, in developing the system, VA did not establish a common standard and consistent definition for work to be considered “done” or develop oversight tools to clearly communicate velocity and the changes to project scope over time. Testing deficiencies further hindered VA’s assurances that all critical system defects would be identified. Until VA improves these areas, management does not have the visibility it needs to clearly communicate progress to stakeholders and estimate when future system capabilities will be delivered. Additionally, reduced visibility and unresolved issues in its development processes may result in the department continuing to remove functionality that was expected in future releases, thus delivering a system that does not fully and effectively support the implementation of education benefits as identified in the Post-9/11 GI Bill. To help guide the full development and implementation of the Chapter 33 long-term solution, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to take the following five actions: establish performance measures for goals and identify constraints to provide better clarity in the vision and expectations of the project; establish bidirectional traceability between requirements and legislation, policies, and business rules to provide assurance that the system will be developed as expected; define the conditions that must be present to consider work “done” in adherence with agency policy and guidance; implement an oversight tool to clearly communicate velocity and the changes to project scope over time; and improve the adequacy of the unit and functional testing processes to reduce the amount of system rework. We received written comments on a draft of this report from the Secretary of Veterans Affairs and VA’s Assistant Secretary for Information and Technology. 
In the Secretary’s comments, reproduced in appendix II, VA concurred with three of our recommendations and did not concur with two recommendations. Specifically, the department concurred with our recommendation to establish performance measures for goals and identify constraints to provide better clarity in the vision and expectations of the project. VA noted that it plans to develop performance measures consistent with automating the Post-9/11 GI Bill by March 2011. While this is a positive step, as we noted, it is also important that the department identify any project or business constraints to better clarify the vision and expectations of the system. VA also concurred with our recommendation that it establish bidirectional traceability between requirements and legislation, policies, and business rules to provide assurance that the system will be developed as expected. The department stated that it plans to establish traceability between its business rules for the long-term solution and the legislation by June 2011. Additionally, VA concurred with our recommendation to define the conditions that must be present to consider work “done” in adherence with department policy and guidance. VA noted that the initiative’s fiscal year 2011 operating plan outlines these conditions at the project level and that it intends to clarify the definition at the working group level by December 2010. VA did not concur with our recommendation that it implement an oversight tool to clearly communicate velocity and the changes to project scope over time. The department indicated that development metrics and models had already been established and implemented to forecast and measure development velocity. In this regard, as our briefing noted, department officials stated that they previously reported project-level metrics during the first release, and based on lessons learned, decided to shift to reporting metrics at the development team level. 
While it is important that VA established the capability to track team-level metrics, it is also important to track and clearly report how changes to the system development at the team level affect the overall project-level scope over time. Specifically, without the overall velocity—a key mechanism under the Agile methodology—VA does not have the information to understand the expected effort to complete the total scope of work and the associated length of time to do so. The overall velocity provides an understanding of the complexity and difficulty in accomplishing tasks and provides VA management with information to better understand project risks. This visibility is a key concept of the Agile methodology that VA has chosen to implement for this project. Without this level of visibility in its reporting, management and the development teams may not have all the information they need to fully understand project status and generate the discussion and feedback necessary for continuous process improvement. Therefore, we continue to believe that our recommendation for VA to clearly communicate velocity and project scope changes can only strengthen the department’s development process to be more in line with Agile system development practices. VA also did not concur with our recommendation to improve the adequacy of the unit and functional testing processes to reduce the amount of system rework. While the department noted that its testing approach is compatible with Agile development, it also acknowledged in other technical comments the department provided that there were instances of inconsistencies of user stories for capabilities being marked “done” and the user stories we reviewed during the second release showed significant weaknesses in the quality of testing performed. 
While we agree that VA’s testing approach is consistent with Agile methodology, these weaknesses we identified demonstrate ineffective testing and the need for a consistent and agreed-upon definition of “done.” Further, the program officials noted that their approach focused on users identifying defects at the end of the release, which, as we have noted, can be problematic because it is difficult for users to remember all the items and parameters needed for functionality. Without increased focus on the quality of testing early in the development process, VA risks delaying functionality and/or deploying functionality with unknown defects that could require future rework that may be costly and ultimately impede the claims examiners’ ability to process claims efficiently. Therefore, we continue to believe that our recommendation to improve the adequacy of unit and functional testing is needed to improve the effectiveness of VA’s process called for in the Agile methodology. This would provide stakeholders greater assurance that functionality developed during each iteration is of releasable quality before it is presented to users as completed work in accordance with Agile system development practices. In addition, VA provided other technical comments on a draft of this report. In the comments, the department provided additional clarification on why there were delays to functionality and how they affected the release schedule. 
Specifically, the department stated that the governance structure it established for the initiative provided the necessary management, support, and prioritization of development efforts to balance desired functionality against the development challenges and constraints. The department noted that, among other things, delays in the first two releases were a result of additional functionality prioritized and developed, such as housing rate adjustments and the ability to automatically generate letters for veterans, as well as unanticipated challenges, such as the complexity of data conversion tasks. Further, it noted that decisions and prioritizations were primarily made to minimize impact on field offices and to support fall enrollment, and that they impacted the development capacity to support the capabilities that could be developed in the third release. VA also offered other comments, which were incorporated as appropriate. Beyond the department’s comments on our recommendations, the Assistant Secretary for Information and Technology provided additional written comments, reproduced in appendix III, which noted concerns with our draft report. In these comments, the Assistant Secretary stated that the department believes we fell short of meeting the objectives for this report by omitting key facts and presenting an unnecessarily negative view of VA’s status and effectiveness to Congress. In particular, the Assistant Secretary stated that VA had successfully converted all processing of new Post-9/11 GI Bill claims to the long-term solution prior to the commencement of the fall 2010 enrollment process and that processing with the new system has been nearly flawless. He added that Veterans Benefits Administration claims processors like the new system and find it easy and effective to use. We are encouraged to hear that the department is experiencing positive results from the system development efforts that have been accomplished. 
However, as noted in our briefing, system functionality that was envisioned to (1) provide self-service capabilities to veterans and (2) support end-to-end processing of benefits by December 2010 was deferred. Further, as the vision for its new education benefits system evolves, it is essential that the department document these changes to ensure that its expectations and goals for the system are measurable and aligned at all levels. In addition, the Assistant Secretary stated that limited exposure to the Agile methodology possibly caused us to present incorrect assumptions as facts, such as our evaluation of the department’s testing. Our audit team, which included a trained ScrumMaster, examined the department’s use of Agile Scrum practices against documented and accepted methodologies and consulted with an expert in the field who is not only a ScrumMaster but also an Agile Scrum trainer with extensive experience in evaluating Agile system development projects. At the initiation of our study, we discussed our evaluation approach with program officials and, throughout the study, held meetings with them to ensure our full understanding of the department’s requirements and testing processes. We did not take issue with the Agile testing approach used by VA. However, we found deficiencies in testing. Further, we presented the results of our observations to program officials in June 2010, at which time they did not express any such concerns or otherwise comment on our evaluation of the testing. Further, given the evolving nature of Agile system development, it is important to ensure that work that is presented as “done” and demonstrated to the users at the end of an iteration has undergone adequate testing to prevent inaccurate information from being provided. In addition to weaknesses we identified in the testing of select user stories, the department identified a number of defects during the development of the second release. 
In our view, VA has an opportunity to improve the adequacy of its unit and functional testing, which occurs during each iteration, to help identify and resolve any defects before any related functionality is presented to users as completed work at the end of the iteration. As we noted, the department agreed that it needed to clarify its definition of “done” and ensure it is being applied consistently. Such a definition often includes fully tested functionality that has no defects. During our review, we observed on multiple occasions work being presented as “done” without having completed all testing. Improved testing up front can reduce the number of defects found later in user acceptance testing and production that would require more costly rework. Further, the Assistant Secretary stated that the department believes that we missed a substantial opportunity to positively influence real change by not focusing on the fact that VA had adopted the Agile methodology after many failings with other IT systems development efforts that used waterfall development methodologies. We agree that VA has taken an important step toward improving its development capability and that it has developed significant segments of its new education benefits system with its new methodology. However, as we noted in our briefing, the department had not fully established metrics for its goals, which are essential to fully gauge its progress beyond past initiatives. While we believe that VA has made substantial progress in implementing a new process to develop its system, we stand by our position that there is still an opportunity for the department to improve its new development process in accordance with our recommendations. Doing so would further increase the likelihood that VA fully develops and delivers the end-to-end benefits processing capabilities envisioned to support the Post-9/11 GI Bill and the needs of veterans. 
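A shared, consistently applied “definition of done” of the kind discussed above is often expressed as an explicit checklist that gates whether a user story may be presented as completed work. The sketch below is illustrative only; the criteria and story fields are invented, not VA's actual ones.

```python
# Minimal sketch of a "definition of done" gate: a user story may only be
# presented as completed work if every agreed criterion is satisfied.
# Criteria and story fields are invented, for illustration only.

DEFINITION_OF_DONE = ("unit_tested", "functionally_tested", "no_open_defects")

def is_done(story):
    """A story is 'done' only when all agreed criteria hold."""
    return all(story.get(criterion, False) for criterion in DEFINITION_OF_DONE)

story = {
    "id": "US-42",
    "unit_tested": True,
    "functionally_tested": True,
    "no_open_defects": False,   # a defect is still open
}
# is_done(story) is False: this story should not be demoed as completed work.
```

Making the gate explicit and automatic is the point: a story that has not completed all testing cannot quietly be reported as “done,” which addresses the inconsistency the briefing observed.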
We are sending copies of this report to appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In June 2008, Congress passed the Post-9/11 Veterans Educational Assistance Act of 2008 (commonly referred to as the Post-9/11 GI Bill or Chapter 33). This act amended Title 38 United States Code to include Chapter 33, which provides educational assistance for veterans and members of the armed forces who served on or after September 11, 2001. The Department of Veterans Affairs (VA) is responsible for processing claims for the Chapter 33 education benefit, which is a three-part benefit—tuition and fee payments, housing allowance, and book stipend. The benefit is determined based upon an individual's aggregate qualifying active duty service. A key milestone in the Chapter 33 legislation was the requirement that VA provide the new educational assistance benefits to service members and veterans beginning on August 1, 2009. In considering its implementation of the legislation, the department concluded that it did not have a system capable of calculating the new benefit. As a result, the department undertook an initiative to modernize its education benefits processing capabilities. VA's initiative to modernize its education benefits processing involved interim and long-term solutions to deliver new processing capabilities.
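The three-part, service-based structure of the benefit described above can be sketched in code. The tier table and amounts below are invented placeholders, not the statutory schedule; they only illustrate how each benefit component scales with an individual's aggregate qualifying active duty service.

```python
# Illustrative sketch of a three-part Chapter 33 benefit calculation.
# The tier thresholds and percentages are placeholder assumptions, not
# the statutory schedule.

ILLUSTRATIVE_TIERS = [  # (minimum months of service, benefit percentage)
    (36, 1.00),
    (30, 0.90),
    (24, 0.80),
    (18, 0.70),
    (12, 0.60),
    (6, 0.50),
    (3, 0.40),
]

def benefit_percentage(months_of_service: int) -> float:
    """Map aggregate qualifying service to a benefit percentage tier."""
    for minimum, pct in ILLUSTRATIVE_TIERS:
        if months_of_service >= minimum:
            return pct
    return 0.0

def chapter33_payment(months_of_service, tuition_and_fees, housing, books):
    """Scale each of the three benefit parts by the service-based percentage."""
    pct = benefit_percentage(months_of_service)
    return {part: round(amount * pct, 2)
            for part, amount in [("tuition_and_fees", tuition_and_fees),
                                 ("housing", housing),
                                 ("books", books)]}
```

Each claim thus requires the veteran's service dates, institution rates, and prior payments, which is why the new system needed interfaces to the legacy systems holding that data.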
According to the department, the interim solution was intended to augment existing processes by providing temporary applications and tools, such as a spreadsheet that aided claims examiners in manually collecting data from VA legacy systems and calculating the new benefits. At the same time that it began the interim solution, the department also initiated a long-term solution that was intended to fully automate the manual processes for calculating education benefits for service members and veterans. Specifically, the long-term solution was intended to provide a system to replace the interim solution as well as provide automated interfaces with existing legacy systems. The department intended to complete enough of the functionality in the long-term solution to replace the interim solution by June 2010, and to include additional capabilities for full deployment of the new education benefits system by December 2010. The legacy systems, among others, include a financial payment system, an education information system, and veteran demographic and service data systems. These legacy systems contain essential information required for calculating the benefit, such as prior benefit payments, academic institution rates, and veterans' service dates. To develop the system for its long-term solution, VA is relying on contractor assistance and is using an incremental development approach, called Agile software development, which is to deliver software functionality in short increments before the system is fully deployed. Agile software development has key practices such as working as one team. This one team is to define business priorities and, based on those priorities, deliver work in short increments. Each increment of work is inspected by the team, and the project's plans and priorities are adapted accordingly. Historically, VA has experienced significant IT development and delivery difficulties.
In the spring of 2009, the department reviewed its inventory of IT projects and identified ones that exhibited serious problems with schedule slippages and cost overruns. The department noted that an incremental approach, such as Agile software development, was considered to be an effective way to support the long-term system solution development. Agile software development is not a set of tools or a single methodology, but a philosophy based on selected values, such as: the highest priority is to satisfy the customer through early and continuous delivery of valuable software; working software is delivered frequently, from a couple of weeks to a couple of months; and working software is the primary measure of progress. For more information on Agile development, see http://www.agilealliance.org. Given the importance of delivering education benefits to veterans and their families, we were asked to review the long-term solution to determine the status of VA's development and implementation of its information technology (IT) system to support the implementation of education benefits identified in the Post-9/11 GI Bill and evaluate the agency's effectiveness in managing its IT project for this initiative. To determine the status of VA's development and implementation of the IT system to support the implementation of education benefits identified in the Post-9/11 GI Bill, we reviewed VA and contractor plans for system development; observed project status meetings; compared the actual status of development and implementation to the planned status; and discussed the department's plans and implementation with VA and contractor officials to determine the functionality completed and demonstrated.
To evaluate the agency's effectiveness in managing its IT project for this initiative, we analyzed requirements and testing artifacts for 20 segments of system features developed to determine the traceability of requirements and testing coverage; observed key agency and contractor development meetings, such as daily discussions, bi-weekly software reviews, and planning meetings, where management decisions were made and Agile practices were demonstrated; and interviewed department and contractor officials about the management and oversight of requirements, testing, and transition plans. The information on cost estimates and costs that were incurred for long-term solution development was provided by VA officials. We did not audit the reported costs and thus cannot attest to their accuracy or completeness. We conducted this performance audit at the Department of Veterans Affairs headquarters in Washington, D.C., and at a contractor facility in Chantilly, Virginia, from November 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. VA has developed and implemented the first two of four releases of software planned for its new education benefits processing system as scheduled, with these deployments occurring on March 31, 2010, and June 30, 2010. In doing so, VA provided its regional processing offices with key automated capabilities to prepare original and amended benefit claims. In addition, VA responded to legislative changes and provided for housing rate adjustments. While VA did not previously estimate costs for these releases and, as such, could not track estimated to actual costs, it has reported that about $84.6 million was obligated through July 2010.
The department noted that its Agile process allowed the flexibility to adapt to legislative changes and provide functionality according to business priorities. However, VA did not ensure that certain critical tasks were performed that were expected to be part of the second release. Specifically, it did not complete the conversion of data from systems in the interim solution to the systems developed for the long-term solution and did not complete the development of interfaces between the new system and legacy systems. (VA planned to complete interfaces to all legacy systems except for its financial payment system, which is planned for the third release.) Further, because of these delays, VA is in the process of determining and prioritizing what functionality will be developed in its third release by September 30, 2010. For its fourth release, it intends to reduce its planned functionality of providing full self-service capabilities to veterans by December 31, 2010; however, VA intends to provide this capability after its fourth release or under a separate initiative. As such, VA estimates that the total system development actual and planned obligations through 2011 will be about $207.1 million. This figure represents actual expenditures, obligated funds, and planned obligated funds through fiscal year 2011. VA has demonstrated key Agile practices that are essential to effectively managing its system development, but certain practices can be improved. Specifically, the department has ensured that teams represent key stakeholders and that specific Agile roles were fulfilled. For example, the teams consist of subject matter experts, programmers, testers, analysts, engineers, architects, and designers. The department has also made progress toward demonstrating the three other Agile practices—focusing on business priorities, delivering functionality in short increments, and inspecting and adapting the project as appropriate. However, VA can improve its effectiveness in these areas.
Specifically: To ensure business priorities are a focus, a project starts with a vision that contains, among other things, purpose, goals, metrics, and constraints. In addition, the vision should be traceable to requirements. VA has established a vision that captures the project purpose and goals; however, it has not established metrics for the project's goals or prioritized project constraints. VA officials stated that project documentation is evolving and they intend to improve their processes based on lessons learned; however, until it identifies metrics and constraints, the department will not have the means to compare the projected performance and actual results of this goal. VA has also established a plan that identifies how to maintain requirements traceability within an Agile environment; however, the traceability between legislation, policy, business rules, and test cases was not always maintained. VA stated its requirements tool did not previously have the capability to perform this function and now provides this traceability to test cases. Nonetheless, if VA does not ensure that requirements are traceable to legislation, policies, and business rules, it has limited assurance that the requirements will be fully met. To aid in delivering functionality in short increments, defining what constitutes completed work (work that is "done") and testing functionality is critical. However, VA has not yet established criteria for work that is considered "done" at all levels of the project. Program officials stated that each development team has its own definition of "done" and agreed that they need to provide a standard definition across all teams. If VA does not mutually agree upon a definition of "done" at each level, confusion about what constitutes completed work can lead to inconsistent quality, and VA may not be able to clearly communicate how much work remains.
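The traceability discussed above can be checked mechanically. A minimal sketch, assuming a simple dictionary data model rather than VA's actual requirements tool, flags requirements that lack a backward trace to legislation, policy, or business rules, or a forward trace to test cases:

```python
# Illustrative traceability records; the IDs and sources are invented.
requirements = {
    "REQ-1": {"sources": ["38 U.S.C. ch. 33"], "tests": ["TC-101", "TC-102"]},
    "REQ-2": {"sources": [], "tests": ["TC-201"]},              # no backward trace
    "REQ-3": {"sources": ["Business rule BR-7"], "tests": []},  # no forward trace
}

def traceability_gaps(reqs):
    """Return requirement IDs lacking backward or forward traceability."""
    missing_source = [r for r, v in reqs.items() if not v["sources"]]
    missing_test = [r for r, v in reqs.items() if not v["tests"]]
    return missing_source, missing_test

missing_source, missing_test = traceability_gaps(requirements)
# missing_source == ["REQ-2"]; missing_test == ["REQ-3"]
```

A check of this kind gives the bidirectional assurance the report describes: every requirement traces back to its authority and forward to at least one test case.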
In addition, while the department has established an incremental testing approach, the quality of unit and functional testing performed during Release 2 was inadequate in 10 of the 20 segments of system functionality we reviewed. Program officials stated that they placed higher priority on user acceptance testing at the end of a release and relied on users to identify defects that were not detected during unit and functional testing. Until the department improves testing quality, it risks deploying future releases that contain defects which may require rework. Agile practices call for the delivery of completed software to be defined, commonly referred to as the definition of "done." This definition is critical to the development process to help ensure that, among other things, testing has been adequately performed and the required documentation has been developed. In order for projects to be effectively inspected and adapted, management must have tools to provide effective oversight. For Agile development, progress and the amount of work remaining can be reflected in a burn-down chart, which depicts how factors such as the rate at which work is completed (velocity) and changes in overall product scope affect the project over time. While VA has an oversight tool that shows the percentage of work completed to reflect project status at the end of each iteration, it does not depict the velocity of the work completed and the changes to scope over time. Program officials stated that their current reporting does not show the changes in project scope because their focus is on metrics that are forward looking rather than showing past statistics for historical comparison. However, without this level of visibility in its reporting, management may not have all the information it needs to fully understand project status.
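The burn-down reporting described above can be illustrated with a short sketch that tracks both velocity (work completed per iteration) and changes to total scope over time; the story-point figures are invented for illustration.

```python
# Per-iteration history: (iteration, points_completed, scope_change).
# Values are illustrative placeholders, not VA project data.
iterations = [
    (1, 20, 0),
    (2, 18, +10),  # scope grew, e.g., a new legislative requirement
    (3, 22, 0),
    (4, 15, -5),   # scope deferred to a later release
]

def burn_down(total_scope, history):
    """Compute remaining work per iteration, accounting for scope changes."""
    remaining = total_scope
    rows = []
    for iteration, completed, scope_change in history:
        remaining += scope_change   # scope added or removed this iteration
        remaining -= completed      # velocity burns down the remainder
        rows.append((iteration, completed, scope_change, remaining))
    return rows

for iteration, velocity, scope_change, remaining in burn_down(200, iterations):
    print(f"iter {iteration}: velocity={velocity:>2} "
          f"scope_change={scope_change:+d} remaining={remaining}")
```

Reporting both columns, rather than a single percent-complete figure, is what lets management see whether a slipping date reflects low velocity, growing scope, or both.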
To help ensure successful implementation of the Chapter 33 initiative, we are recommending that VA establish performance measures for goals and identify constraints; establish traceability between requirements and legislation, policies, and business rules; define the conditions that must be present to consider work "done"; review and improve the unit and functional testing processes; and implement an oversight tool to clearly communicate velocity and the changes to project scope over time. We received oral comments on a draft of this briefing from VA officials, including the Deputy Assistant Secretary for Congressional and Legislative Affairs and the Assistant Secretary for Information and Technology. In their comments, the officials stated that the department was not in a position to concur or not concur with our recommendations, but planned to provide formal comments on our final report. The officials also provided technical comments, which we have incorporated in the briefing as appropriate. In recognition of their service to our country, VA provides medical care, benefits, social support, and lasting memorials to veterans, service members, and their families. VA is the second largest federal department, with more than 270,000 employees. In fiscal year 2009, the department reported incurring more than $100 billion in obligations for its overall operations. The Veterans Benefits Administration (VBA), one of VA's three line administrations, provides assistance and benefits, such as educational assistance, through four veterans' regional processing offices. The two other line administrations are the Veterans Health Administration and the National Cemetery Administration. In 2009, the department reported that it provided more than $3.5 billion in educational assistance benefits to approximately 560,000 individuals. In 2011, it expects the number of all education claims to grow by 32 percent over 2009, increasing from 1.7 million to 2.25 million.
The regional processing offices are located in Atlanta, Georgia; Buffalo, New York; Muskogee, Oklahoma; and St. Louis, Missouri. VA determined that its existing systems and manual processes were insufficient to support the demands of processing the new benefit. In October 2008, VA established its Chapter 33 initiative to develop the capability to process the new education benefit. The initiative involved both an interim and long-term solution: The interim solution, deployed in November 2009, provided applications and tools, such as a spreadsheet that aided claims examiners in manually collecting data from VA legacy systems to calculate the new education benefit. The long-term solution was expected to be complete enough to replace the interim solution by June 2010 and to include additional capabilities to provide a fully automated end-to-end system to support the delivery of education benefits by December 2010. Among other features, by December 2010, the new education benefits system was to modernize processing of new Chapter 33 awards and amendments to existing Chapter 33 claims, including automated calculations of benefits such as tuition and fee payments, housing allowance, and book stipends; increase claims processing efficiency at all regional offices, such as by providing the capability to automatically access veteran demographic and service data; interface with VA's existing legacy systems that contain information required to calculate benefits, such as a financial payment system; and create veteran self-service capabilities, such as the capability to estimate and apply for benefits online. To oversee the development and implementation of the new education benefits system, VA has formed a governance structure that includes executive-level management from VBA and the department's Office of Information and Technology (OI&T). The VBA Under Secretary for Benefits has primary responsibility for coordinating the Chapter 33 initiative.
For example, the Under Secretary ensures collaboration for the effective management and coordination of VA resources in support of the Chapter 33 implementation. To develop and implement the long-term solution, VA's OI&T entered into an interagency agreement with the Department of Defense's Space and Naval Warfare Systems Center–Atlantic (SPAWAR). SPAWAR is managing multiple contractors to develop the system and is providing technical, information assurance, and program management services. SPAWAR is also providing operational services and engineering, planning, and analysis to support application development. VA and SPAWAR work together to manage and develop the system. Specifically, VBA subject matter experts and OI&T technical representatives are part of the system development teams. In addition, contractors such as Agilex Technologies, Inc., Booz Allen Hamilton, GeoLogic, and Lockheed Martin support the Chapter 33 system development. VA planned the long-term solution as four releases:

Release 1: Provide improved claims-processing functionality, such as the ability to calculate new original awards, amend awards, and convert beneficiary data from systems supporting the interim solution to the new system; to be deployed to a limited number of claims examiners in the regional processing offices.

Release 2: Increase automation and efficiency at all regional processing offices, as well as develop interfaces to legacy systems (excluding the financial system).

Release 3: Develop an interface between the new system and the department's legacy financial system.

Release 4: Provide other end-user features to further improve processing efficiencies, such as self-service functionality aimed at improving the veteran's experience.

The cost estimate does not include maintenance costs after the end of fiscal year 2011 because program officials stated these will be budgeted under a different VBA initiative.
Carnegie Mellon Software Engineering Institute, Mary Ann Lapham, et al., Considerations for Using Agile in DOD Acquisition (Pittsburgh, Penn., April 2010). Product owner. The product owner's primary duties include making sure that all team members are pursuing a common vision for the project, establishing priorities so the highest-valued functionality is always being worked on, and making decisions that lead to a good return on investment. For a brief history of iterative and incremental development and the origins of Agile methods, see Carnegie Mellon Software Engineering Institute, Hillel Glazer, et al., CMMI® or Agile: Why Not Embrace Both! (Pittsburgh, Penn., November 2008). The Agile Manifesto was written and signed by a group of methodologists who called themselves the Agile Alliance. Basic principles are set forth in this document and include, for example, that business people and developers must work together daily and throughout the project. For more information on the creation of the Agile Manifesto, see http://agilemanifesto.org/history.html. See also Mike Cohn, Agile Estimating and Planning (Upper Saddle River, N.J.: Pearson Education, Inc., 2006) and User Stories Applied (Boston, Mass.: Pearson Education, Inc., 2004), and Ken Schwaber, Agile Project Management with Scrum (Redmond, Wash.: Microsoft Press, 2004). Team. The team includes programmers, testers, analysts, database engineers, usability experts, technical writers, architects, and designers. The team members are responsible for developing high-quality functionality as prioritized by the product owner. Project manager. The project manager focuses more on leadership than on management and is a facilitator for the team working together. In addition, he or she is responsible for removing project obstacles that may impede the team's performance. Additionally, best practices state that it is essential for a systems development team to have involvement from other stakeholders, such as executive-level management and senior management.
Such involvement helps to minimize project risk by ensuring that key requirements are identified and developed, problems or issues are resolved, and decisions and commitments are made in a timely manner. Institute of Electrical and Electronics Engineers (IEEE), Systems and software engineering – Software life cycle processes, IEEE Std. 12207-2008 (Piscataway, N.J., January 2008) and Carnegie Mellon Software Engineering Institute, CMMI for Development, Version 1.2, CMU/SEI-2006-TR-008 (Pittsburgh, Penn., August 2006). VA is implementing Agile using Scrum; for more information on the Scrum approach, see http://www.scrumalliance.org/. Pub. L. No. 111-32, June 24, 2009, amended the Post-9/11 Educational Assistance Act of 2008 by adding the Marine Gunnery Sergeant John David Fry Scholarship (see 38 U.S.C. § 3311), which includes in the act benefits for the children of service members who died in the line of duty on or after Sept. 11, 2001. Eligible children attending school may receive up to the highest public, in-state undergraduate tuition and fees, plus a monthly living stipend and book allowance under the program. (For descriptions of the decision-making bodies in the governance structure, see attachment III.) The department has also established multiple, cross-functional teams to develop the system. These teams consist of VA subject matter experts as well as contractors that are programmers, testers, analysts, database engineers, architects, and designers. These teams hold daily Scrum meetings to discuss work that has been planned and accomplished, and any impediments to completing the work. At the completion of each iteration, which in VA's case is every 2 weeks, a review meeting occurs between the cross-functional teams and VA stakeholders to review and demonstrate completed system functionality.
Following this meeting, planning sessions are held to discuss the work to be accomplished in the next iteration based on the next highest-prioritized requirements contained in user stories. In addition, VA has identified project managers from both VA and SPAWAR who focus on leadership of the initiative. These project managers monitor and facilitate meetings and provide clarification to contractors, subject matter experts, and other developers. They are also responsible for addressing impediments discussed at the review meetings. With this involvement from key stakeholders, VA has established a team structure that fulfills the key roles within an Agile team and has better positioned itself to effectively manage the initiative. Carnegie Mellon Software Engineering Institute, Capability Maturity Model® Integration for Development, Version 1.2 (Pittsburgh, Penn., August 2006), and Software Acquisition Capability Maturity Model® (SA-CMM®) version 1.03, CMU/SEI-2002-TR-010 (Penn., March 2002); and the Institute of Electrical and Electronics Engineers (IEEE), 1362-1998, IEEE Guide for Information Technology – System Definition – Concept of Operations Document (New York, N.Y., 1998). Maintaining bidirectional requirements traceability means that system-level requirements are traced both backward to higher-level business or operational requirements, and forward to system design specifications and test plans. Department officials noted that prior to this upgrade, they were able to establish traceability to test cases only manually. See also Information Technology: Customs Automated Commercial Environment Progressing, but Need for Management Improvements Continues, GAO-05-267 (Washington, D.C.: Mar. 14, 2005); and Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed, GAO-06-318T (Washington, D.C.: Jan. 25, 2006). During our review, we observed on multiple occasions that teams presented user stories as "done" without having completed all testing.
Program officials stated that each development team has its own definition of "done" and agreed that they need to provide a standard definition across all teams. If VA does not mutually agree upon and document this definition at each level and ensure it conforms to the department's standards, conventions, and guidelines, confusion about what constitutes completed work could lead to inconsistent quality and unreliable performance and progress reporting. Further, in the absence of an agreed-upon definition, VA is not able to clearly communicate how much work remains for completing the system. For further information on unit and functional testing, see GAO, Indian Trust Funds: Challenges Facing Interior's Implementation of New Trust Asset and Accounting Management System, GAO/T-AIMD-99-238 (Washington, D.C.: July 14, 1999) and GAO, Financial Management Systems: Additional Efforts Needed to Address Key Causes of Modernization Failures, GAO-06-184 (Washington, D.C.: March 27, 2006). The defect numbers were reported as of June 29, 2010. Program officials described high-priority defects as defects that could "break" the system and must be fixed. For more information on how defects result in unplanned rework and increased costs, see GAO-06-184. Program officials stated that they had previously used a burn-down chart that showed velocity for all teams in Release 1. However, in Release 2, they decided that they would provide burn-down charts at the team level, but not at the overall project level.
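A shared definition of "done" of the kind discussed above can be made concrete, and even machine-checkable, as in the following sketch. The criteria listed are an illustrative baseline, not VA's agreed-upon definition.

```python
# Illustrative, shared definition of "done" applied to a user story before
# it is demonstrated as completed work. Criteria names are assumptions.
DEFINITION_OF_DONE = (
    "unit_tests_pass",
    "functional_tests_pass",
    "no_open_high_priority_defects",
    "documentation_updated",
)

def is_done(story):
    """A story is 'done' only if every criterion is satisfied."""
    return all(story.get(criterion, False) for criterion in DEFINITION_OF_DONE)

story = {
    "id": "US-42",  # hypothetical story ID
    "unit_tests_pass": True,
    "functional_tests_pass": True,
    "no_open_high_priority_defects": False,  # a system-breaking defect remains
    "documentation_updated": True,
}
# is_done(story) → False: the story should not be demonstrated as complete.
```

Because every team evaluates the same criteria, "done" means the same thing at the team and project levels, and the amount of remaining work can be reported consistently.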
To help guide the development and implementation of the Chapter 33 long-term solution, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to take the following five actions: establish performance measures for goals and identify constraints to provide better clarity in the vision and expectations of the project; establish bidirectional traceability between requirements and legislation, policies, and business rules to provide assurance that the system will be developed as expected; define the conditions that must be present to consider work "done" in adherence with agency policy and guidance; improve the adequacy of the unit and functional testing processes to reduce the amount of system rework; and implement an oversight tool to clearly communicate velocity and the changes to project scope over time.

Agency Comments and Our Evaluation

We received oral comments on a draft of this briefing from VA officials, including the Deputy Assistant Secretary for Congressional and Legislative Affairs and the Assistant Secretary for Information and Technology. In the comments, the Deputy Assistant Secretary stated that the department was not in a position to concur or not concur with our recommendations but planned to provide formal comments on our final report. The officials provided additional clarification on why the department experienced delays in data conversion. Specifically, they noted that, consistent with Agile practices, the department reprioritized work and adapted the system to add selected functionality, such as the 2010 housing rate adjustments. They added that the Joint Executive Board had made this decision to ensure that claims examiners would have the most recent rates to process benefits for the fall 2010 enrollment season. Additionally, the department recognized lessons learned with the Agile approach, and it intends to incorporate them in future development work.
The officials provided other technical comments, which we have incorporated as appropriate. In further comments, the Assistant Secretary for Information and Technology emphasized that using Agile system development for this initiative allowed the department to provide significant system functionality incrementally, far exceeding its past IT initiatives. Specifically, he noted that the project had delivered working software close to schedule and had been more successful than past system development efforts.

The reported cost figures included funds obligated and transferred to SPAWAR but not yet expended; planned and obligated costs to complete Release 3 (VA and SPAWAR program costs); and planned but not obligated fiscal year 2011 costs to complete Release 4 (VA and SPAWAR program costs).

Joint Executive Board: Co-chaired by the Under Secretary for Benefits and the Assistant Secretary for Information and Technology, this senior governing body provides executive-level oversight and strategic guidance for implementation of the initiative. It is responsible for ensuring that communications, strategies, planning, and deliverables enable the initiative to meet its mission, goals, and objectives.

Executive Steering Committee: Co-chaired by the Director of Education Service and the Program Manager, the Steering Committee advises the Joint Executive Board on requirements, policies, and standards. It is responsible for the oversight of program planning and execution to ensure that the strategic vision is incorporated into the business operations.

Chapter 33 Working Group: Co-chaired by the Leader of the Veterans Benefits Administration (VBA) Education Service Program Executive Office and the Dependency Lead, Office of Information and Technology, Chapter 33 Program Management Office, the Working Group provides oversight and governance to workgroups leading programmatic and technical interests of the initiative.
It defines and prioritizes business requirements, identifies and escalates issues and risks, and makes recommendations to the Executive Steering Committee on which requests to approve and resource. Eight workgroups, led by Education Service and Office of Information and Technology staff, provide daily operations management and ensure that requirements are identified and defined for each of the following areas: Benefits Delivery Network/Financial Accounting System, Business Requirements, Certification and Accreditation/Security, Infrastructure, Interfaces, Strategic Planning, Training, and the Security Review Board. In addition to the contact named above, key contributions to this report were made by Christie M. Motley, Assistant Director; Rebecca E. Eyler; David A. Hong; Ashley D. Houston; John C. Martin; and Charles E. Youman.

The Post-9/11 GI Bill was signed into law in June 2008 and provides educational assistance for veterans and members of the armed forces who served on or after September 11, 2001. The Department of Veterans Affairs (VA) is responsible for processing claims for these new education benefits. VA concluded that its legacy systems and manual processes were insufficient to support the new benefits and, therefore, began an initiative to modernize its benefits processing capabilities. The long-term solution was to provide a fully automated end-to-end information technology (IT) system to support the delivery of benefits by December 2010. VA chose an incremental development approach, called Agile software development, which is intended to deliver functionality in short increments before the system is fully deployed. GAO was asked to (1) determine the status of VA's development and implementation of its IT system to support the implementation of education benefits identified in the Post-9/11 GI Bill and (2) evaluate the department's effectiveness in managing its IT project for this initiative.
VA has made important progress in delivering key automated capabilities to process the new education benefits. Specifically, it deployed the first two of four releases of its long-term system solution by its planned dates, thereby providing regional processing offices with key automated capabilities to prepare original and amended benefit claims. In addition, the Agile process allowed the department the flexibility to accommodate legislative changes and provide functionality according to business priorities. While progress has been made, VA did not ensure that certain critical tasks were completed that were initially expected to be included in the second release by June 30, 2010. For example, the conversion of data from systems in the interim solution to systems developed for the long-term solution was not completed until August 23, 2010. Because of the delay, VA planned to reprioritize the functionality that was to be included in the third release. Further, while VA plans to provide full self-service capabilities to veterans, it will not do so in the fourth release as scheduled; instead, it intends to provide this capability after the release or in a separate initiative. VA reported obligations and expenditures for these releases, through July 2010, to be approximately $84.6 million, with additional planned obligations of $122.5 million through fiscal year 2011. VA has taken important steps by demonstrating a key Agile practice essential to effectively managing its system development--establishing a cross-functional team that involves senior management, governance boards, key stakeholders, and distinct Agile roles. In addition, VA made progress toward demonstrating three other Agile practices--focusing on business priorities, delivering functionality in short increments, and inspecting and adapting the project as appropriate.
Specifically, to ensure business priorities are a focus, VA established a vision that captures the project purpose and goals and established a plan to maintain requirements traceability. To aid in delivering functionality, the department established an incremental testing approach. It also used an oversight tool, which was intended to allow the project to be inspected and adapted by management. However, VA could make further improvements to these practices. In this regard, it did not (1) establish metrics for the goals or prioritize project constraints; (2) always maintain traceability between legislation, policy, business rules, and test cases; (3) establish criteria for work that was considered "done" at all levels of the project; (4) provide for quality unit and functional testing during the second release, as GAO found that 10 of the 20 segments of system functionality were inadequate; and (5) implement an oversight tool that depicted the rate of the work completed and the changes to project scope over time. Until VA improves these areas, management will lack the visibility it needs to clearly communicate progress, and unresolved issues in its development processes may prevent VA from maximizing the benefits of the system. To help guide the full development and implementation of the long-term solution, GAO is recommending that VA take five actions to improve its development process for its new education benefits system. VA concurred with three of GAO's five recommendations and provided details on planned actions, but did not concur with the remaining two.
Research has been a part of the Forest Service’s mission since the agency’s creation in 1905, and several Forest Service research facilities date back to the early 1900s. FS R&D’s research and development activities take place within seven research stations (see fig. 1). Five of the seven are focused regionally, with each covering a multistate region; these are the Pacific Northwest, Pacific Southwest, Rocky Mountain, Northern, and Southern research stations. In contrast, the remaining two stations—the Forest Products Laboratory and the International Institute of Tropical Forestry—are not regionally focused but, rather, concentrate on specific research topics. The stations are decentralized, with the director of each station reporting directly to the Chief of the Forest Service. According to FS R&D officials, the geographic alignment of these stations helps foster understanding of, and focus research attention on, issues of local or regional significance. For example, large urban concentrations in the area covered by the Northern Research Station make urban forestry and social science a research priority. Within each of the five geographically based research stations, multiple laboratories carry out specific research activities. In addition, FS R&D maintains 81 experimental forests and ranges across the nation, which serve as sites for most of the agency’s long-term research. These sites—which range in size from about 115 acres to over 55,000 acres—represent most of the nation’s major forest ecosystems. With some sites dating back to the early 1900s, they have allowed FS R&D to compile long-term data about how forests respond to changes in land use, climate, and various natural and human-caused disturbances. FS R&D’s work is carried out by research scientists, technicians, and other professionals, using techniques from a diverse set of disciplines. The mission of FS R&D is multifaceted. 
In developing and delivering knowledge and innovative technology, the agency is responsible both for long-term, basic research to increase scientific knowledge and for applied research and science delivery as a means of disseminating that knowledge to potential end users. In addition, FS R&D’s mission extends to the nation’s forests and rangelands, both public and private. While much of FS R&D’s role is to support the Forest Service in managing national forests, its research and science delivery activities are also to address issues related to forests and ranges on other federal lands, as well as nonfederal lands managed by states or private landowners. In addition to funds appropriated to the Forest Service by Congress, FS R&D uses funds provided by external sources to conduct research and development and often collaborates with external entities in carrying out its work. FS R&D is authorized to do so by the Forest and Rangeland Renewable Resources Research Act of 1978—the primary legislation authorizing FS R&D’s activities—which states that, in implementing the act, FS R&D may cooperate with federal, state, and other governmental agencies; with public or private agencies, institutions, universities, and organizations; and with businesses and individuals in the United States and in other countries. The act allows the Secretary of Agriculture to receive money and other contributions from cooperators under such conditions as the Secretary may prescribe. In addition to FS R&D, a number of other agencies focus on natural resource issues and may therefore also conduct research on forest issues. These agencies include, among others, the U.S. Geological Survey (USGS) within the Department of the Interior, the National Oceanic and Atmospheric Administration within the Department of Commerce, and the Environmental Protection Agency. Other agencies may also conduct forest-related research, although their main focus is not on natural resource issues.
For example, the National Aeronautics and Space Administration maintains an Earth science program intended to, among other aims, improve the prediction of climate, weather, and natural hazards including wildland fire. The scope of FS R&D’s work spans a range of research activities related to forests and rangelands, from collecting basic data on forest species to studying societal values in relation to land use. Agency officials and other stakeholders we spoke with attested to FS R&D’s accomplishments over time, which run the gamut from basic data about the condition of the nation’s forests to research and tools useful in managing wildland fire and invasive species, and also noted areas that could benefit from additional research. FS R&D’s research addresses national and regional priorities, as well as areas of international concern. FS R&D disseminates the results of its research in many ways, including publication in peer-reviewed journals and other technical and general publications, creation of computer-based modeling tools, and workshops and other outreach activities. Through its funding allocation process, as well as central reviews of the stations’ research agendas, FS R&D headquarters seeks to ensure that research activities are consistent with the agency’s overall goals. FS R&D’s national and regional research aims have evolved over time to mirror shifts in the mission of the Forest Service as a whole. In the years after World War II, for example, the amount of timber harvested from national forests increased dramatically, and much of FS R&D’s work focused on supporting management of the nation’s forests for wood production and on the use of forest products. More recently, the Forest Service has emphasized maintaining and restoring land health, and, according to agency officials, FS R&D’s emphasis has likewise shifted toward the functioning of whole ecosystems, including air and water quality, biological diversity, and climate change. 
This widening of research focus, according to FS R&D officials, encourages scientists and managers to work together across land ownership boundaries and support a landscape-scale approach to land management, which includes an increased emphasis on urban forestry. FS R&D also conducts research in emerging areas such as climate change and nanotechnology. FS R&D’s strategic plan provides goals to help the agency set priorities for its various research efforts and identify future program direction; this plan is linked to broader strategic plans both at the Forest Service and department level, as well as to plans developed by the research stations. According to agency planning documents, FS R&D has organized its research into the following seven “strategic program areas”:

- Invasive Species provides scientific information, methods, and technology to reduce, minimize, or eliminate the introduction, establishment, spread, and impact of invasive species and to restore ecosystems affected by these species.

- Inventory and Monitoring provides resource data, analysis, and tools for identifying current status and trends of forests; management options and impacts, including modeling of forest conditions under various management scenarios; and threats from fire, insects, disease, and other natural processes.

- Outdoor Recreation develops knowledge and tools to support informed recreation and wilderness management decisions that improve outdoor recreation opportunities for current and future generations while sustaining healthy ecosystems.

- Resource Management and Use provides a scientific and technological base to sustainably manage and use forest resources and forest fiber-based products.

- Water, Air and Soils informs the sustainable management of these resources through information on how to provide clean air and drinking water, protect lives and property from wildfire and smoke, and improve the ability to adapt to climate variability and change.
- Wildland Fire provides knowledge and tools to help reduce the negative impacts, and enhance the beneficial effects, of wildland fire on society and the environment.

- Wildlife and Fish informs policy initiatives affecting wildlife and fish habitat on private and public lands and the recovery of threatened or endangered species.

According to the agency, categorizing its research activities into these program areas has helped FS R&D report its accomplishments; plan research investments; organize areas of research for external peer review; improve agency accountability; and offer researchers more opportunity for interaction along broader, interdisciplinary topics. According to agency officials, there are also five “emerging research areas” that cut across the seven strategic program areas and help the agency set research priorities. These emerging areas are (1) biomass and bioenergy, (2) climate change, (3) urban natural resources stewardship, (4) watershed management and restoration, and (5) nanotechnology. In addition to these emerging areas, the agency considers two long-standing programs to be “foundations” underpinning much of its research activities: the Forest Inventory and Analysis (FIA) program, a periodic census of the nation’s forest lands, and the network of 81 experimental forests and ranges the agency maintains. FS R&D uses the strategic program areas to categorize its research nationwide, with each of the seven research stations also having a specific set of research programs based on regional priorities. For example, the Rocky Mountain Research Station has organized its work largely to reflect ecosystems and environments, with research areas covering forest and woodland ecosystems; grassland, shrubland, and desert ecosystems; wildlife and terrestrial ecosystems; and air, water, and aquatic environments. Station officials told us that organizing its research in this way reflects the interdisciplinary nature of the station’s research.
In contrast, the Southern Research Station has several research areas devoted to issues of regional interest in the South, including southern pine ecology; insects, diseases, and invasive plants of southern forests; and restoring longleaf pine ecosystems. Appendix II lists the research programs and locations of all seven research stations. According to FS R&D officials, research itself is generally carried out at individual laboratories maintained by the research stations, with the laboratories often focusing on specific research topics in a variety of settings. For example, among the Pacific Southwest Research Station’s laboratories are the Forest Fire Laboratory in southern California, which focuses on fire science, air quality, and recreation, and the Institute of Pacific Islands Forestry in Hawaii, which focuses on preserving and restoring ecosystems throughout the Pacific islands. Likewise the Rocky Mountain Research Station conducts its research into grassland, shrubland, and desert ecosystems in laboratories located in diverse areas including Moscow, Idaho; Reno, Nevada; and Albuquerque, New Mexico. According to FS R&D officials and scientists, research carried out by the stations is often of broad interest. For example, the Southern Research Station’s Forest Operations unit in Auburn, Alabama, conducts research on harvesting timber and other forest products, and all five geographically based research stations study wildland fire. Similarly, the two topically oriented research stations, the Forest Products Laboratory and the International Institute of Tropical Forestry, conduct research whose subject matter is of national or international interest not limited to any geographic area. 
For example, the Forest Products Laboratory studies wood preservatives, wood products such as plywood, techniques for using woody biomass, and other topics of nationwide interest, while the International Institute of Tropical Forestry examines issues, such as restoration of degraded tropical forests, of international interest. Although some research is carried out solely by FS R&D researchers, it is also often done in collaboration with other entities, such as universities, nongovernmental organizations, or other federal research agencies. FS R&D officials and others we spoke with told us that such partnerships are valuable for several reasons. First, the partnerships are essential for FS R&D to carry out the full scope of its work because they allow the agency to take advantage of scientific expertise and facilities that it does not maintain on its own and that would be costly and potentially duplicative to develop. Second, by promoting interest and expertise outside FS R&D in certain issues, such relationships can stimulate partners to carry out additional research without FS R&D involvement—especially when additional research on a particular topic is needed but the agency does not have the resources necessary to continue. Finally, by working with other research entities and land management agencies, FS R&D can broaden the scope of its research to include a landscape-scale approach to land management issues. For example, the Tahoe Science Consortium was formed to promote science to help preserve, restore, and enhance the Lake Tahoe Basin in California and Nevada. It involves multiple research entities, including FS R&D, USGS, and the University of Nevada, Reno; land management agencies such as the Forest Service and the Department of the Interior; the state of Nevada; and others. 
To disseminate the results of its work, FS R&D engages in multiple science delivery activities, including publishing its work both in peer-reviewed journals and in less technical media, such as handbooks, research station newsletters, and Web sites. For example, FS R&D operates an online tool known as Treesearch, which allows users, including the general public, to identify and obtain FS R&D research publications. FS R&D also works directly with land managers, state and local government officials, and others to provide information about FS R&D’s work and how it can be used to help make decisions related to land management and policy. It also develops computer models and other tools that can be used in day-to-day land management activities. According to agency officials, responsibility for science delivery varies across FS R&D. Some research stations have a unit dedicated to science delivery, such as the Northern Research Station’s Northern Science, Technology, and Applied Results program, or NorthSTAR, while others do not. Regardless, individual researchers are still expected to take responsibility for some science delivery activities related to their research. The decentralized nature of the research stations, as well as the variety of work they conduct, increases the importance of central oversight of agency research to help ensure that research activities conducted at the stations align with the priorities of the agency as a whole. FS R&D officials told us they align research pursued at each research station with the agency’s overall research agenda by requiring each of the station’s research areas—generally known as research programs or research work units—to have a charter or research work unit description laying out its research plans and objectives. These charters and descriptions are centrally reviewed by FS R&D program officials to ensure consistency with the agency’s research agenda.
FS R&D officials also use the funding allocation process to ensure that priority research areas are addressed. FS R&D headquarters officials told us the agency has the flexibility to allocate funding among research stations and programs in response to changing and emerging needs because FS R&D’s funding comes to the agency primarily through a single appropriation account, Forest and Rangeland Research, in contrast to the multiple accounts that were used in the past. In allocating funds to the research stations, the Forest Service’s Research Executive Team—consisting of the FS R&D Deputy Chief, station directors, and Washington office program directors—considers the priorities and goals outlined in the agency’s strategic plans as well as priorities identified by individual scientists and the research stations, making it both a top-down and a bottom-up process, according to one executive team official. By balancing emphasis on emerging needs at the national and regional levels with research needs identified by scientists in the field, this official explained, FS R&D remains nimble enough to respond to emerging issues while maintaining basic, long-term research. Agency officials also told us that although FS R&D generally seeks to maintain year-to-year stability in its research and personnel, no FS R&D program or project is entitled to its previous year’s budget. In addition, while FS R&D headquarters allocates most funding directly to the research stations, it retains a small portion of funding to award the stations through a competitive process, under which stations compete for FS R&D funds to study current topics such as climate change. While these funds represent a small fraction of the stations’ overall budgets, the process encourages stations to prepare research proposals that demonstrate the extent to which their research agendas align with FS R&D’s overall research objectives. 
One research station director, for example, commented that forcing stations to compete for research dollars prevents the stations from simply continuing past funding and research practices and “doing the same thing we’ve done for 30 years.” According to stakeholders we spoke with, including federal and state land managers, university researchers, and others, FS R&D’s accomplishments have been many and varied, and include efforts in both basic and applied research. Many of these same accomplishments were also identified by agency researchers and officials as being significant for FS R&D. Among the most frequently identified FS R&D accomplishments was the FIA program, as well as FS R&D’s work related to wildland fire, invasive species, and vegetation management. More broadly, many stakeholders cited FS R&D’s overall scientific credibility as a significant asset. Nevertheless, several stakeholders identified areas that, in their opinion, required greater attention by FS R&D. Forest Inventory and Analysis program. One of the accomplishments most frequently identified was the FIA program, which has provided decades of data used to assess the status, trends, and sustainability of America’s forests. To date, FIA data collection has been initiated for each state, most recently for Hawaii, Nevada, and Wyoming. According to several stakeholders, these data have been fundamental to understanding the nature and changing condition of forest resources, which in turn has helped federal, state, and local governments, as well as others, make informed decisions about land use and management. A few stakeholders added that FIA data have been improving and are more useful today than in the recent past because they are more comprehensive and include state-specific summaries and interpretations, which helps, for example, state foresters better communicate the information to public officials, land managers, and the public at large.
Several stakeholders told us that many state foresters relied on FIA data to prepare reports for State-Wide Assessments and Strategies for Forest Resources, required by the Food, Conservation, and Energy Act of 2008. The assessments are designed to, among other things, identify the conditions and trends of forest resources in the state and threats to those resources. Wildland fire. FS R&D research has also led to a number of accomplishments in the area of wildland fire and fuel management, according to many stakeholders. Some Forest Service officials in National Forest System regional offices noted that FS R&D’s research has helped them understand the role of fire, fire behavior, and how fire can be used as a management tool, including ways to effectively reintroduce fire into ecosystems from which it was excluded for many years. In addition, FS R&D has developed a number of tools that help land managers predict fire’s effects on the landscape, such as potential paths a wildland fire might take, and thus support better decisions on wildland fire response, particularly in communities close to forests. FS R&D has also contributed key accomplishments in the area of smoke management and air quality. For example, one FS R&D official told us that in California, FS R&D work has facilitated forecasting the severity of smoke and effects on air quality due to wildland fire, allowing the California Air Resources Board to warn the public about air quality concerns. Invasive species. Invasive species, including nonnative plants and insects, have become one of the most significant environmental threats facing the nation’s natural resources, costing the public more than $138 billion per year in damage, loss, and control costs, according to FS R&D estimates. Several stakeholders told us that FS R&D work in this area has helped them identify ways to better manage infestations and assess potential or actual damage. 
For example, an exotic beetle from Asia called the emerald ash borer has, since its arrival in the United States in 2002, killed tens of millions of ash trees in a number of eastern and midwestern states and parts of Canada. FS R&D has done research into the beetle’s life cycle, methods for detecting infestation, and the potential for using native enemies or pathogens to control the beetle biologically. Most significantly, according to one stakeholder, FS R&D developed a model that users, including state foresters, can apply to estimate the efforts and funding needed to most effectively attack this beetle. Some stakeholders also cited FS R&D’s research into the mountain pine beetle, a native species that has caused significant tree mortality in the West recently, as another important accomplishment. Climate change. FS R&D’s climate change research is crucial in helping land managers plan for managing natural resources in the future, according to several stakeholders, who told us that because potential effects of climate change are complex and riddled with uncertainty, land managers are increasingly relying on researchers for new information and tools. One such tool cited by a stakeholder is the Template for Assessing Climate Change Impact and Management Options, a Web-based tool produced in part by FS R&D and intended to help land managers and planners integrate climate change science into land management planning. Vegetation management. Several different types of accomplishments related to vegetation management and restoration were cited by stakeholders as important accomplishments. For example, scientists from the Southern Research Station, along with their research partners, have been contributing to restoration of the American chestnut. 
According to the American Chestnut Foundation and others, the American chestnut was one of the most important trees in the eastern United States, once occupying about 25 percent of the hardwood canopy in eastern forests, but was virtually eliminated by a nonnative fungus called chestnut blight. FS R&D is contributing to the restoration effort by planting and monitoring plots of blight-resistant American chestnut seedlings. Urban forestry. FS R&D’s efforts in urban forestry, including research on maintaining working forests within urbanizing landscapes and educating the public about the value of public and private forested lands to residents’ quality of life, were also cited by some stakeholders as a major accomplishment of FS R&D. Among other efforts, FS R&D contributed to the development of a software application called i-Tree, which, according to the agency, can help urban communities quantify the benefits that community trees provide, such as mitigating pollution and managing storm water runoff, and can be used to put a dollar value on street trees’ annual environmental and aesthetic benefits. Scientific credibility. Beyond specific accomplishments, many stakeholders cited FS R&D’s overall scientific credibility as a significant asset to the agency. Regardless of the topic, according to these stakeholders, FS R&D’s work—which often rests on decades of research conducted by multiple scientists—is widely viewed as unbiased and scientifically rigorous, which lends weight to land management decisions based on that work. Several stakeholders in the Forest Service’s National Forest System, for example, told us that FS R&D research was often useful in developing and defending complex or controversial agency land management decisions because it was generally viewed as being scientifically sound.
Another stakeholder pointed out that 13 FS R&D scientists served on the Nobel Prize-winning Intergovernmental Panel on Climate Change, a mark of those scientists’ proficiency in their fields. Along with accomplishments, stakeholders noted that improvements could be made in several areas—including FIA, wildland fire, and invasive species. They also noted the need for additional research into social sciences related to forest issues. Several stakeholders pointed out that FS R&D could improve FIA by adding increased specificity to the data collection efforts. They said that higher-resolution data collection in more locations, plus more frequent data collection, would help states make better-informed planning decisions. For example, one stakeholder suggested that more-detailed data could help spur job creation and economic development in the emerging alternative energy market by helping potential investors in biomass power plants identify locations of sustainable supplies of woody biomass, which could then help determine the best places to build a new plant or expand an existing plant. Several stakeholders also cited a need for improvements to wildland fire and invasive species research. For example, several stakeholders noted that they would benefit from more assistance in applying the many tools FS R&D has developed to help land managers respond to wildland fire. Other stakeholders told us that increased FS R&D research into methods for controlling or eradicating invasive species—for example, the use of natural predators of invasive species—could help land managers better manage infestations. Several stakeholders told us the agency should focus more attention on social sciences. One stakeholder noted that increasing populations near forests have made it essential that land managers understand the impacts that changing recreation habits can potentially have on these forests.
An FS R&D official observed that in addition to understanding the physical science of fire, managers must also understand how the public will react to different fire management choices, particularly where communities are directly affected by those choices. Spending by FS R&D remained relatively flat during fiscal years 2000 through 2009, with a small but growing portion of the agency’s total spending represented by funds received from external sources such as universities and other federal agencies. Trends in spending varied across the research stations, with some experiencing increases and others, decreases. These spending trends have affected FS R&D’s hiring patterns and research activities. Overall, the amount spent by FS R&D—using both Forest Service-appropriated funds and resources from external sources such as cooperating agencies and organizations—remained relatively flat during fiscal years 2000 through 2009, with funding from external sources representing a small but growing percentage of total spending. Total nominal spending increased from $276.9 million in fiscal year 2000 to $369.1 million in fiscal year 2009—an average annual increase of 3.2 percent. After adjusting these amounts for inflation, the average annual increase was 0.8 percent. Resources spent using Forest Service appropriations, which constitute the majority of FS R&D spending, increased slightly in nominal terms but remained relatively flat in inflation-adjusted terms from fiscal year 2000 through fiscal year 2009 (see fig. 2). Spending increased from $261.9 million in fiscal year 2000 to $337.9 million in fiscal year 2009—an average annual increase of 2.9 percent. After these amounts were adjusted for inflation, the average annual increase was 0.4 percent. Spending may be increasing more quickly for FIA than for other types of research, however.
Although FS R&D’s appropriation comes through a single appropriation account for “forest and rangeland research,” since fiscal year 2003 the annual appropriation has designated a portion of these funds for FIA, and FIA’s portion of this enacted budget authority has been growing at a faster rate than FS R&D appropriations as a whole. The enacted budget authority for FIA increased from $31.7 million in fiscal year 2000 to $60.8 million in fiscal year 2009—an average annual increase of 7.5 percent, or about 4.9 percent when adjusted for inflation. Although the remaining portion of the FS R&D budget authority increased from $170 million to $267 million during the same time, it grew only about half as quickly, with an average annual increase of 2.6 percent when adjusted for inflation. Across the research stations, spending of Forest Service appropriations generally increased in nominal terms, with six of the seven stations showing an increase from fiscal year 2000 through fiscal year 2009. When adjusted for inflation, however, spending decreased at three stations: the International Institute of Tropical Forestry and the Pacific Northwest and Southern research stations. The Forest Products Laboratory, in contrast, experienced the most growth in spending over this time (see fig. 3). The amounts spent by each station varied from year to year, however, and even those stations that showed an overall decline in spending experienced some year-to-year increases during the decade. For example, although the Southern Research Station experienced an overall decrease in spending over the past decade, year-to-year spending showed an uneven pattern; after a sharp decline from fiscal year 2000 through fiscal year 2001, spending increased in each of the next 3 fiscal years before declining again (see app. III for more detail about year-to-year spending for each station). 
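The average annual increases reported above are compound growth rates. As a rough arithmetic check, they can be reproduced from the report's nominal dollar figures; the helper function below is our own illustration, not an agency tool:

```python
def avg_annual_increase(start, end, years):
    """Compound average annual growth rate between two nominal values."""
    return (end / start) ** (1 / years) - 1

# Nominal figures from the report, fiscal year 2000 to fiscal year 2009
# (9 years of growth), in millions of dollars.
total = avg_annual_increase(276.9, 369.1, 9)         # total FS R&D spending
appropriated = avg_annual_increase(261.9, 337.9, 9)  # Forest Service appropriations
fia = avg_annual_increase(31.7, 60.8, 9)             # FIA enacted budget authority

print(f"{total:.1%}, {appropriated:.1%}, {fia:.1%}")  # -> 3.2%, 2.9%, 7.5%
```

Reproducing the inflation-adjusted rates in the report (0.8, 0.4, and 4.9 percent) would additionally require the deflator series GAO used, which is not given here.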
Across the agency, personnel costs—that is, salaries and benefits—constituted the largest percentage of resources spent using Forest Service appropriations during this time, about 61 percent of spending, with yearly percentages varying from 58 percent to 67 percent. Across the research stations, the average percentage of resources spent on personnel costs varied from 50 percent at the International Institute of Tropical Forestry to 65 percent at the Northern Research Station. The second largest category of spending across FS R&D was grants and agreements, through which FS R&D provides funds for partners, such as universities, to conduct research. Spending on such grants and agreements increased from 14 percent of spending in fiscal year 2000 to 21 percent in fiscal year 2009—in line with FS R&D’s fiscal year 2012 goal (articulated in its 2008-2012 strategic plan) to devote 20 percent of its appropriated funds to such “extramural” research. Although FS R&D spending using external sources of funding was much smaller than from FS R&D appropriations, spending from these sources increased at a faster pace over the last decade. Multiple organizations provide external support to FS R&D, including other federal agencies, states, industry, nonprofit organizations, universities, and others. Consistent with FS R&D’s fiscal year 2012 goal (also contained in its 2008-2012 strategic plan) to obtain a portion of its funding from external sources, resources spent using external sources increased from $15 million in fiscal year 2000 to $31.3 million in fiscal year 2009—an average annual increase of 8.5 percent, or 6.0 percent after adjusting for inflation (see fig. 4). As a proportion of the FS R&D total, spending using external sources increased from 5.4 percent in fiscal year 2000 to 8.5 percent in fiscal year 2009.
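The external-source figures are internally consistent with the totals reported earlier; a brief illustrative calculation, using only numbers stated in the report, shows both the compound growth rate and the shares:

```python
# Report figures in millions of dollars.
external_2000, external_2009 = 15.0, 31.3  # spending from external sources
total_2000, total_2009 = 276.9, 369.1      # total FS R&D spending

# Compound average annual growth of external-source spending over 9 years.
growth = (external_2009 / external_2000) ** (1 / 9) - 1
print(f"{growth:.1%}")  # -> 8.5%

# External sources as a share of total FS R&D spending in each year.
share_2000 = external_2000 / total_2000
share_2009 = external_2009 / total_2009
print(f"{share_2000:.1%}, {share_2009:.1%}")  # -> 5.4%, 8.5%
```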
Officials told us that the amount of external funding the agency receives has depended on several factors, including the capacity of partners to provide funding and the ability of FS R&D scientists to successfully compete for such funding. Across the research stations, spending using external sources generally grew from fiscal year 2000 through fiscal year 2009, with average annual growth ranging from 0.5 percent at the Southern Research Station to 10.7 percent at the Northern Research Station, after adjusting for inflation. The exception to this trend was the Pacific Northwest Research Station, where spending using external sources declined 4.2 percent each year, on average, after adjusting for inflation. But these overall figures mask substantial year-to-year variation in the stations’ spending of external funds. For example, at the Forest Products Laboratory, spending of external funds decreased about 35 percent from fiscal year 2006 to fiscal year 2007 but then increased more than 60 percent the following year (see app. III for more detail). Unlike spending using Forest Service appropriations, most funding from external sources was spent on grants and agreements, which increased from 22.9 percent of such spending in fiscal year 2000 to 55.3 percent in fiscal year 2009. The second-largest amount was spent on personnel costs, which decreased from about 32.5 percent in fiscal year 2000 to 21.7 percent in fiscal year 2009. Regarding the sources of external funding, from fiscal years 2005 through 2009, the only period for which detailed data were available, the largest amounts of external support for FS R&D came from other federal agencies, followed by states and industry (see fig. 5). Support from other federal agencies increased from $19.7 million in fiscal year 2005 to $24.2 million in fiscal year 2009, an average annual increase of 2.7 percent after accounting for inflation.
The Department of Defense and the Department of the Interior—both departments with land management responsibilities—provided FS R&D with the most support among the federal agencies. Support to FS R&D provided by some nonfederal sources, such as industry and universities, also increased over this time. In contrast, support from nonprofit organizations and states declined after accounting for inflation. Additional information about external funding also appears in appendix III. In addition to financial support, FS R&D has also received various forms of in-kind support from project collaborators, some of which have allowed FS R&D to share equipment, personnel, or computing capacity. For example, a number of FS R&D facilities have been co-located with universities, which has generally reduced the amount the agency needs to spend to rent a facility or purchase additional research equipment. In addition, the Northern and Pacific Northwest research stations have also used joint FS R&D and university faculty appointments to foster stronger relationships with significant collaborators and sources of in-kind support. At the Forest Products Laboratory, officials told us that industry partners have provided multiple types of in-kind support, including materials, such as wood chips or logs that the laboratory uses in its experiments. Because a large percentage of each research station’s budget is related to personnel costs, several stations have taken steps to reduce their staffing levels or change the type of employees they hire in response to the agency’s flat spending trends. Officials at most research stations reported that when a permanent employee retires or leaves FS R&D, officials may not refill the vacant position with another permanent employee, instead leaving it vacant or filling the position with a temporary or term employee. Some research stations have gone further, offering buyouts to employees as a way to control personnel spending. 
FS R&D officials told us that replacing research scientists, in particular, requires a substantial commitment of resources because the combination of their salaries and the operating expenses associated with their research is higher than that of other staff positions. Several officials also told us that, in some cases, because of funding constraints, they did not refill some positions held by technicians—staff who typically conduct laboratory or field research work. Our analysis of agency data shows that FS R&D spending on personnel has remained flat, and that the number of permanent employees at FS R&D has declined. From fiscal year 2006 through fiscal year 2009, the number of permanent FS R&D employees declined from 2,058 to 1,935—an average annual decrease of 2 percent (see table 1). According to officials, at least part of this decline can be attributed to a reduction in administrative and clerical positions after the centralization of Forest Service business services beginning in 2005. The number of research scientists declined twice as fast as the overall number of permanent employees, from about 495 in fiscal year 2006 to about 437 in fiscal year 2009, an average annual decrease of 4.1 percent. This decline continues a trend spanning several decades; the number of research scientists at FS R&D has decreased from approximately 1,000 in 1985. Term employees likewise declined during fiscal years 2006 through 2009, from 302 to 164, while the number of temporary employees fluctuated between 504 and 580 over that time. Across research stations, the number of permanent employees declined at five stations and remained relatively unchanged for the remaining two stations from fiscal years 2006 through 2009. (See app. III for more information about employment trends at the research stations.)
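The staffing declines cited above follow the same compound-rate arithmetic as the budget figures. As a hedged sketch, the headcounts below are from the text, while the helper function and its name are ours, not the agency's:

```python
# Illustrative check of the average annual staffing-decline rates
# cited in the text. Headcounts are from the text; the compound-rate
# formula is the conventional one, not necessarily GAO's exact method.

def avg_annual_change(start, end, years):
    """Average annual compound change; a negative result is a decline."""
    return (end / start) ** (1 / years) - 1

# Permanent FS R&D employees, fiscal year 2006 -> 2009 (3 years)
permanent = avg_annual_change(2058, 1935, 3)  # about -0.020 (2 percent decline)

# Research scientists over the same period
scientists = avg_annual_change(495, 437, 3)   # about -0.041 (4.1 percent decline)

print(f"permanent: {permanent:.1%}, scientists: {scientists:.1%}")
# prints: permanent: -2.0%, scientists: -4.1%
```

The comparison also shows, as the text notes, that the scientist decline ran roughly twice as fast as the overall permanent-employee decline.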
Some officials and scientists we spoke with were concerned that these staffing trends have reduced FS R&D’s capacity to conduct research because fewer permanent scientists and technicians remain to carry out the work; they were also worried about FS R&D’s ability to maintain its long-term research because of increased reliance on term and temporary employees. On the other hand, some FS R&D officials pointed out advantages to hiring term and temporary employees. For example, a particular research project may require specific expertise only for a finite amount of time, and hiring a term employee to fill this need allows the research station to harness that expertise without committing to maintaining it indefinitely—which is especially important if the expertise is unlikely to be needed for future projects. By not permanently filling scientist or technician positions, officials told us the agency retained the financial flexibility needed to conduct new research and maintain existing research platforms, including facilities, equipment, long-term plots, and other needed research elements. Regarding external sources of funding, several FS R&D officials noted that increasing use of this funding, while a small portion of overall FS R&D spending, can have both positive and negative impacts. Several scientists and officials reported that external sources of support allowed them to expand the scope of their research by initiating work on additional research topics they would not otherwise have had the funds to pursue or to accelerate existing work—“to run where we would have walked instead,” in the words of one scientist. Some scientists also noted that, given the increasing demands on FS R&D appropriated funds, they have increasingly used external funds to help pay for research-related operating expenses. In contrast, some scientists noted potential drawbacks in relying on external funding. 
Some FS R&D officials and scientists commented that external funding is generally available to support projects that span no more than a few years, and increasing reliance on external funding could therefore lead to a shift in FS R&D’s balance between basic and applied research if more of its scientists’ time were spent answering shorter- rather than longer-term research questions. Others, however, told us that pursuing external funding has helped ensure that FS R&D works on research questions relevant to stakeholders’ needs, because external funding tends to indicate the priorities of the broader research and user community. Furthermore, some told us that it can be time-consuming to identify and apply for such funding and that time spent on these tasks reduces the amount of time available for research. FS R&D has recently taken steps to improve its ability to fulfill its mission in a number of areas, including science delivery, research relevance, organizational structure, research funding allocation, research agenda setting, and coordination with other federal research agencies. Despite the agency’s efforts, however, FS R&D officials and stakeholders identified challenges associated with these areas, particularly with regard to FS R&D’s ability to deliver the results of its research. In addition, agency officials identified several other challenges, which impede the agency’s ability to carry out its day-to-day work. FS R&D has worked to create a more formal system for delivering the results of its research, known as science delivery, at multiple levels within the agency. FS R&D officials told us that at the national level, FS R&D in 2005 created a National Science Application Team and the position of National Science Application Coordinator, both focused on science delivery. According to officials, the team aims primarily at facilitating cross-station communication and identifying areas for strengthening science application activities throughout FS R&D. 
The team includes representatives from each station, as well as headquarters personnel. To date, according to an FS R&D official, the team has focused on identifying opportunities to collaborate across research stations so as to leverage each station’s strengths. In addition to these actions, the Forest Service’s 2007-2012 strategic plan recognized the importance of science delivery by including it as one of seven agency goals. At the research station level, according to agency officials, science delivery positions have been or are being established at each station, although science delivery has evolved differently at each station and stations vary in the way they provide science delivery. For example, an FS R&D official told us the Pacific Northwest Research Station in the 1990s recognized the need for increased emphasis on science delivery to a broader audience, in part because of the Northwest Forest Plan, a highly controversial federal land management planning effort that required rigorous science to support decisions involving old-growth forests and threatened species. The station subsequently created a Focused Science Delivery program, whose mission is to enhance the usefulness of scientific information, including synthesizing information from a wide range of disciplines and delivering it to clients in clear and accessible formats. Likewise, the Rocky Mountain Research Station created the Science Application and Integration program, which is dedicated to making scientific information and research applicable to natural resource management and planning. The station is also working with partners to maximize efforts to address land managers’ needs. On the other hand, science delivery at the Forest Products Laboratory has been emphasized since its creation, according to laboratory officials, mainly because much of the focus of the laboratory’s research is on applied products, such as new wood materials for the housing sector. 
Despite these efforts, officials and scientists throughout FS R&D, as well as numerous stakeholders, told us that FS R&D has not placed sufficient emphasis on science delivery. Some noted that, even with the agency’s recent efforts, the agency does not have a consistent approach to science delivery, often leaving it up to individual scientists, who vary in the amount of time and effort they devote to it. Without effective delivery of FS R&D’s research results, land managers, policymakers, and others may be unable to promptly and effectively use the knowledge, data, and tools FS R&D produces, and FS R&D cannot ensure that its research is being used to its greatest potential. In part, according to a senior FS R&D official we spoke with, the struggle to provide adequate science delivery stems from the contradictions inherent in FS R&D’s status as a research organization within a land management agency. As a result, FS R&D must balance the limited time and resources available to its researchers between, on the one hand, basic research and the resulting publications in peer-reviewed journals and, on the other hand, delivering the results of that research and making sure it is useful and understandable to end users. Many stakeholders told us that although publishing research in peer-reviewed journals is important for the credibility of scientists and their research, delivery of results through other mechanisms—such as summary findings, workshops, or one-on-one interactions between scientists and users of FS R&D-developed work—can often be more useful to land managers and decision makers. Nevertheless, many stakeholders and numerous FS R&D researchers and officials told us the agency values publications in peer-reviewed journals over other science-delivery mechanisms.
In large part, according to several scientists and others we spoke with, this view prevails because the system for appraising individual researchers’ performance continues to emphasize publication in peer-reviewed journals. To evaluate researchers, FS R&D uses the Office of Personnel Management’s “research grade evaluation guide” to measure individual researchers’ performance in what is often referred to as the paneling process. The guide was revised in 2006 to, among other things, place greater emphasis on communicating research results to users through mechanisms other than peer-reviewed journals (such as summary findings or workshops) as part of the measure of scientists’ work. Officials told us that, consistent with these revisions, FS R&D made an effort to train panel reviewers to place greater emphasis on these other forms of science delivery as a component of their performance. Despite these changes, several FS R&D officials and stakeholders told us that the emphasis placed on peer-reviewed journals, compared with other forms of science delivery, during the paneling process varies among panels and depends on the perspective of the panel chairperson; they also said that many panelists continue to emphasize peer-reviewed journals over other forms of science delivery. Further complicating the science delivery issue is the potential overlap in science delivery roles between FS R&D and State and Private Forestry, another Forest Service program. 
State and Private Forestry is authorized by the Cooperative Forestry Assistance Act of 1978 to carry out a program of technology implementation to “ensure that new technology is introduced, new information is integrated into existing technology, and forest resources research findings are promptly made available to state forestry personnel, private forest landowners and managers, vendors, forest operators, wood processors, public agencies, and individuals.” State and Private Forestry maintains staff across the country to assist in this mission, some of whom are closely associated with FS R&D’s work. Because both FS R&D and State and Private Forestry have missions to carry out science delivery, and because their activities can be closely intermingled, the programs’ science delivery responsibilities have not always been clearly delineated, according to officials. This overlap highlights the need for the two programs to work closely together to minimize duplication and to stretch limited resources by taking advantage of available expertise across the programs. The need for greater clarity about FS R&D’s science delivery role in relation to State and Private Forestry is consistent with the results of the Forest Service’s own 2009 assessment of science delivery within the agency. That assessment highlights deficiencies in this area, such as a lack of coordination between those conducting research and those delivering research information and tools, and it provides suggestions for improvement, including greater coordination of efforts between FS R&D and State and Private Forestry. FS R&D officials told us, however, that the agency has not taken steps to implement the report’s recommendations and has not established time frames for doing so, nor has the agency otherwise assessed the effectiveness of its efforts to improve science delivery, including the creation of the National Science Application Team and its changes to science delivery at the research stations.
It is important to note, however, that while many FS R&D officials and stakeholders suggested the need for greater attention to science delivery, many also emphasized the value of FS R&D’s basic and long-term research and cautioned that too great a shift in resources from basic research to science delivery would also be inappropriate. Much of the applied research and science delivery relevant to current issues rests on the findings of basic, long-term research, so it is important to continue investing resources in such research, according to these stakeholders. For example, one State and Private Forestry official we interviewed told us that he found the wildland fire-related tools and assessments developed by FS R&D to be very useful, but he also emphasized the need for FS R&D to continue to invest resources in core fire science, which should not be driven by short-term needs, to maintain the agency’s ability to develop such tools. FS R&D has implemented new approaches to determine the relevance of its research work to customers and to assess its quality and performance, including customer surveys, external peer reviews of the seven strategic program areas, and an increased use of narrative descriptions to describe its accomplishments. In 2006, FS R&D began using a customer satisfaction survey to help identify areas where customers believed it excelled or, conversely, needed improvement. Conducted periodically, the survey allows officials to assess overall customer satisfaction with FS R&D over time in comparison with other federal research agencies. According to survey results provided to us by FS R&D, the 2009 survey resulted in a 75-point score, in line with other federal government providers of information, whose scores typically fall in the 70-point range, and an improvement over FS R&D’s 2006 score of 72.
The survey also compares customer satisfaction across strategic program areas and research stations in a variety of categories, including accessibility of data, accuracy of products, and relevance and quality of work. FS R&D officials told us they regard the information and recommendations provided by the survey as useful for making better-informed determinations about the areas of work that require greatest improvement and are likely to have the greatest impact. FS R&D also conducts external peer reviews that assess the relevance, quality, and performance of research conducted within each of its seven strategic program areas, an effort that began in 2006. The relevance category, for example, includes assessing the extent to which each strategic program area has clear societal benefits, produces products that are being used and have potential impacts, seeks user input in setting the agenda, and is not inappropriately duplicative. The extent to which the reviews adequately measure performance in these areas, however, was questioned by several external reviewers as well as some agency officials. Although the strategic program areas are purposefully broad, this breadth of research coverage means that the work conducted under one area may also be relevant to another, complicating the review process. For example, it is difficult to fully evaluate how well Water, Air, and Soils is performing when areas of science relevant to that program area, such as the effects of smoke on air quality, may be evaluated under Wildland Fire. Because different external panels are assembled for each of the various peer reviews, it is hard to know where—or if—all areas of research were evaluated. Another concern on the part of some stakeholders was the degree to which end users provided feedback about the various strategic program areas and the implications of selecting certain end users for, or excluding them from, the peer-review process. 
Given that the strategic program areas and the review process are relatively new, FS R&D is currently evaluating the adequacy of such reviews in measuring performance, as well as ways in which the process might be improved. Although FS R&D measures its performance in part with quantitative measures, such as number of publications and, in certain science areas, the numbers of tools developed, officials explained that it can be difficult to quantify many of its research accomplishments, such as the impact of FS R&D’s research in preventing, for example, an outbreak of an invasive pest. To help overcome this difficulty, FS R&D communicates its accomplishments in reports through narrative descriptions of the scientific and societal benefits of its work. In addition, researchers may work for years on a particular problem, which may not generate immediate, measurable outcomes but, rather, a valuable foundation for future accomplishments. For example, the information FS R&D currently contributes to climate research is based on data that have been collected over several decades. Within the past few decades, the physical and organizational structures of FS R&D’s research stations have also changed significantly. First, the makeup of the research stations changed, as some research stations merged and one split into two stations. Second, the research stations reorganized their work units into science themes or areas of research that are broader than in the past, to foster a more multidisciplinary and integrated approach to research. Three of the present research stations resulted from merging previously existing stations, done in part to reduce overhead and administrative costs, as well as to improve customer service and make research results more accessible and useful.
The Northern Research Station, for example, is the product of the agency’s 2006 consolidation of the former Northeastern and North Central research stations; the Southern Research Station, formed in 1995, consists of the former Southeastern Forest Experiment Station and Southern Forest Experiment Station; and the Rocky Mountain Research Station, formed in 1997, consists of the former Intermountain Research Station and Rocky Mountain Forest and Range Experiment Station. According to FS R&D officials and documents, these mergers allowed related research to come under a single management team, while also allowing the stations to make better use of smaller administrative staffs; provided facilities for large-scale, multidisciplinary studies; and facilitated integrated, landscape-scale research programs. In contrast, the International Institute of Tropical Forestry, formerly a unit of the Southern Forest Experiment Station, was made an independent institute with an expanded mission in 1993. In addition, beginning in the late 1990s, research stations reorganized their work into broad science themes or areas of research. Before that time, each research station was structured around discrete research work units, which were geographically based and covered specialized scientific issues. About 140 research work units existed across FS R&D, according to an FS R&D headquarters official, each of which included one to five scientists to carry out a narrow scope of work. As issues the scientists were studying grew in complexity, according to this official, more integration among disciplines was required to answer research questions. Officials at one research station, acknowledging their more complex research needs, observed that having narrowly focused research work units was no longer appropriate for the agency.
In response, FS R&D decided to consolidate the units into broader “programs,” a change that officials told us was intended to foster a multidisciplinary, integrated approach to research and reduce the time scientists spent on administrative tasks. While the research stations were not required to move from the research work unit model to the program model, the Deputy Chief of FS R&D encouraged them to do so, and all stations have now adopted the new approach. As a result, some research stations have undertaken major realignments of their units. For example, the Pacific Northwest Research Station has de-emphasized some traditional scientific areas while emphasizing new ones, moving from 26 research work units to six programs: Ecological Process and Function; Focused Science Delivery; Goods, Services, and Values; Land and Watershed Management; Resource Monitoring and Assessment; and Threat Characterization and Management. The consolidation of research work units produced a number of benefits, according to FS R&D officials we spoke with. First, the consolidation allowed FS R&D to respond to increasingly complex research needs by adopting a more multidisciplinary and integrated approach. Second, according to officials, the consolidation of units shifted control back to research station management, allowing managers to be more strategic in setting research priorities because those priorities were determined centrally by the stations rather than individually by the units. For example, according to officials at the Rocky Mountain Research Station, in the past when employees resigned or retired, the research work units were permitted to directly refill the position. Now, the management team at the research station decides which vacancies to refill, including whether to shift vacant positions to other program areas that are higher priorities for the station.
And third, the consolidation allowed FS R&D to use its resources more efficiently, according to agency officials, because the consolidation purged some traditional lines of research that FS R&D officials said were no longer productive or relevant. Consolidation also reduced overhead costs for FS R&D, as well as the time scientists spend on administration, according to officials, because it allowed FS R&D to consolidate scientists into fewer facilities. For example, as part of its consolidation, the Southern Research Station closed one of its laboratories and was also able to move two employees who were using Agricultural Research Service space into space owned by FS R&D. As a result, FS R&D was able to cease paying overhead costs for its use of the previous space. Despite these benefits, some officials described disadvantages to consolidating research units. For example, officials from the Southern Research Station said that consolidation decreased the station’s on-the-ground presence in some places covered by the station, including Kentucky and Tennessee. Decreasing a station’s presence may limit its partnerships with nonfederal entities, such as with industry, because local relationships can be more difficult to develop. Although these recent changes may enhance FS R&D’s work within stations, the decentralized nature of the agency’s organizational structure emphasizes the need for collaboration across stations, and concerns have been raised about whether such collaboration could be improved. In particular, the external peer reviews of FS R&D’s strategic program areas identified concerns about the extent to which research is being effectively coordinated across the research stations. For example, one peer review described a lack of coordination among research stations on wildland fire research, while another review found a lack of coordination in some areas of climate change research.
On the other hand, while these concerns were echoed by a number of agency officials we talked with, other FS R&D officials, as well as agency stakeholders, noted a number of accomplishments that have come out of cross-station collaboration, such as i-Tree and the Westwide Climate Change Resource Center Web site, developed by the Pacific Northwest, Pacific Southwest, and Rocky Mountain research stations. Along with consolidating their research programs, some research stations have also been revamping the process they use for allocating resources among programs and projects. At the Rocky Mountain Research Station, for example, officials told us that the new process begins when each program, laboratory, and experimental forest provides the station with its initial funding request. Subsequently, on the basis of these funding requests, as well as discussions about what programs or projects might be expanded or cut, the station’s leadership team determines final allocations to each program, laboratory, and experimental forest. Later, a midyear review takes place to identify programs or projects that are unlikely to use all their funds; such funds are subsequently reallocated through a competitive process geared toward the station’s priorities. Station officials told us that this new budgeting process better positions them to respond to emerging needs and priorities and helps clarify what the station’s research dollars are funding. Similarly, the Pacific Southwest Research Station is implementing a new allocation process based on the one used by the Rocky Mountain station. In the past, according to station officials, each research work unit received a certain percentage of the station’s total allocation, and this percentage did not change from year to year. By keeping the percentages fixed, these officials told us, the station did not have the needed flexibility to make funding changes in response to changing research priorities. 
The new process, according to a station official, allows managers to make more strategic and better-informed decisions. Other stations’ processes likewise are aimed at ensuring that research dollars are directed to the highest-priority research areas, rather than simply continuing previous funding patterns. At the Pacific Northwest Research Station, officials told us they use four factors to guide resource allocations so they can balance the need for basic science with emerging research areas. The first factor is the period of science delivery: the station allocates about 40 percent of its resources to research expected to deliver knowledge and tools within 1 to 3 years. Second, officials consider the relevance of each research program or project and its broader applicability; third, the regional significance of the research; and fourth, the extent to which a program or project is in an emerging growth area. At the Forest Products Laboratory, officials told us that funding decisions are based largely on the research needs identified by the station’s scientists and assistant directors, who meet to discuss research needs and determine where to make trade-offs between research areas. Some officials also noted that FS R&D leverages its staff resources by considering resource needs and vacancies across stations and that applying resources across geographic boundaries—or even permanently transferring researchers to locations where they can be better used—allows the agency to apply its expertise quickly and efficiently. By way of example, an official at the Pacific Southwest Research Station told us that a bark beetle researcher at the station spends as much time in other states experiencing bark beetle outbreaks as he does in California, where the station is located, and that, even though these other states are covered by other research stations, it is more efficient to meet this research need through existing expertise than to hire scientists in these other locations.
Because FS R&D leverages its resources across geographic boundaries, according to officials, the location of staffing resources around the country does not limit the agency’s ability to respond to research needs even in areas where FS R&D staff are not permanently located. FS R&D has been renewing its efforts to seek and obtain input on research agendas from stakeholders—including federal and state land managers, universities, and industry—by, for example, conducting outreach to identify stakeholders’ research needs and soliciting their input before undertaking particular research efforts. Within the last several years, FS R&D has participated in several nationwide, large-scale efforts to identify research priorities related to forestry. For example, beginning in 2005, FS R&D participated in a series of workshops as part of the Forest Service Outlook Project, aimed at developing a long-term research agenda in collaboration with the broad forestry community, including federal, state, and local government agencies; the business community; nongovernmental organizations; and academic institutions. Also in 2005, officials from FS R&D participated in creating the Forest Products Industry Technology Roadmap, a report aimed at helping reinvent and reinvigorate the U.S.-based forest products industry, including the role of FS R&D research in doing so. Other, program-specific efforts exist as well; for example, officials pointed out that the FIA program holds annual meetings with regional and national user groups on the program’s implementation. Many stakeholders we interviewed told us that they meet regularly with research station directors to discuss research priorities and research progress and that, particularly over the last 5 to 6 years, their relationships with FS R&D officials and researchers have continued to improve. 
For example, one stakeholder told us that the Rocky Mountain Research Station Director holds quarterly meetings with the Regional Foresters of the four National Forest System regions covered by the station to learn more about their research needs. The same Station Director recently held a needs assessment meeting to solicit input from foresters at national forests, as well as state foresters and research station scientists, about what they perceive to be gaps in research. In addition, several stakeholders told us that FS R&D researchers are generally willing to take stakeholder interests into account when implementing research activities, and some pointed out instances in which researchers adapted their research to address stakeholder concerns. For example, one stakeholder noted that FS R&D researchers at the Silas Little Experimental Forest in New Jersey added a component to their work in response to state forester concerns about loss of canopy cover and fire impact resulting from gypsy moths, an issue of particular concern for northeastern foresters. In another example, a western stakeholder we interviewed told us that land managers from the National Forest System met with an FS R&D researcher studying the locations of, and reasons for the decline in, bull trout, a federally listed threatened species. The land managers wanted information about specific aspects of bull trout habitat that the researcher had not initially included in his research plan, but, as a result of the meeting, the researcher incorporated these additional aspects into his study, thereby increasing its relevance. Several stakeholders also mentioned that regional forums, such as the Western Forestry Leadership Council—a partnership between state and federal government forestry leaders in which FS R&D officials and scientists interact directly with state foresters in the West—were effective for discussing both research priorities and work under way. 
Despite strong relationships and multiple opportunities to provide input, however, several stakeholders we interviewed believed that more could be done to increase end-user input in setting research agendas. Some stakeholders told us they did not always have sufficient opportunity to voice their research interests and suggested that a more systematic approach to communication with FS R&D was needed to ensure their input was considered. According to one stakeholder, private landowners may have fewer opportunities to provide input on research agendas because conferences where research agendas are discussed may be too expensive for them to attend or because they are not made aware of such opportunities to participate. Similarly, despite FS R&D efforts to solicit university input, the university representatives we interviewed told us that FS R&D should make a more concerted effort to involve academia in FS R&D's early planning efforts. Although considering the priority needs of stakeholders is important, FS R&D officials and researchers must also maintain discretion to prioritize research they consider important even in the face of stakeholder disagreement. Officials at the Forest Products Laboratory, for example, told us that stakeholder input into the laboratory's work is reviewed annually through a peer-review process conducted by multiple end users—including other research stations, universities, and industry—to ensure the laboratory is working on relevant science and to evaluate the work it is considering for the future. Officials told us that some panelists criticized the laboratory over the past 20 years for conducting research on nontoxic wood preservatives to serve as alternatives to the widespread use of traditional wood preservatives, stating that such research was unnecessary. 
Because of concerns about traditional wood preservatives' potential harm to human health and the environment, however, scientists and managers at the laboratory felt that research into alternatives was important. As a result, they continued to pursue this research despite end-user suggestions, which officials told us proved to be important because the use of the older preservatives is now restricted. FS R&D has emphasized coordination with other federal research agencies at various levels to leverage expertise and resource capacity and set complementary research agendas. For example, current federal interest in using biomass as a reliable source of energy requires integrating various components of research and information unique to several different agencies, such as methods for acquiring a sufficient supply of biomass feedstock and converting this feedstock into energy. Officials we interviewed from other agencies cited a wide range of research issues on which FS R&D currently coordinates with multiple federal agencies or research entities, including bioenergy, climate change, water quality, restoration, and management across landscapes, and many stated that coordination is increasing. For example, one official from USGS noted that as recently as 5 years ago, he was aware of few coordinated efforts across the Forest Service and USGS in the area of water research, but the situation has since changed. At the national level, FS R&D and other agency officials described the coordination undertaken with other federal agencies in a number of ways, including interagency working groups, conferences, and regular meetings. Within the Department of Agriculture, FS R&D broadly coordinates its research with other component research agencies, including the Agricultural Research Service and National Institute of Food and Agriculture, by holding regular meetings to discuss research policy, mutual research interests, and potential areas for coordination. 
FS R&D also coordinates with agencies outside of the Department of Agriculture, including USGS and the Bureau of Land Management in the Department of the Interior, the Environmental Protection Agency, the Department of Energy, and the National Science Foundation. Current efforts include collaboration with Energy on biomass, USGS on carbon sequestration, and multiple agencies on climate change. Specifically:

Biomass. The departments of Agriculture and Energy co-chair a biomass research and development board charged with coordinating programs across federal agencies to promote the use of biofuels and biobased products. The Department of Agriculture has the lead on biomass feedstock research while Energy has the lead on techniques to convert feedstock into fuel, according to FS R&D and other agency officials. Within the Department of Agriculture, FS R&D and the Agricultural Research Service are developing a network of Biomass Research Centers, through which they will coordinate their agencies' efforts to provide biomass for the biofuels industry. The network will comprise existing Agricultural Research Service and Forest Service facilities and scientists, whose combined efforts, along with partnerships with universities and private companies, are expected to help accelerate the commercial production of biofuels, biopower, and other biobased products.

Carbon sequestration. The Energy Independence and Security Act of 2007 directs federal agencies to coordinate on a number of efforts, including an assessment of national capacity for geological sequestration of carbon. Through the act, the Secretary of the Interior was directed to complete this assessment with other federal agencies. The assessment has geological and biological components, according to an official, with FIA data from FS R&D expected to play a substantial role in the assessment.

Climate change. FS R&D collaborates with multiple federal agencies on issues related to climate change. 
For example, FS R&D is involved in the U.S. Global Change Research Program, which coordinates and integrates federal research on changes in the global environment and their implications for society. Thirteen federal departments and agencies participate in the program, including the departments of Commerce, Defense, and Energy, and the National Aeronautics and Space Administration. FS R&D also works directly with USGS on a number of climate change initiatives. For example, USGS is developing eight climate change response centers around the country; the Forest Service is on the steering committee for the centers, and FS R&D and USGS will conduct joint research out of these centers. FS R&D is also involved in a number of interagency efforts at regional and local levels. For example, FS R&D is working with multiple federal agencies in a variety of climate change partnership efforts. One such partnership is the Consortium for Integrated Climate Research in Western Mountains, a network of scientists, resource managers, and policymakers from the Forest Service, the National Oceanic and Atmospheric Administration, USGS, and universities that promotes climate monitoring, research, communication, and decision support in the West. FS R&D is also involved in the Great Basin Resource and Management Partnership, through which FS R&D and a number of other federal agencies, including the Bureau of Land Management, the Fish and Wildlife Service, and USGS, as well as nonfederal entities such as universities and nongovernmental organizations, are working to better link research to management in the Great Basin, considered by some scientists to be one of the most endangered ecoregions in the United States. 
At the local level, officials told us that in Alaska, scientists from USGS and FS R&D worked together on a joint project to forecast shifts in polar bear populations because of climate change, work influential in the listing of the polar bear as a threatened species under the Endangered Species Act. In the Southern Research Station, the Coweeta Hydrologic Laboratory was designated as a National Science Foundation Long-Term Ecological Research Site in 1980. At this site, FS R&D, the National Science Foundation, and the University of Georgia share facilities, staff, equipment, and funding to coordinate research on rainfall, evaporation, and stream flow. In general, according to many officials from FS R&D and other agencies, FS R&D’s scope of work complements, rather than duplicates, other agencies’ work. For example, while FS R&D and the Agricultural Research Service both do research on plants, FS R&D focuses mainly on trees while the Agricultural Research Service focuses on herbaceous (nonwoody) crops, resulting in minimal overlap, according to officials. Similarly, FS R&D and USGS both conduct water research, but the bulk of FS R&D’s research on water focuses on forest systems and wildland fire, according to officials, while USGS’s water program has more breadth and provides more of a “census report” of water, including information on water supply and quality. The generally complementary, rather than overlapping, nature of research prevails in part because FS R&D’s structure and mission differ from those of other federal agencies conducting research, according to FS R&D and other agency officials we spoke to. Several officials at various agencies told us that FS R&D’s unique position as part of a land management agency gives its work a specific focus that tends not to overlap with the work of other federal research agencies, which are primarily research agencies with no land management responsibilities. 
FS R&D officials also reported several challenges that impede their ability to conduct their day-to-day research, including computing and information technology, human capital, and other administrative issues. Many FS R&D officials and scientists told us that issues related to computer and information technology impede their ability to carry out their work. FS R&D officials explained that researchers generally require greater computing capacity than most other Forest Service employees; for example, many researchers collect substantial amounts of data and develop and use complex software applications. To understand the specific information technology needs of FS R&D, an official from the Chief Information Office for the Forest Service conducted a review of technical challenges for FS R&D, which began in August 2007 and produced an internal report in January 2009. The report identified a number of “priority issues,” along with recommendations, some of which were also frequently mentioned during our interviews with FS R&D officials. These include insufficient customer service and support, with multiple days needed to resolve routine computer issues; the long technical approval process for researchers to use technology outside current Forest Service information architecture; and insufficient computing capacity, which can require researchers to rely on partners such as universities to store and run FS R&D data and programs. Since the report was issued, the Chief Information Office has taken some steps to address cited issues, and some FS R&D officials told us that information technology support is improving. For example, officials told us that the information office created a customer service representative specifically for FS R&D and is revamping its system for entering requests for technical approval. In addition, officials told us the information office has implemented a pilot project aimed at improving high-end computing capacity. 
Several FS R&D officials told us that the Forest Service’s hiring process sometimes impedes research. Human resource management was one of the administrative functions the Forest Service centralized, a move that may have contributed to dissatisfaction with the hiring process because research stations no longer have human resource support on site as they did in the past. Many FS R&D officials complained about the length of that process, pointing out that, because the process can take so long, temporary employees may begin work later than anticipated, shortening the time they have to collect data for research projects. In some cases, data can be collected only in certain months of the year; for example, the field season in high alpine areas may be limited to a short period in the summer, exacerbating the effects of hiring delays. In addition, according to officials, the length of the process can sometimes cause research stations to lose good candidates, if those candidates choose another employer who can hire them more quickly. Another issue that FS R&D faces when hiring new employees is that position descriptions are sometimes changed by the Forest Service’s Human Resource Management office because employees there may not understand the unique needs of research stations, according to FS R&D officials. In scientific research, specific qualifications need to be considered in filling research positions. For example, some officials told us that a researcher may need to hire a technician who can mimic certain bird calls and will include that requirement in the position description. Human resource management officials, however, may take the specification out because they think it is superfluous and too specific. Likewise, two research positions with the same title might require different skills or expertise, but, according to officials we interviewed, human resource management officials may not understand the distinctions. 
Administrative and legal challenges were also cited as hampering FS R&D research. For example, the Paperwork Reduction Act contains review requirements associated with developing surveys, which FS R&D researchers told us are an obstacle to using surveys to obtain information from nonfederal stakeholders. The act prohibits federal agencies from conducting or sponsoring information collection unless they have prior approval from the Office of Management and Budget. The act requires that information collection be approved by the office when facts or opinions are solicited from 10 or more people, including through surveys, questionnaires, and focus groups. FS R&D officials told us that this process is long and arduous—noting that it can take 1 to 2 years to get surveys approved—which can prevent researchers from obtaining timely information and sometimes dissuade them from administering surveys to nonfederal stakeholders so as to avoid the process entirely. Consequently, these researchers rely more heavily on federal stakeholders to obtain input, use secondary data that already exist, or depend on external partners to collect information for them. The requirements associated with the act affect social science in particular, according to officials, because social scientists tend to rely more heavily on data developed through surveys and questionnaires than do scientists from other disciplines. An additional legal and administrative challenge noted by FS R&D officials is that the agency is restricted from directly applying for certain funding sources. Under the National Science Foundation’s grant policy, this agency does not normally support research or education activities by scientists, engineers, or educators employed by other federal agencies. Accordingly, FS R&D does not apply for National Science Foundation grants (and some other grants) as the principal investigator and funding recipient. 
Rather, FS R&D must work with a nonfederal entity (e.g., a university) that applies for this funding, meaning that the nonfederal entity becomes the principal investigator and funding recipient. Some officials believed these grants should be open to the entire science community and noted that funding FS R&D directly may be more efficient because FS R&D researchers may have expertise in certain areas, as well as the ability to maintain long-term research. The breadth of the research carried out by FS R&D, and the value placed on that work by the many who use it, reflects the agency’s efforts to produce high-quality scientific information and tools to help manage our nation’s forests and rangelands. This research is likely to be even more important in the future, as a complex web of increasing stresses on ecosystems crisscrossing multiple ownership boundaries tests the ability of land managers, policymakers, and others to respond. FS R&D has positioned itself to respond to these stresses, as evidenced by its research into climate change, wildland fire, invasive species, and other topics of immediate interest, by the steps it has taken to help ensure its research is relevant, and by its emphasis on cross-cutting research that spans multiple issues, ecological settings, research partners, and customers. But research is only part of FS R&D’s mission, and the ultimate success of the research program depends on effective ways to deliver the resulting knowledge and technology. Recognition is growing on the part of FS R&D management that more emphasis needs to be placed on this process, as shown by the steps taken to (1) create the National Science Application Team, (2) increase emphasis on science delivery at the research station level, and (3) commission a science delivery review in 2009. 
Nevertheless, the agency has not fully assessed the effectiveness of its efforts to improve science delivery, which remains a largely ad hoc process that is often subject to the availability and interests of individual scientists. Part of this unevenness arises because individual performance assessments emphasize research and science delivery through peer-reviewed publications more than other methods of science delivery that often convey research results and the use of those results to broader audiences. Without assessing the adequacy of steps taken to improve the agency's science delivery efforts—and without ensuring that individual performance assessments appropriately value and reward these other methods of science delivery—the benefits of FS R&D's extensive research efforts may not be fully realized. To maintain and strengthen the science delivery role of FS R&D and help the agency capitalize on the steps it has taken in this area, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following two actions:

Assess the effectiveness of recent steps FS R&D has taken to improve science delivery from FS R&D to land managers and other stakeholders, including the extent to which these steps have helped ensure that FS R&D's work is disseminated beyond the agency and communicated to its broad range of potential stakeholders. In assessing the effectiveness of these steps, the Chief should consider the recommendations of the Forest Service's 2009 assessment of science delivery.

Take steps to ensure that individual performance assessments better balance the various types of science delivery activities.

We provided a draft of this report to the Forest Service for comment. The Forest Service agreed with our findings and recommendations, and noted several actions that it intends to take to improve science delivery. 
In particular, the agency will begin to assess the effectiveness of its recent steps to improve science delivery and commit additional resources to strengthen science delivery; it will amend its guidance for, and update its training on, holding evaluation panels for research scientists so that science delivery receives more emphasis; and it will continue to recognize and provide incentives for science delivery activities. The agency noted, however, that its flexibility to modify its approach to these evaluation panels is limited because it must follow Office of Personnel Management regulations and policies. The Forest Service's written comments are reproduced in appendix IV.

Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Chief of the Forest Service, and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

Our objectives were to identify (1) the scope of research and development carried out by Forest Service Research and Development (FS R&D) and some of its resulting accomplishments; (2) trends in resources used in performing FS R&D work and the effects of those trends on its research efforts and priorities; and (3) recent steps FS R&D has taken to improve its ability to fulfill its mission, and challenges it faces in doing so. 
To obtain information on the scope of FS R&D’s work and its accomplishments, we reviewed relevant laws, regulations, guidance, strategic plans, performance reviews, and historical documents and interviewed FS R&D officials at each of the seven research stations and the Washington Office. We visited five research stations in person (the Northern, Pacific Northwest, Pacific Southwest, Rocky Mountain, and Southern research stations) and interviewed officials from the other two research stations by telephone. Within each of the research stations, we interviewed a variety of officials, including the station directors, budget officers, human resource management officials, scientists, and others. At the Washington Office, we interviewed the Forest Service’s Deputy Chief of Research and Development, the directors of FS R&D’s four major science areas, and the acting and former National Science Application Team coordinators. To obtain stakeholders’ views about FS R&D’s activities and accomplishments, we conducted semistructured interviews of National Forest System and State and Private Forestry officials from the Washington Office and all nine Forest Service regions, including each Regional Forester or designee, as well as nonagency stakeholders representing a variety of interests such as industry, academia, and others. These stakeholders included the American Forest and Paper Association, the National Association of University Forest Resource Programs, the National Association of State Foresters, the National Woodland Owners Association, and others. To identify trends in resources used by FS R&D and the effects of those trends on research efforts and priorities, we obtained and analyzed spending and personnel data and interviewed scientists and officials at its research stations and the Washington Office. 
To identify spending trends for FS R&D, we obtained outlay data for fiscal years 2000 through 2009 from the Department of Agriculture's Foundation Financial Information System, including data on spending using both Forest Service appropriations and external funding. We analyzed these outlays by spending category (e.g., personnel, grants and agreements, training) for FS R&D as a whole and for each of the research stations and the Washington Office. To identify the sources of external support, as well as total external funding and the number of projects supported, we obtained and analyzed data from I-Web, a Forest Service database used to track agency agreements. Because I-Web was established in 2005, we were able to report detailed information about external support only for fiscal years 2005 through 2009. We analyzed outlay and external support data in both nominal (actual) and constant (adjusted for inflation) terms. Adjusting nominal dollars to constant dollars allows the comparison of purchasing power across fiscal years. To adjust for inflation, we used the gross domestic product price index with 2009 as the base year. To identify effects of resource trends on FS R&D's work, we interviewed scientists and officials at the research stations about these trends and how they have affected research efforts and priorities. To corroborate officials' statements about their hiring practices and staffing levels, we analyzed the Department of Agriculture's National Finance Center data on permanent, temporary, and term employees provided to us by the FS R&D Washington Office for fiscal years 2006 through 2009; data from previous fiscal years were not available for analysis. We assessed the reliability of the spending, funding, and personnel data we used in our report by reviewing the methods of data collection and entry for these databases and determined that the data were sufficiently reliable to use in this report. 
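The nominal-to-constant-dollar adjustment described here follows a standard deflator calculation. A minimal sketch is below; the price-index values are illustrative placeholders, not actual gross domestic product price index figures.

```python
# Sketch of a constant-dollar (inflation) adjustment using a price
# index with a chosen base year. Index values here are made up for
# illustration only -- NOT actual GDP price index data.

def to_constant_dollars(nominal, price_index, base_year):
    """Rescale nominal outlays (keyed by fiscal year) into base-year dollars."""
    base = price_index[base_year]
    return {year: amount * base / price_index[year]
            for year, amount in nominal.items()}

# Hypothetical outlays and index levels for two fiscal years,
# with the base year's index set to 100.
nominal_outlays = {2000: 100.0, 2009: 130.0}
price_index = {2000: 80.0, 2009: 100.0}

constant = to_constant_dollars(nominal_outlays, price_index, base_year=2009)
# FY2000 outlays expressed in 2009 dollars: 100.0 * 100/80 = 125.0
```

Expressing every year in base-year dollars in this way is what allows purchasing power to be compared across fiscal years, as the methodology notes.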
Finally, to identify steps FS R&D has taken to improve its ability to fulfill its mission and challenges it has faced in doing so, we reviewed relevant laws, regulations, guidance, strategic plans, performance measures, and recent research capacity and program assessments. We also relied on our interviews with FS R&D officials at the research stations and the Washington Office, and interviewed officials from the Forest Service’s Chief Information Office to learn about FS R&D’s computer and information technology challenges and what steps the office is taking to address them. In addition, during our interviews of National Forest System and State and Private Forestry officials and representatives from industry, state government, and nonfederal groups, we asked their views of the relevance of FS R&D work and what, in their opinion, could be done to improve it. To determine the extent to which FS R&D coordinates its work with other federal agencies to avoid unnecessary duplication of research, we also interviewed officials from other agencies that conduct research similar to that of FS R&D. To identify other federal agencies, we relied on the results of our interviews with FS R&D officials and stakeholders and reviewed National Science Foundation data to identify any additional agencies that conduct research and development similar to FS R&D that were not identified by the officials we interviewed. From our comprehensive list of federal agencies, we selected a nongeneralizable sample of five agencies: the Agricultural Research Service within the Department of Agriculture; the Office of Energy Efficiency and Renewable Energy within the Department of Energy; the Environmental Protection Agency; the National Oceanic and Atmospheric Administration within the Department of Commerce; and the U.S. Geological Survey within the Department of the Interior. 
We also reviewed results from the American Customer Satisfaction Index, the survey FS R&D uses to assess customer satisfaction. Although the response rate for this survey was limited, it is comparable to the rates obtained in surveys used to assess customer satisfaction with other agencies.

We conducted this performance audit from October 2009 through October 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following seven figures depict and identify the mission, geographic coverage, research facility locations, and research subject areas of the seven FS R&D research stations.

This appendix provides data on FS R&D spending and personnel trends across the research stations. Table 2 shows yearly spending by the research stations. Table 3 shows, for each research station, FS R&D spending using external funding from fiscal years 2000 through 2009, as well as the average annual change in funding during that period. Table 4 shows the amount of external funding provided to FS R&D from fiscal years 2005 through 2009, by source. Because other federal agencies provide the majority of external support to FS R&D, their contributions are shown by agency. Table 5 shows the number of FS R&D employees by employee type—permanent, term, and temporary—for each research station during fiscal years 2006 through 2009.

In addition to the individual named above, Steve Gaty, Assistant Director; Ulana Bihun; Ellen W. Chu; Carol Henn; Richard P. Johnson; Paul Kazemersky; Lesley Rinner; Kelly Rubin; Jacqueline Wade; Tama Weinberg; and Melissa Wolf made key contributions to this report. 
| In recent decades, managing the nation's public and private forests and rangelands has become increasingly complex, requiring a sound understanding of science and science-based tools to address these complexities. The Department of Agriculture's Forest Service maintains a research and development program (FS R&D) to help provide scientific information and tools. GAO was asked to examine (1) the scope of research and development carried out by FS R&D and some of its resulting accomplishments, (2) trends in resources used in performing FS R&D work and the effects of those trends on its research efforts and priorities, and (3) recent steps FS R&D has taken to improve its ability to fulfill its mission and challenges it faces in doing so. In conducting this review, GAO analyzed FS R&D funding data for fiscal years 2000 to 2009 and staffing data for fiscal years 2006 to 2009 and interviewed officials from FS R&D, other federal agencies, and nonfederal entities. The scope of FS R&D's work spans a range of research organized into seven strategic program areas: invasive species; inventory and monitoring; outdoor recreation; resource management and use; water, air, and soils; wildland fire; and wildlife and fish. Using funds appropriated to it, as well as funds from authorized external sources such as universities and other federal agencies, FS R&D operates a system of research stations, which in turn oversee laboratories, experimental forests, and other research locations nationwide. According to end users of FS R&D's work, its accomplishments are many and varied, including the Forest Inventory and Analysis program, which provides long-term data on the nation's forests; efforts to identify and control invasive pests; and software applications to quantify the environmental benefits of urban forests. 
Nevertheless, end users also identified areas requiring additional attention by FS R&D, such as social science research to better understand human interaction with natural resources. Overall, spending by FS R&D--using both its own appropriated funds and resources from external sources--remained relatively flat during fiscal years 2000 through 2009, with an average annual increase of 3.2 percent, or 0.8 percent when adjusted for inflation; funding from external sources represented a small but growing portion of the total. Trends in spending varied across research stations, with some experiencing increases and others decreases. In response to these trends, many stations reduced their staffing levels and increasingly sought support from external sources. While doing so has had advantages, it has changed the way FS R&D carries out its work and sets research priorities. For example, because external funding is often short term in nature, reliance on this funding may lead FS R&D to address more short-term research issues. FS R&D has taken steps to improve its ability to fulfill its mission in several areas, including increasing its efforts to deliver knowledge and tools to end users and involving end users in setting research agendas; improving funding allocation processes; and increasing coordination with other federal research agencies. Despite these efforts, challenges persist, particularly in the area of science delivery--that is, how research results are communicated. While FS R&D has created a more formal system for science delivery at multiple levels within the agency, and several research stations have specific programs dedicated to science delivery, numerous officials and end users told GAO that FS R&D places greater emphasis on peer-reviewed journals as a means of science delivery than on other types of science delivery efforts, such as workshops, that are often more useful to end users. 
According to these officials, the performance assessment system for FS R&D researchers often reinforces this bias. Without improved delivery of research results, land managers and others may be unable to fully benefit from the agency's work. FS R&D officials also reported several challenges that impede their ability to conduct their day-to-day research, including computing and information technology, human capital, and other administrative issues. GAO recommends that the Forest Service assess the effectiveness of recent steps FS R&D has taken to improve science delivery and take steps to ensure that individual performance assessments better balance the various types of science delivery activities. In commenting on a draft of this report, the Forest Service agreed with GAO's findings and recommendations. |
FWS, operating through its headquarters and eight regional offices, is responsible for managing 1,586 of the listed species found in the United States (see fig. 1). NMFS, operating through its headquarters and five regional offices, is responsible for managing 96 listed species. The Services have also proposed an additional 41 species for listing but, as of January 2017, had not yet made a final determination on listing those species. Additionally, since enactment of the ESA, the Services have delisted 76 species—47 as a result of recovery efforts, 10 due to the species’ extinction, and 19 because of data errors in the original listing. The Services’ Section 4 programs encompass all actions related to listing or delisting species, designating or revising critical habitat, and conducting 5-year status reviews for listed species. The process of listing or delisting a species begins either with a petition submitted by an individual, group, or state agency or with a review initiated by one of the Services. For petitions to list a species, the Service with jurisdiction over the species follows a multi-step process to determine if the listing of the species is warranted, as depicted in figure 2. For the species that the relevant Service determines warrant listing, the Service issues and publishes a proposed rule in the Federal Register. The issuance of a proposed or final rule to list a species is generally governed by procedures prescribed in the ESA and the Administrative Procedure Act, such as providing opportunities for the public to submit additional information and comment on proposed rules. After evaluating any additional information and comments, if the relevant Service determines that the species is threatened or endangered, it generally issues a final rule to add the species to the respective list. 
FWS also maintains a “candidate” list for species it determines warrant listing but whose immediate listing is precluded by work on higher-priority listing actions, such as actions for other species facing greater threats. Each year FWS publishes a Candidate Notice of Review that documents the Service’s re-evaluation of the status and threats facing each candidate species to determine whether the species should be removed from the candidate list and either proposed for listing or withdrawn from further consideration. As of December 2016, there were 30 species identified by FWS as candidates for listing. When a species is proposed for listing, the act requires the Services to concurrently consider whether there are areas essential to the species’ conservation and, if so, to propose designation of critical habitat for the species. Critical habitat may include areas occupied by the species—such as areas that provide food, water, cover or shelter, or sites for breeding and rearing offspring—as well as unoccupied areas that the Services determine are essential for the conservation of the species. As of January 2017, the Services had collectively designated critical habitat for 846 species listed as endangered or threatened in the United States. In addition, Section 4 of the ESA requires the Services to review the status of each listed species at least once every 5 years. The purpose of the 5-year status review is to evaluate whether a listed species should be delisted, reclassified from an endangered to threatened species (downlisted) or from a threatened to endangered species (uplisted), or if its classification should not change. In 1982, Congress amended the ESA to establish statutory deadlines for the Services to complete Section 4 actions associated with listing, delisting, critical habitat designations or revisions, and 5-year status reviews. 
According to the accompanying Conference Committee report, the intended purposes of the amendments were to “expedite the decisionmaking process and to ensure prompt action in determining the status of the many species which may require the protections of the Act.” Congress also amended Section 11 of the act to authorize citizens to file suits against the Services for failing to perform actions by the deadlines imposed under Section 4. Each of the specific Section 4 actions and their associated statutory deadlines are described in table 1. For decades, FWS has faced challenges in implementing its Section 4 program, in part because of a high volume of litigation and petitions seeking to add a large number of species to the threatened and endangered species lists. For example, in 2007, FWS received two “mega-petitions,” collectively requesting the listing of 674 species in the Southwest and Mountain-Prairie regions. In 2010, another “mega-petition” was submitted requesting the listing of 404 southeast aquatic species. During fiscal years 2005 through 2015, FWS received 170 petitions to list 1,446 species. According to a 2010 FWS report to Congress, petitions to list species are an integral aspect of endangered and threatened species protection. The report further stated, however, that FWS does not have the capability to postpone action on petitions because of statutory deadlines or to balance that work with other Section 4 program actions. The report also indicated that any delay in making a petition finding could lead to litigation for which FWS has no sufficient legal defense. As a result, with limited resources and a significant petition workload with statutory deadlines, FWS has been vulnerable to and has experienced a high volume of litigation that has affected much of FWS’s Section 4 program since the early 1990s. 
Beginning in fiscal year 1998, and in each year thereafter, annual appropriations acts have established statutory caps on the funds available for FWS to implement certain provisions within its Section 4 program. According to FWS officials, the initial spending cap was established to limit the amount that could be spent on listing actions so that funds would be available for other Section 4 actions, such as developing and completing recovery plans. Subsequent appropriations acts established additional spending caps specific to 90-day and 12-month petition findings, critical habitat designations, and foreign species-related listing actions. During fiscal years 2005 through 2015, overall funding for FWS’s listing and critical habitat actions averaged about $20 million per year (see fig. 3). Based on our review, we found that a variety of plaintiffs filed 141 deadline suits against the Services for allegedly failing to comply with statutory deadlines for Section 4 actions involving 1,441 species during fiscal years 2005 through 2015. Approximately 86 percent of the suits (122 of 141) were filed against FWS, about 10 percent (14 of 141) were filed against NMFS, and about 4 percent (5 of 141) were filed against both Services (see app. I for a list of the 141 deadline suits). On average, about 13 deadline suits were filed each fiscal year, ranging from 5 deadline suits in fiscal year 2015 to 33 suits in fiscal year 2010 (see fig. 4). The deadline suits filed against the Services involved allegedly missed deadlines across the range of Section 4 actions, including listing, delisting, designating or revising critical habitat, and conducting 5-year status reviews. Figure 5 provides information on the number of suits that were filed across the 11-year period based on the specific Section 4 action involved. Additionally, the 141 deadline suits included Section 4 actions for a total of 1,441 unique species (see app. II). 
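The shares and per-year average reported above follow directly from the counts; the following is a quick arithmetic check, a sketch using only figures given in this report:

```python
# Deadline suits filed against the Services during fiscal years
# 2005 through 2015, using the counts reported above.
total_suits = 141
against_fws = 122
against_nmfs = 14
against_both = 5
fiscal_years = 11  # fiscal years 2005 through 2015, inclusive

# The three categories are mutually exclusive and cover every suit.
assert against_fws + against_nmfs + against_both == total_suits

for label, count in [("FWS", against_fws), ("NMFS", against_nmfs),
                     ("both Services", against_both)]:
    share = 100 * count / total_suits
    print(f"{label}: {count} of {total_suits} ({share:.1f}%)")

print(f"average suits per fiscal year: {total_suits / fiscal_years:.1f}")
```

This reproduces the approximately 86 percent, 10 percent, and 4 percent shares and the average of about 13 suits per fiscal year cited above.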
The majority of the suits (93 of 141) centered on an action for a single species, such as allegedly missing the deadline to issue a 90-day finding on a petition to list a specific species, but about one-third of the suits (48 of 141) involved actions for multiple species. For example, a 2009 suit filed by WildEarth Guardians alleged that FWS failed to make 90-day findings for two petitions it had submitted to list 674 Rocky Mountain and Southwestern species. Similarly, in 2005, California State Grange—a nonprofit organization promoting agriculture and rural family farm units in California—filed a suit against FWS for allegedly failing to conduct 5-year status reviews for 194 listed species located in California. Factoring in species involved in the suits as well as the specific Section 4 actions at issue, we found that collectively, the deadline suits comprised a total of 1,673 actions. FWS was responsible for the majority of the actions (1,545 of 1,673), NMFS was responsible for 120 of the 1,673 actions, and the two agencies worked jointly on 8 actions. Table 2 provides a breakdown—by fiscal year and type of Section 4 action—of the total number of actions involved across the deadline suits filed against the Services during fiscal years 2005 through 2015. See appendix II for additional information on the number and type of actions specific to each agency. Across the deadline suits filed during fiscal years 2005 through 2015, 44 different lead plaintiffs representing a variety of interests filed suits against the Services (see table 3). However, two environmental groups, the Center for Biological Diversity and WildEarth Guardians, collectively filed more than half of the suits (73 of 141). The Center for Biological Diversity was the most active plaintiff, filing a total of 46 deadline suits against the Services over this period for allegedly missing deadlines for completing 90-day and 12-month findings on petitions to list hundreds of species. 
Trade associations, representing businesses and industry such as the California Cattlemen’s Association and Florida Home Builders Association, filed suits against FWS for allegedly missing deadlines related to 90-day and 12-month findings on petitions to delist threatened and endangered species as well as allegedly missing deadlines in conducting 5-year status reviews for a number of species. Based on our analysis, we found that the majority of ESA Section 4 deadline suits filed in fiscal years 2005 through 2015 were resolved through negotiated settlement agreements that established schedules for the Services to complete the actions involved in the suits. Otherwise, the settlement agreements did not affect the substantive basis or procedural rule-making requirements the Services were to follow in completing the actions. Officials from both Services said they prioritized completing actions included in settlement agreements in implementing their Section 4 workloads. NMFS officials indicated that the deadline suits and their resulting settlement agreements did not have a significant effect on the implementation of the agency’s Section 4 program. In contrast, FWS has delayed completing some Section 4 actions to complete those included in settlement agreements. FWS has initiated several changes to its Section 4 program to help prioritize the order in which it addresses its backlog of hundreds of overdue actions and to help increase the efficiency of its Section 4 program, including revising information requirements for listing petitions. The Services resolved the majority of deadline suits filed during fiscal years 2005 through 2015 by negotiated settlement agreements, whereby the parties generally agreed on a schedule for the Services to complete the Section 4 actions at issue in the suits. Specifically, the Services resolved about 72 percent of the suits (101 of 141) through negotiated settlement agreements (see table 4). 
About 22 percent of the suits (31 of 141) were resolved through voluntary or unopposed dismissal, primarily because the Services had completed the actions involved in the suits and nothing further remained to be litigated. The remaining 9 deadline suits, all involving FWS, were resolved by a court order. Specifically, the courts dismissed 6 of the suits, ruling in favor of FWS. In the other 3 suits, the courts issued orders directing FWS to complete the Section 4 action at issue by an established schedule. According to officials from DOJ and the Services, the agencies coordinate in deciding how to respond to a deadline suit, including whether or not to negotiate a settlement with the plaintiff or proceed with litigation. In reaching its decision, DOJ considers several factors, including whether there may be a legal defense to the suit—such as providing information establishing that the agency took action on the finding at issue or that the plaintiff lacked standing—and the likelihood that the government could obtain a favorable outcome. The officials said that most deadline suits are resolved through a negotiated settlement agreement because in the majority of them, it is undisputed that a statutory deadline was missed. When negotiating the terms of a settlement agreement, DOJ officials said they consult with the Services to evaluate their workload, priorities, and available resources to propose a reasonable deadline for making the decisions agreed to under the settlement. DOJ officials said they are guided by a 1986 DOJ memorandum—referred to as the Meese Memorandum—in negotiating settlement terms. Accordingly, officials from DOJ and the Services stated that any agreement to settle a deadline suit would only include a commitment to perform a mandatory Section 4 action by an agreed-upon schedule and would not otherwise predetermine or prescribe a specific substantive outcome for the actions to be completed by the Services. 
Similarly, for those suits resolved by a court order, DOJ officials said they present what they believe is a reasonable time frame for the court to consider in establishing a schedule for the Services to complete the action. Most settlement agreements established time frames specific to the Section 4 action at issue, but in some settlement agreements, the Services also agreed to complete additional, related actions within certain time frames. For example, several settlement agreements contained provisions for the Services to complete an action by a certain date as well as a related, contingent action by the applicable Section 4 statutory deadline, such as a 12-month finding for a listing petition if the 90-day finding concluded that the listing of a species may be warranted. Additionally, in 2010, DOJ sought to have multiple deadline suits—filed by the Center for Biological Diversity and WildEarth Guardians against FWS and pending in several district courts—transferred and consolidated by the Judicial Panel on Multidistrict Litigation (MDL panel). The MDL panel consolidated 15 deadline suits in the federal district court for the District of Columbia, and in 2011, FWS reached a separate settlement agreement with each of the plaintiffs in these suits. The settlement agreements primarily established schedules for FWS to make hundreds of 90-day and 12-month findings on listing petitions. In the settlement agreement with WildEarth Guardians, FWS also agreed to make either not-warranted findings or proposed and final listing determinations for the 251 species that were candidate species in 2010. Each agreement also included provisions specifying that, for any action resulting in a proposed listing rule, the final listing determination would be made within the one-year period prescribed in the statute. 
In exchange for these commitments made by FWS, each of the plaintiffs agreed to limits on filing additional listing petitions and deadline suits until the terms of the agreements conclude in fiscal year 2017. According to FWS officials, consolidating these suits and entering into the two settlement agreements helped make FWS’s Section 4 workload more predictable, essentially establishing a five-year work plan that reflected the agency’s priorities for completing overdue Section 4 actions for hundreds of species. Other than agreed-upon schedules for completing Section 4 actions, the settlement agreements and court orders did not affect the substantive basis or procedural rule-making requirements the Services were to follow in completing the actions. For example, the settlement agreements contained provisions specifying that nothing in the agreement should be interpreted to limit or modify the discretion afforded to the Services under the ESA. Similarly, the provisions also stated that the agreements did not change any of the procedures to be followed, or the substance of, any rulemaking action to be completed under the agreement, such as opportunities for public comment on proposed listing rules. These opportunities include submitting comments and additional information to be considered during the status review accompanying a 12-month finding on a listing petition, notice and public comment period on any proposed rule to list a species or designate or revise critical habitat, and notice of issuance of any final rule. Based on our analysis, we found that as of December 2016, the Services collectively completed 1,766 Section 4 actions related to the 104 suits that were resolved by settlement agreements and court orders entered into fiscal years 2005 through 2015. 
Table 5 provides a breakdown of the outcomes of the decisions the Services made related to these Section 4 actions, based on whether the Services made a positive finding—determining that a listing, delisting, or critical habitat-related action was warranted—or a negative finding, meaning that the Services generally found that the action at issue was not warranted. The Services prioritized completing actions included in settlement agreements and court orders, but FWS delayed working on some Section 4 actions to complete those covered in the agreements and orders. In implementing the Services’ Section 4 programs, officials from both Services said they prioritized completing actions included in settlement agreements and court orders above other Section 4 actions. According to NMFS officials, deadline suits and their resulting settlement agreements during fiscal years 2005 through 2015 did not have a significant effect on the implementation of their Section 4 program. NMFS officials said this is largely because NMFS is responsible for fewer species than FWS, has not received as many of the “mega-petitions” for listing species that FWS has, and has largely been able to manage its workload without being compelled to act in response to deadline suits. The officials added that in many instances in which a deadline suit was filed, NMFS was already working on the Section 4 action at issue, and therefore making a decision by an agreed-to time frame did not significantly alter how NMFS implemented its Section 4 program workload. In contrast to NMFS, FWS has delayed completing some Section 4 actions, including those with statutory deadlines, to complete actions included in settlement agreements and court orders, according to FWS documentation. For fiscal years 2005 through 2015, FWS officials said they have focused much of their Section 4 program on completing actions required under settlement agreements and court orders. 
This focus has been particularly evident since 2011, when FWS entered into the two MDL settlement agreements that established a five-year workplan for completing hundreds of listing and other Section 4 actions by the end of fiscal year 2017. To fulfill its commitments under these agreements, FWS’s efforts related to listing have required the use of substantially all of its petition and listing budgetary resources, according to FWS documents. In focusing on completing the actions covered by the two MDL settlement agreements, FWS documents indicated that the agency was limited in its ability to undertake work on additional Section 4 actions outside of the agreements. For example, according to an FWS press release announcing positive 90-day listing petition findings for 374 southeastern aquatic species included in one of the 2011 MDL settlement agreements, FWS stated that it was unable to complete 12-month status reviews for these species until fiscal year 2017. The agency explained that this was because of existing commitments made under various settlement agreements and court orders. According to FWS documents, the agency has not had sufficient resources to complete its backlog of overdue actions, and with anticipated resources it has the capacity to complete only a limited number of actions per year. As of September 2016, FWS’s backlog of overdue Section 4 actions included nearly 600 12-month findings on listing petitions and other listing-related actions that FWS has been unable to address while it focused on completing its litigation-related workload. To help prioritize the order in which it addresses its backlog and to help increase the efficiency of its Section 4 program, FWS has initiated several changes to its program. For example, starting in October 2015, FWS implemented a streamlined process for publishing its 90-day and 12-month findings in the Federal Register. 
Instead of issuing each decision individually, as was done in the past, the streamlined process bundles all 90-day findings on a quarterly basis and 12-month findings biannually and publishes those decisions collectively in the Federal Register. FWS officials said that they anticipate this streamlined approach will result in administrative efficiencies and reduced publishing costs. In March 2016, FWS established a Unified Listing Team with the goal of promoting a more consistent, efficient, and timely petition review process. An initial activity this team undertook included developing a National Listing Workplan for fiscal years 2017-2023. This 7-year workplan lays out a plan for addressing FWS’s backlog of listing petition findings and critical habitat decisions. According to FWS documentation, the workplan will help enable the agency to more effectively and efficiently administer its workload based on the needs of candidate and petitioned species while providing greater clarity and predictability to the public about the timing of its actions. In developing the workplan, FWS utilized its prioritization methodology that was finalized in July 2016. The prioritization methodology outlines the order of priority that FWS will give to species in making 12-month findings on listing petitions, giving highest priority to species considered to be critically imperiled. FWS officials said the agency’s ability to implement its workplan as scheduled is subject to change based on future funding and litigation, which may require FWS to reprioritize its workload. In addition, in September 2016, the Services jointly issued a final rule revising regulations that outline the process and information required for listing petitions. 
The Services stated that the purposes for the revisions were “to improve the content and specificity of petitions to enhance the efficiency and effectiveness of the petition process to support species conservation.” Among other revisions, petitions will be limited to one species per petition, and petitioners will be required to provide a “complete, balanced representation of the relevant facts” with respect to the Services’ initial 90-day finding. According to officials from the Services, improving the quality of information submitted in support of listing petitions will help enable the Services to more efficiently process the petitions and issue decisions in a timelier manner.

We provided a draft of this report for review and comment to the Department of Commerce, the Department of the Interior, and the Department of Justice. The Departments of Commerce, the Interior, and Justice each provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Attorney General, the Secretary of Commerce, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.

Section 4 deadline suits include citizen suits filed against the U.S. Fish and Wildlife Service (FWS) and National Marine Fisheries Service (NMFS) to compel compliance with statutory deadlines for certain actions under Section 4 of the Endangered Species Act (ESA). 
Section 4 of the act includes statutory deadlines for the Services to complete certain mandatory actions, including making findings on petitions to list or delist species, designating or revising critical habitat, and conducting 5-year status reviews of listed species. Table 6 provides information on each of the deadline suits filed against the Services during fiscal years 2005 through 2015, including the date the suit was filed, the district court in which it was filed, a summary of the Section 4 action at issue, and the disposition of the suit.

Section 4 deadline suits include citizen suits filed against the U.S. Fish and Wildlife Service (FWS) and National Marine Fisheries Service (NMFS) to compel compliance with statutory deadlines for certain actions under Section 4 of the Endangered Species Act. Section 4 of the act includes statutory deadlines for the Services to complete certain mandatory actions, including making findings on petitions to list or delist species, designating or revising critical habitat, and conducting 5-year status reviews of listed species. Table 7 provides information on the taxonomic groups of species involved in the Section 4 deadline suits filed during fiscal years 2005 through 2015. Table 8 provides information on the distribution of the species managed by FWS and NMFS as well as distribution by each of the agency’s respective regions. Table 9 provides information on the number of actions involved in the Section 4 deadline suits by agency.

In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Carolyn Blocker, Ellen Fried, Cindy Gilbert, Richard Johnson, Michael Meleady, Sara Sullivan, and Manuel Valverde made key contributions to this report. | To receive protection under the ESA—enacted to conserve at-risk species—a species must first be added to one of the federal lists of threatened or endangered species. 
FWS and NMFS jointly administer the ESA and have programs that encompass actions related to Section 4 of the ESA. Some of these actions—such as making findings on petitions filed by a person or group requesting addition or removal of species from one of the lists—must be completed by specific statutory deadlines. GAO was asked to review deadline litigation brought under Section 4 of the ESA. This report examines (1) the number and scope of deadline suits filed against the Services during fiscal years 2005 through 2015 under Section 4 of the ESA, and (2) the outcomes of these suits and the effect, if any, the suits had on the Services' implementation of their Section 4 programs. GAO reviewed the ESA and agency documents; obtained a list of Section 4-related suits filed during fiscal years 2005 through 2015 from the Department of Justice, which is responsible for representing the Services; identified from the list those that were deadline suits and compared the list with other sources to confirm reliability; analyzed the suits, including documentation on how they were resolved; and interviewed Justice, FWS, and NMFS officials. The agencies provided technical comments on this report. GAO found that plaintiffs filed 141 deadline suits against the U.S. Fish and Wildlife Service (FWS) and National Marine Fisheries Service (NMFS) for allegedly failing to take actions within statutory deadlines under Section 4 of the Endangered Species Act (ESA) during fiscal years 2005 through 2015 (see figure). Section 4 contains mandatory deadlines for such actions as making findings on petitions to list or delist species as threatened or endangered. The suits involved 1,441 species and cited a range of Section 4 actions, but most suits were related to missed deadlines for issuing findings on petitions to list species. 
Figure: Number of Endangered Species Act Section 4 Deadline Suits Filed, Fiscal Years 2005-2015

The majority of deadline suits filed during fiscal years 2005 through 2015 were resolved through negotiated settlement agreements that established schedules for the agencies to complete the actions involved in the suits. Agency officials said that most deadline suits are resolved through settlement because it is undisputed that a statutory deadline was missed. Other than setting schedules for completing Section 4 actions, the settlement agreements did not affect the substantive basis or procedural rule-making requirements the Services were to follow in completing the actions, such as providing opportunities for public notice and comment on proposed listing rules. Officials also said they prioritize completing actions in settlement agreements in implementing their Section 4 programs. NMFS officials indicated that work resulting from deadline suits did not have a significant effect on the implementation of their program, in part because NMFS has not had a high number of petitions to list species. In contrast, FWS has delayed completing some actions to complete those included in settlement agreements. FWS has initiated several changes to help improve Section 4 program implementation, including developing a 7-year workplan that prioritizes the order for completing overdue actions and revising information requirements for listing petitions. |
According to FDA, the goal of its regulatory science initiative is to develop and apply the best available scientific data, knowledge, methods, and tools to reduce uncertainty and make regulatory decisions more efficient and consistent. In doing so, the agency seeks to ensure public access to products that are manufactured or processed in a high-quality manner and monitored to ensure safety and quality during real-world use. FDA’s 2011 strategic plan identified eight priority areas for regulatory science, of which seven related to medical products, where the agency determined that new or enhanced engagement is essential to the continued success of the agency’s public health and regulatory mission. The agency added a ninth priority area, related to global product safety, in 2013. (See table 1.) FDA conducts work to advance regulatory science through intramural research and extramural collaborations, such as those with other government agencies, academia, industry, patient organizations, professional associations, and other stakeholders. Targeted funding for regulatory science at FDA comes from a number of centers and offices, including the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), and the Center for Devices and Radiological Health (CDRH), all within FDA’s Office of Medical Products and Tobacco. These centers are responsible for approving medical products (biologics, drugs, and devices, respectively) and monitoring their ongoing safety once approved.
In addition, other offices support the regulatory mission of FDA, such as the Office of International Programs (OIP) in the Office of Global Regulatory Operations and Policy, and those under the Office of the Chief Scientist—whose mission is to provide strategic leadership, coordination, and expertise—including the National Center for Toxicological Research (NCTR), the Office of Counterterrorism and Emerging Threats (OCET), the Office of Minority Health (OMH), the Office of Regulatory Science and Innovation (ORSI), and the Office of Women’s Health (OWH). (See fig. 1.) Additional broad regulatory science efforts may occur within these centers and offices as well as others at FDA. FDA does not have measurable goals, such as targets and time frames, to assess its progress in advancing regulatory science. Further, FDA does not consistently track information on center and office funding decisions for each FDA priority area, leaving it without important information needed to conduct strategic planning for the agency’s regulatory science priorities. FDA’s 2011 and 2013 strategic planning documents do not identify measurable goals, such as targets and time frames, for assessing progress in regulatory science. For example, the 2013 strategic planning document states that FDA will measure progress by enumerating and describing product-specific advisory committees and staff training opportunities related to the priority areas. However, the document sets neither targets nor time frames, nor does it establish further outcome-based measures for what FDA hopes to achieve for either a given priority area or regulatory science in general. Likewise, FDA’s 2011 strategic planning document presents strategies for addressing the priority areas, but the document does not include measurable goals within these strategies.
For example, it indicates that addressing the toxicology priority area would involve developing human and animal models to predict adverse responses, but it does not provide the number of models FDA intends to develop or the adverse responses for which the models are intended. FDA’s lack of specific measurable goals is reflected in the progress report that it completed for FDASIA. In that report, FDA includes examples of achievements, but it does not specify overall results, such as the number of tests and technologies developed for the efficacy and safety of medical products, which the 2013 planning document indicated would advance the manufacturing and quality priority area. In addition, FDA generally did not link the categories that it used to measure the adoption of regulatory science to a specific priority area. Specifically, in the progress report, only four of the eleven categories are linked to the FDA priority areas that they address. For example, while FDA reported one or more related FDA priority areas for each guidance document included in the progress report, it did not provide the same information for the reported training examples or the projects included in the Drug Development Tool Qualification programs. According to our work on leading practices for strategic planning, an agency’s strategic goals should explain what results are expected from the agency and when to expect those results. It is also critical that the strategic planning documents describing these goals include specific actions and implementation schedules for how the agency is to achieve these goals. Without measurable goals, clear targets, and implementation time frames, FDA cannot provide a complete assessment of progress made in the regulatory science areas it has designated as priorities. According to FDA officials, it is difficult to measure the progress made in the priority areas because the priority areas are very broad and the underlying science is continually changing.
Furthermore, the adoption of discoveries from regulatory science research can take years. Because the priority areas are managed across multiple organizational units, they are not being overseen by a specific center or office. However, we have previously recognized that while measuring the performance of science-related projects can be difficult, science-related agencies like FDA should still have clearly established goals. One of the centers that targets funding at regulatory science has a plan to track the progress of its regulatory science activities, but this plan does not include measurable goals that could be used to assess progress. Specifically, CDRH officials told us the center has drafted a logic model to identify and track the short- and long-term outcomes of funds it spends on regulatory science research. The center expects the model to be finalized in 2017. However, the current draft does not include measurable goals to assess progress. In addition, officials from most of the other centers and offices that obligated funds targeted at regulatory science activities in fiscal years 2010 through 2014 told us they are not developing such a model. One office also stated that the CDRH model may not be applicable to its work and that it may not have the resources needed to customize and implement such a model. In another effort, FDA officials indicated that the agency recently initiated internal discussions primarily to share information and, if feasible and desired, to harmonize best practices within the agency. FDA officials also said that this group would discuss evaluation strategies and processes and perhaps eventually arrive at a common FDA approach to thinking about research outcomes and impact. This group first met in October 2015 to discuss current efforts, and FDA officials reported in April 2016 that it is still in the early stages of sharing information.
FDA does not have the information necessary to track funding and conduct strategic planning agency-wide for its regulatory science priorities because most of the centers and offices did not collect information on the FDA priority areas that were addressed by the projects they funded. As a result, information on funding for each priority area is generally not readily available, and therefore the majority of these centers and offices had to retrospectively assign FDA priority areas to each project at the time of our review. Six centers and offices—CDRH, NCTR, OCET, OIP, OMH, and OWH—fund regulatory science projects that address multiple FDA priority areas, but generally did not collect information about how funds are distributed across those priority areas. All six had to retrospectively identify associated FDA priority areas for our review. For example, while CDRH and NCTR collect some information about research priorities for the projects they fund, the information they collect does not fully align with the FDA priority areas. Specifically, CDRH collects information on its own set of priorities and NCTR collects information on FDA goals. While this information is partially aligned with the FDA priority areas, there is not a one-to-one correspondence, so information on FDA priority areas is not readily available. According to NCTR officials, they began capturing information on the FDA priority areas in 2016. Further, OMH asks researchers to identify the unmet regulatory science need that their proposed research addresses, but the researchers are not asked to identify specific FDA priority areas. In addition, OCET and OIP officials told us that all of their funded projects are related to one FDA priority area, and therefore they do not collect information about other priority areas. However, there is no guarantee they will not fund projects related to other priority areas in the future.
Further, while OIP told us it focuses on just the global product safety priority area, the data it provided for our review also identified an additional priority area, information sciences, as being addressed by some of its projects. Three other centers and offices—CBER, CDER, and ORSI—collect some information on the FDA priority areas addressed by their targeted projects. However, CBER and CDER have not collected this information at the time of funding. Rather, CBER asks researchers to provide it as part of required annual reports, and CDER officials indicated that CDER has made a similar request of researchers since 2012. Officials from both CBER and CDER said that in the near future their centers plan to track the FDA priority areas at the time projects are proposed for funding. ORSI asks researchers to identify FDA priority areas for its broad agency announcements, which are competitive funding announcements for extramural research programs and accounted for 21 percent of ORSI’s funding for the projects that we reviewed. ORSI does not ask this of researchers applying for its other regulatory science funding programs, although the announcements for the intramural grants state that the proposals should align with one or more of the eight original priority areas. For these other funding programs, ORSI confirms that a project proposal is related to regulatory science, but does not document any specific FDA priority area for that project. While each center or office may be funding projects that are generally consistent with the priority areas that FDA established, without consistent information from each center or office detailing those connections, the agency is not able to examine obligations across specific priority areas. Standards for internal control in the federal government state that complete and accurate data are needed to make operating decisions and to allocate resources.
In addition, to ensure program goals are met, our work encourages agencies to manage efforts that cut across the agency. Without complete information on the allocation of funding across priority areas by centers and offices, FDA cannot ensure that funding is being distributed in line with its strategic plan. In response to our request, FDA identified projects targeted at regulatory science funded in fiscal years 2010 through 2014 totaling more than $507 million. Annual obligations for these projects generally increased during that time. This funding varied across centers and offices, ranging from approximately $450,000 to about $200 million. In addition, while FDA does not systematically track regulatory science obligations by priority area, the agency’s retrospective review showed wide funding variation across priority areas, ranging from about $3 million to approximately $203 million. FDA obligations targeted at regulatory science projects increased from about $73 million in fiscal year 2010 to a peak of about $123 million in fiscal year 2013, then declined to about $110 million in fiscal year 2014. These funds represent those obligations targeted at specific regulatory science projects and do not include FDA obligations for other activities benefitting regulatory science for which the agency was not able to quantify spending at a project level. (See fig. 2.) Over the 5 years we examined, FDA obligated nearly $507 million for 1,279 regulatory science projects, an average of approximately $400,000 per project. Total obligations for individual projects ranged from $430 to $9.1 million during this time. While FDA obligated funds for some projects for a single year, FDA typically obligated funds for projects in multiple years. FDA obligated 80 percent of these regulatory science funds to intramural projects—those led by FDA researchers.
For example, the Office of the Chief Scientist’s grant program includes five intramural grant programs; for these programs, FDA scientists first submit concept papers that are ranked and then submit full proposals that are peer-reviewed. The remaining 20 percent of these funds went to projects that were either extramural or a combination of intramural and extramural. For example:
- Broad Agency Announcements are extramural, competitive funding announcements supporting regulatory science research programs.
- Centers of Excellence in Regulatory Science and Innovation are extramural partnerships between FDA and universities to promote cross-disciplinary regulatory science training and research.
- The Critical Path Initiative started in 2004 to improve medical product development, evaluation, and manufacturing and is used to support intramural research and external collaborations. Most Critical Path Initiative projects were intramural, except for 10 projects that were both intramural and extramural.
Of the total funding targeted at regulatory science projects, 48 percent was obligated for projects awarded through a non-competitive award process, 39 percent through a competitive award process, and 13 percent through a combination of competitive and non-competitive processes. Officials reported that FDA has traditionally funded regulatory science projects with FDA general appropriations, but projects funded within CDER have also been supplemented by funds collected under user fee acts—specifically the Prescription Drug User Fee Act (PDUFA) and Generic Drug User Fee Amendments of 2012 (GDUFA)—that authorize the collection of funds from industry (including the pharmaceutical and biotechnology industries). PDUFA funds accounted for 2 percent of CDER’s total regulatory science obligations for fiscal years 2010 through 2014, ranging from 1 to 4 percent annually, with approximately $300,000 of PDUFA funds obligated each year for regulatory science.
Starting in fiscal year 2013, CDER committed to using a portion of its GDUFA funds for regulatory science. GDUFA funds accounted for 65 percent of CDER’s regulatory science obligations in fiscal years 2013 and 2014 combined, at about $17 million (70 percent) and $19 million (62 percent) in those years, respectively. (See fig. 3.) In fiscal year 2013, the addition of GDUFA funds more than doubled CDER’s annual obligations for regulatory science projects compared with fiscal years 2010 through 2012. GDUFA and PDUFA are the only two user fee programs that support targeted regulatory science obligations. In addition, FDA indicated that it funded other efforts benefitting regulatory science, but was unable to quantify spending at the project level. For example, CDER officials told us that funds targeted at regulatory science identified for our review represent only a portion of CDER’s investment in regulatory science. They added that lab programs, like those in the Office of Pharmaceutical Quality and the Division of Applied Regulatory Science within the Office of Translational Sciences, fund the bulk of their projects through the normal CDER budgeting process and therefore are not included among the targeted funds. One such CDER project focused on developing an analytical method that sponsors could use to demonstrate that proposed generic and brand-name forms of estrogen are chemically equivalent. The work resulted in new recommendations for some estrogen analyses, which were incorporated into a guidance document for the development of generic estrogen products. Of the nine centers and offices that obligated funds targeted at regulatory science from fiscal year 2010 through fiscal year 2014, total obligations ranged from approximately $450,000 by OMH to about $200 million by NCTR.
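The combined GDUFA share follows from the annual figures reported above; a minimal Python sketch (using the approximate dollar amounts from this report, in millions) cross-checks the arithmetic:

```python
# Illustrative cross-check of the GDUFA figures reported above.
# All dollar amounts (in millions) are approximate values taken from this report.
gdufa = {2013: 17.0, 2014: 19.0}   # GDUFA obligations for regulatory science
share = {2013: 0.70, 2014: 0.62}   # GDUFA share of CDER's annual obligations

# Back out CDER's total annual regulatory science obligations implied by each share.
totals = {year: gdufa[year] / share[year] for year in gdufa}

# Combined GDUFA share across fiscal years 2013 and 2014.
combined_share = sum(gdufa.values()) / sum(totals.values())
print(round(combined_share, 3))  # ~0.655, consistent with the reported 65 percent
```

Backing the totals out of the annual shares gives roughly $24 million (fiscal year 2013) and $31 million (fiscal year 2014) in CDER regulatory science obligations, so the combined GDUFA share works out to about 65 percent, matching the report's figure.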
The center and offices within the Office of the Chief Scientist—NCTR, OCET, OMH, ORSI, and OWH—accounted for 65 percent of FDA’s total obligations for projects targeted at regulatory science, with NCTR accounting for 60 percent of the Office of the Chief Scientist’s total obligations. Centers with regulatory responsibilities for medical products (CDER, CBER, and CDRH) accounted for 34 percent of the obligations, and OIP accounted for the remaining 1 percent. The average obligation per project by centers and offices ranged from just over $110,000 for OWH to about $1.1 million for OIP. However, the centers and offices that had the highest total obligations were not necessarily the ones that had the highest average obligation at the project level. For instance, OIP had the highest average obligations per project, more than $1 million, yet it had the second lowest total obligations, at about $5.4 million. (See fig. 4.) Similarly, total obligations associated with each FDA regulatory science priority area varied widely, ranging from about $3 million for projects that focused on global product safety to approximately $203 million for projects that focused on the toxicology priority area. (See fig. 5.) Projects that focused on the clinical evaluations and personalized medicine and medical countermeasures priority areas were among those with the greatest obligations. Average obligations per project ranged from about $250,000 for the projects that focused on the manufacturing and quality priority area to approximately $1.1 million for the projects that focused on the global product safety priority area. However, the FDA priority areas that had the lowest total obligations were not necessarily the ones that had the lowest average obligation at the project level. For example, projects that included a focus on global product safety had the lowest total obligations but they had the highest average obligation per project.
Similarly, projects that included a focus on social and behavioral science had the second lowest total obligations but the second highest average obligation per project. FDA’s centers and offices decide which projects to fund, and they generally obligated project funds across several regulatory science priority areas. Specifically, two-thirds of the centers and offices provided obligations to five or more priority areas. For example, CBER provided obligations to projects that collectively focused on every priority area except global product safety. Nevertheless, four centers and offices directed at least half of their regulatory science obligations to a single priority area:
- 100 percent of OCET obligations were for medical countermeasures.
- 62 percent of OIP obligations were for global product safety.
- 56 percent of NCTR obligations were for toxicology.
- 50 percent of OWH obligations were for clinical evaluations and personalized medicine.
While funding for each of the FDA priority areas generally came from a number of different centers and offices, for all but two priority areas, there was one center or office that accounted for the majority of obligations:
- Global product safety: 100 percent of the obligations from OIP.
- Toxicology: 82 percent of the obligations from NCTR.
- Social and behavioral science: 76 percent of the obligations from CDER.
- Medical countermeasures: 64 percent of the obligations from OCET.
- Manufacturing and quality: 56 percent of the obligations from CBER.
- Emerging technologies: 55 percent of the obligations from CBER.
- Information sciences: 49 percent of the obligations from NCTR.
- Clinical evaluations and personalized medicine: 43 percent of the obligations from NCTR.
FDA reported that the 17 regulatory science projects that we selected for review helped to advance regulatory science. (See appendix I for additional information about each of these 17 projects.)
For each of these 17 projects, FDA identified achievements that we classified as dissemination of project findings, internal changes at FDA, and changes by industry and groups outside of FDA. (See fig. 6.) Dissemination of project findings. For 16 of the 17 selected projects, FDA reported that it disseminated the project results in scientific publications, conferences, FDA workshops, or some combination of all three. Such dissemination provides FDA an opportunity to share new information and understanding internally and with the larger scientific community and may contribute to future regulatory science activities. Internal impacts. FDA described internal impacts resulting from each of the 17 selected projects, many of which related to advancing FDA’s scientific understanding in a particular area. For 12 selected projects, FDA reported that the projects resulted in new information about the topic that FDA was using or considering using for future work. For example, one goal of OCET’s project looking at the feasibility of using electronic health records in public health emergencies was advancing FDA’s understanding of the possibilities and limitations of using electronic health data. The findings from this project indicated that structured data in electronic health records could help the agency assess the risk of adverse events, particularly those that are severe. However, the results also indicated that this near real-time data still had some built-in delays and that the data search process cannot be fully automated. This provided FDA with information it can use as it considers using electronic health records in emergency situations. For 9 selected projects, FDA reported that the results led the agency to plan or conduct additional studies or activities that represent the logical next step in the particular area being studied. 
For example, initial results from an ORSI study of the use of social media to provide early signals of drug safety concerns showed a relationship between data obtained from social media and from FDA’s adverse event reporting system. Using that information, FDA then conducted a retrospective study for 10 safety concerns to see if there was evidence of those adverse events in social media before FDA became aware of them. FDA has since reported that the analysis identified specific limitations of the tool FDA uses in monitoring safety concerns. Other internal impacts were related to changes in agency practices. For 8 selected projects, FDA reported that the results led to the development of standards, methods, tools, or training for FDA internal use. For example, an OIP project was designed to create a tool to help secure global supply chains against the infiltration of counterfeit or substandard products. The project resulted in the production of a “roadmap” that FDA could use to develop such a system. For 5 selected projects, FDA reported that the results led to either a change in guidance or regulation or the decision to not make a previously proposed change. For example, CDER funded a study of surrogate endpoints that could speed the development of new therapies for breast cancer. FDA used results from this study to inform its development of guidance to industry that described study designs in which a surrogate endpoint may be accepted by FDA as reasonably likely to predict the clinical benefit of a drug. In response to a statutory requirement, CDER also examined whether quantitative information could be added to drug advertising to maximize consumer and health care professional understanding of the benefits and risks of the drug. 
Based on the results of the study, the Secretary of HHS concluded that quantitative information cannot be readily applied to many drugs and therefore it is not appropriate to issue regulations that would require such information to be added to promotional labeling or advertising. For 5 selected projects, FDA reported that the results led the agency to change aspects of its review process. For example, a CBER study designed to improve influenza vaccine efficacy provided information to FDA that has helped in the review of preclinical animal studies that are included in some drug applications. The study also resulted in FDA offering training for its lab members in an approach to vaccine testing that FDA says is a common part of a product review. In addition, an NCTR project developed a knowledge base about liver toxicity, which NCTR has used to advise CDER reviewers about drug-induced liver injury risk for products they were reviewing. External impacts. FDA described external impacts for 8 of the 17 selected projects. These projects resulted in the development of tools or proposed standards for use by industry or the implementation or use of new tools or standards by industry or outside organizations. According to FDA, it can take several years for funded research to result in these types of tools. For 7 selected projects, FDA reported that the results led to the development of tools or proposed standards for use by industry. For example, a CDRH study of radio interference with automated external defibrillators led FDA to recommend to an international commission that standards for these types of defibrillators be modified to account for radio interference. Similarly, FDA officials reported that results from a study funded by OWH looking at degradation of absorbable polymers used in some cardiac stents have provided guidance for industry on the design, manufacturing, and regulation of these absorbable stents.
For 6 selected projects, FDA reported that industry or outside organizations have made changes based on the results of these FDA-funded projects. For example, FDA told us NCTR’s studies of Bisphenol A, a chemical found in certain plastics, helped FDA and the European Food Safety Authority resolve public safety concerns. Similarly, CDRH’s study of total disc replacement devices for the spine led to the development of a test guide published through the American Society for Testing and Materials International that is used by multiple manufacturers. FDA told us that the results from these tests were then part of the manufacturers’ submissions for approval of these devices. For several years, FDA has been aware of the need to improve its scientific base and has established multiple regulatory science initiatives, as well as prioritized areas to address that need. FDA projects targeted at advancing regulatory science have led to internal and external impacts in understanding new science associated with medical products. However, the agency has not identified measurable goals in its strategic plans or reports on strategic priorities, such as specific targets and time frames, for regulatory science. Such goals are a best practice for strategic planning and could enable FDA to assess and report its progress in addressing its identified priority areas and strategically plan and allocate resources for its broader regulatory science initiative. The agency faces another obstacle to strategic planning because it lacks consistent information about how centers and offices distribute targeted regulatory science funding among those identified priority areas. The individual centers and offices that decide which projects to fund are either tracking the priority areas in different ways or not at all.
Our prior work encourages the proactive collection of consistent information, and standards for internal control recommend that federal agencies have complete and accurate data for making funding decisions. Systematic tracking by each center or office is needed for the agency to examine obligations across, or progress within, specific priority areas and would help the agency to strategically plan for its regulatory science initiative as a whole. In order to improve FDA’s strategic planning for regulatory science efforts, we recommend the Secretary of Health and Human Services direct the Commissioner of FDA to take the following two actions:
1. Develop and document measurable goals, such as targets and time frames, for its regulatory science efforts so it can consistently assess and report on the agency’s progress in regulatory science efforts.
2. Systematically track funding of regulatory science projects across each of its priority areas.
We provided a draft of this report to HHS. HHS concurred with our recommendations and provided written comments, which are reprinted in appendix II. In its written comments, HHS agreed with the importance of strategic planning for regulatory science. HHS concurred with our recommendation that FDA should develop and document measurable goals; HHS suggested that agency documents with a targeted focus, such as user fee commitment letters and specific planning documents, are a more appropriate place for such goals than an agency-level strategic plan. In our recommendation to HHS, we do not specify where such goals should be documented. We recognize, as HHS noted in its comments, that advancing regulatory science is an uncertain and nonlinear path that can make it challenging to set targets for specific accomplishments.
Nevertheless, FDA should develop measurable goals that are related to the impacts that are discussed in HHS’s comments, including the effectiveness and efficiency of FDA’s regulatory review, new pathways for medical product development, enhancements in the agency’s ability to provide useful guidance to sponsors, and new technologies to monitor manufacturing and real-world use of approved medical products. As all but one of the agency-wide priority areas are being addressed by projects funded by multiple centers and offices, it is important that FDA develop and document measurable goals that encompass the efforts of multiple centers and offices. HHS also concurred with our recommendation to systematically track funding across FDA’s regulatory science priority areas, and the department identified recent and planned activities of specific centers to improve such tracking. We support these efforts and reiterate the importance of FDA systematically tracking funds agency-wide for each of the priority areas it developed. Systematic tracking of both progress on measurable goals and funding is essential for FDA to strategically plan its regulatory science initiative across the agency. In addition to these general comments, HHS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
We examined the achievements related to regulatory science for 17 projects funded by the Food and Drug Administration (FDA). Below is a brief description of each project, including examples of achievements resulting from that project, according to information provided by FDA.
Title: Created new approaches to identify and understand critical product quality attributes of complex products, such as stem cell-derived products (both animal and human) and complex systems of medical devices.
Center or office funding project: Center for Biologics Evaluation and Research (CBER)
Priority area(s) covered: Emerging technologies and medical countermeasures
Funding obligations and time period of funding: $4.513 million (fiscal years 2010 through 2014)
Description of project: Certain stem cells are being used in clinical trials for multiple medical conditions. However, multiple factors, including donor variation and culture conditions, may affect the clinical performance of the stem cells in terms of safety, efficacy, or both. This project was designed to identify product attributes that correlate with specific outcomes and to increase understanding of factors that might affect stem cell-derived product safety and efficacy. This project also included the development of methods to quantify various attributes of these cells.
Select achievements reported by FDA: The project’s findings affected FDA reviewers’ understanding of stem-cell products. Further, the findings have helped producers of stem cell products with their studies; FDA noted that some sponsors have adopted quantitative methods for assessing the stem cells. The findings were important in the scientific reasoning in two guidance documents. Additional projects have been planned to further advance understanding in this area.
Title: Developing new approaches for measuring the quality of next-generation smallpox vaccines.
Center or office funding project: CBER Priority area(s) covered: Medical countermeasures Funding obligations and time period of funding: $0.537 million (fiscal years 2010 through 2014) Description of project: The goal of the project was to evaluate factors that affect the safety and efficacy of smallpox vaccines and to develop new methods to evaluate smallpox vaccine quality—specifically, potency. The studies were aimed at supporting the development of new smallpox vaccines. Select achievements reported by FDA: Results from the project informed FDA's understanding of various characteristics of the vaccine, as well as factors that influenced immune responses to the vaccine. FDA also reported that one of the two alternative approaches for evaluating the potency of smallpox vaccines used in clinical trials was considerably faster than traditional methods and could be adapted for future use. In addition, further studies of other tests related to smallpox vaccines are ongoing. Title: Correlates of protective immunity against influenza. Center or office funding project: CBER Priority area(s) covered: Manufacturing and quality, emerging technologies, and medical countermeasures Funding obligations and time period of funding: $1.209 million (fiscal years 2010 and 2011) Description of project: This project was designed to support the development of seasonal and pandemic influenza vaccines by identifying mechanisms that contribute to immunity and developing measures of those responses. Select achievements reported by FDA: This project resulted in the development of a new method that was useful for testing influenza vaccines, the discovery of certain characteristics that are important for producing protective immunity, and the determination of the amount of a vaccine ingredient that is needed to produce immunity.
FDA reported that the project provided a foundation for training lab members in the development and validation of an approach to vaccine testing, while also providing the basis for an international study led by CBER to assess the reproducibility of one of the methods developed in this project. Title: Evaluating a surrogate endpoint that could speed development of new therapies for breast cancer. Center or office funding project: Center for Drug Evaluation and Research (CDER) Priority area(s) covered: Clinical evaluations and personalized medicine Funding obligations and time period of funding: FDA could not determine funding as the research was conducted as part of FDA employees' regular responsibilities. Description of project: FDA conducted research to assess the validity and potential applications of a surrogate endpoint—a measure that can predict, but is not itself a measure of, benefit—in trials of treatments for women with breast cancer. By collaborating with an international working group, FDA researchers were able to use data from more than 12,000 patients. Select achievements reported by FDA: FDA reported that researchers found a potential relationship between the surrogate endpoint and survival. This provided important information to drug developers for the design of future drug trials. It also informed the development of guidance to industry. Title: Completed three studies and a literature review assessing whether quantitative information could be successfully added to television and print advertisements to maximize audience understanding of benefit information, including the type of benefit information, different combinations of statistical format, and different graphic representations.
Center or office funding project: CDER Priority area(s) covered: Social and behavioral science Funding obligations and time period of funding: $0.270 million (fiscal year 2010) Description of project: This project was composed of four studies designed to investigate whether quantitative information in direct-to-consumer advertisements is helpful for consumers. FDA was asked by Congress to investigate this topic to determine whether such information about the benefits and risks of prescription drugs in a standardized format would improve health care decision-making by clinicians, patients, and consumers. Select achievements reported by FDA: FDA reported that, based on these studies and other efforts, the inclusion of certain types of quantitative information can be helpful in some limited circumstances, but a standardized format cannot be readily applied to many drugs. It was therefore not appropriate to issue new regulations that would require such information on promotional labeling or print advertising. The findings have been used internally by an FDA working group that explores direct-to-consumer advertising. FDA reported that the findings have also been used by the external research community. Title: Experimental study of patient information prototypes. Center or office funding project: CDER Priority area(s) covered: Social and behavioral science Funding obligations and time period of funding: $1.613 million (fiscal year 2010) Description of project: This project examined different methods of presenting prescription drug information to patients who obtained a prescription. The study was designed to compare the format in which information is presented, whether additional context is included, and the order of information about warnings versus information about the efficacy of the drug.
Select achievements reported by FDA: The study provided information about patient preferences for and increased comprehension of single-page prototypes over the currently available format. These findings have informed FDA's development of the Patient Medication Information Initiative, which will consider a new regulation to require all prescription drugs to have a single document standardized in content and format that provides prescription information to patients in an accurate, easily understood, and balanced form. Title: Development and validation of a standard test method to assess for impingement of artificial total disc replacement devices in order to provide a scientific basis for regulatory guidance and better predict which devices will be clinically successful. Center or office funding project: Center for Devices and Radiological Health (CDRH) Priority area(s) covered: Clinical evaluations and personalized medicine and emerging technologies Funding obligations and time period of funding: $0.753 million (fiscal years 2010 through 2012) Description of project: The primary goal of the project was to develop a new scientific tool to characterize impingement—unintended contact between surfaces of the device—of total disc replacement devices. Impingement can be linked to premature mechanical device failure and was not accounted for in preclinical test methods. As a result, bench testing was not accurately mimicking the wear and damage that was being observed clinically. Select achievements reported by FDA: The project resulted in the development of an impingement test guide published through a professional association. As a result, multiple device manufacturers used the guide to perform impingement testing, and FDA incorporated those results into its decision making on premarket approval applications and investigational device exemption submissions.
Title: Developed standards to reduce the risk of misconnection between different types of small-bore connectors used for intravenous, feeding, neural, blood pressure cuff, and breathing system tubes to prevent serious adverse events. Center or office funding project: CDRH Priority area(s) covered: Manufacturing and quality Funding obligations and time period of funding: FDA could not determine funding as the research was conducted as part of FDA employees' regular responsibilities. Description of project: Because devices using small-bore connectors have been accidentally connected with devices that have different functions and have led to serious adverse events for patients, including deaths, FDA participated in international efforts to standardize connector designs for specific medical applications such that they cannot be interconnected with a device for another medical application. Select achievements reported by FDA: FDA has issued guidance on premarket recommendations for devices that use small-bore connectors intended for use in the gastrointestinal tract. International standards are also being finalized based on this work. Device manufacturers are modifying their devices accordingly. In addition, FDA has developed a website for highlighting relevant information for stakeholders. Title: Created a general testing protocol and test methods for electromagnetic compatibility of automated external defibrillators. Center or office funding project: CDRH Priority area(s) covered: Emerging technologies Funding obligations and time period of funding: FDA could not determine funding as the research was conducted as part of FDA employees' regular responsibilities. Description of project: A growing number of adverse events and voluntary recalls by manufacturers of automated external defibrillators led FDA to study the effect of electromagnetic interference that had been related to potentially life-threatening failures of this device.
Select achievements reported by FDA: As a result of this project, FDA developed test methods to evaluate the susceptibility of automated external defibrillators to radiofrequency interference. FDA recommended that an international commission's standards be modified to account for interference testing at certain frequencies. Title: Physiologically based pharmacokinetic models for Bisphenol A. Center or office funding project: National Center for Toxicological Research (NCTR) Priority area(s) covered: Toxicology Funding obligations and time period of funding: $1.049 million (fiscal years 2010 through 2014) Description of project: Due to concerns about the safety of Bisphenol A, which is used in many consumer plastic products, NCTR developed computational modeling to simulate infant exposure to Bisphenol A to provide FDA with information necessary to complete a safety assessment. Select achievements reported by FDA: The results of the project allowed FDA to predict how much chemical remained after being metabolized and would get into the circulatory system of adults and infants. These models were incorporated into FDA's updated assessment of Bisphenol A and allowed FDA's Center for Food Safety and Applied Nutrition, as well as other regulatory bodies, to determine that current uses of Bisphenol A are safe for infants and adults and, further, led the agency to conclude that the traditional safety assessment methods used were overly conservative. Title: Development of liver toxicity knowledge base to empower the FDA review process. Center or office funding project: NCTR Priority area(s) covered: Toxicology and information sciences Funding obligations and time period of funding: $3.426 million (fiscal years 2010 through 2013) Description of project: Drug-induced liver injury is a serious safety concern that is a frequent cause of denied approvals and "black box" warnings on drugs.
As a result, FDA was interested in developing a database to improve its understanding and prediction of such liver injury. Select achievements reported by FDA: The project has produced a centralized resource of data and predictive models that are useful for both research and regulation. NCTR has trained CDER reviewers to effectively use the software from this database. NCTR also has received five consultation requests from CDER to assess the risk of products under review and to incorporate the software into training for new reviewers. An extension of the project is also currently under review. Title: Assessing the feasibility of using electronic health record systems to conduct near real-time monitoring of health outcomes, including serious or unexpected adverse events associated with medical countermeasures used during public health emergencies. Center or office funding project: Office of Counterterrorism and Emerging Threats (OCET) Priority area(s) covered: Medical countermeasures Funding obligations and time period of funding: $1.419 million (fiscal year 2013) Description of project: The project was designed as a proof-of-concept feasibility study to learn whether it is possible to extract adverse event data from electronic health records and, if so, whether those data would provide useful information about the safety and effectiveness of medical countermeasures during a public health emergency. Select achievements reported by FDA: FDA learned that electronic health records data could help inform a risk assessment of medical countermeasures based on adverse events; however, there are limits to what can be done. For example, it was feasible to detect severe adverse events, but less severe adverse events were likely to be underreported. Title: Sentinel initiative for surveillance of drugs, vaccines, and blood products used to prevent and treat pandemic influenza.
Center or office funding project: OCET Priority area(s) covered: Medical countermeasures Funding obligations and time period of funding: $9.1 million (fiscal year 2011) Description of project: Sentinel is FDA's system for conducting near real-time active safety surveillance of FDA-regulated medical products through routinely collected electronic healthcare data. This project was designed to expand those capabilities to include preparedness for safety surveillance in response to the use of medical countermeasures, including for pandemic influenza. Select achievements reported by FDA: This project created the capability for FDA to monitor the safety of medical countermeasures—for example, influenza vaccines used during an emergency, such as a pandemic. It also increased the efficiency of linkages between immunization registries and the Sentinel database, which FDA says is vital during pandemics. Title: Collecting spectral information on foods, pharmaceutical ingredients, and formulated products (as well as their packaging materials) to establish a comprehensive spectral library accessible through the Internet. Center or office funding project: Office of International Programs (OIP) Priority area(s) covered: Information sciences Funding obligations and time period of funding: $0.314 million (fiscal year 2013) Description of project: The primary goal of this project was to develop a roadmap for the development of a global library that could potentially be used to protect consumers against fraudulent and adulterated products. Select achievements reported by FDA: A roadmap was produced from this project, and FDA intends to use it in discussions with domestic and foreign stakeholders, including other regulatory agencies and manufacturers. Title: Explored the potential for mining social media and other web sources to detect adverse event and safety signals.
Center or office funding project: Office of Regulatory Science and Innovation (ORSI) Priority area(s) covered: Information sciences Funding obligations and time period of funding: $0.801 million (fiscal years 2012 through 2014) Description of project: This study evaluated the validity and trustworthiness of using a social media data mining tool to detect drug safety events. It was also designed to evaluate the potential of social media data to provide early signals of drug safety events in postmarketing surveillance and to better understand how these data can be used. Select achievements reported by FDA: FDA initially reported that the data mining tool suggested that user-generated data sources may identify signals not found in the traditional voluntary reporting systems and that there was agreement between data obtained from this tool and data obtained from the FDA Adverse Event Reporting System. FDA conducted an additional study to evaluate whether there was evidence in social media for 10 recent MedWatch Safety Alerts before FDA became aware of them. FDA has since reported that monitoring social media did not provide early signals of safety concerns for the medical products monitored in the study. The analysis identified specific limitations of the tool FDA used to monitor safety concerns. These limitations related to the natural variability of data sources and the difficulty of conducting accurate evaluations of the data. FDA noted that because monitoring social media for safety concerns is a new approach, the agency still needs to establish best practices in order to use it effectively. Title: Consortium for tuberculosis biomarkers.
Center or office funding project: ORSI Priority area(s) covered: Clinical evaluations and personalized medicine Funding obligations and time period of funding: $1.425 million (fiscal years 2010 through 2012) Description of project: This project had three main objectives: 1) create protocols, processes, and standards by which a consortium for tuberculosis biomarkers—composed of three organizations central to tuberculosis clinical drug development—would operate; 2) create a repository for receiving, storing, and shipping samples to designated investigators; and 3) establish a peer review panel to review proposals related to the discovery and qualification of tuberculosis biomarkers, especially surrogate markers. Select achievements reported by FDA: The consortium established protocols for key data elements to be gathered; a consensus set of operating procedures for sample collection, processing, and storage; and quality assurance and monitoring for these activities. The consortium also adopted a peer review process to review applications for access to samples. Title: Sex-based differences in the molecular mechanisms of polymer degradation in drug-eluting stents. Center or office funding project: Office of Women's Health (OWH) Priority area(s) covered: Clinical evaluations and personalized medicine and manufacturing and quality Funding obligations and time period of funding: $0.2 million (fiscal years 2011 through 2012) Description of project: This study explored how materials used in biodegradable stents break down and examined the potential effect of sex on the degradation of these materials. Select achievements reported by FDA: FDA reported that the study found that stent material breakdown varied in different tissues. The findings provide information that can guide FDA and the larger community in the design, manufacture, and evaluation of absorbable stents.
FDA reported that this research will support the development of guidance for implants containing certain absorbable components.

Marcia Crosse, (202) 512-7114 or crossem@gao.gov. In addition to the contact named above, William Hadley, Assistant Director; Carolyn Garvey; Sandra George; Cathleen Hamann; Carolyn Feis Korman; and Deborah Linares made key contributions to this report.

FDA has faced challenges regulating medical products, owing in part to rapid changes in science and technology. In 2010, FDA established a regulatory science initiative that identified eight priority areas for medical products where new research was needed to advance its mission. Legislation enacted in 2012 required FDA to establish a plan for measuring its progress on its regulatory science efforts. GAO was asked to examine FDA's progress on its regulatory science efforts related to medical products. In this report, GAO (1) evaluates FDA's strategic planning efforts to address its regulatory science priorities, (2) describes FDA's funding targeted at regulatory science projects, and (3) describes the achievements of selected FDA regulatory science projects. GAO compared related FDA strategic planning documents to federal internal control standards and leading practices for strategic planning. GAO reviewed FDA data on obligations targeted at regulatory science projects for fiscal years 2010 through 2014 and reviewed the achievements FDA reported from a sample of 17 projects, chosen to ensure that nine FDA centers and offices and the priority areas are represented.

The Food and Drug Administration (FDA) lacks measurable goals to assess its progress in advancing regulatory science—the science supporting its effort to assess the products it regulates.
The agency issued strategic planning documents in 2011 and 2013 to guide its regulatory science efforts and identify priority areas for conducting work, but these documents do not specify the targets and time frames necessary for the agency to measure progress overall or within each of the eight priority areas related to medical products. According to leading practices for strategic planning, identifying and using consistent measurable goals in planning and progress documents is important to assessing effectiveness. While FDA cited examples of its achievements in regulatory science in a 2015 report, FDA cannot assess how those achievements constitute progress towards its goals. In addition, FDA lacks information about how funding targeted at regulatory science is distributed across the priority areas. Decisions to award these funds are made by individual FDA centers and offices, which generally did not collect information on the associated priority areas of funded projects. Rather, FDA retrospectively identified these areas for the purpose of GAO's review. The lack of consistent information limits FDA's ability to examine obligations across, or progress within, specific priority areas. Standards for internal control in the federal government state that complete and accurate data are needed to make operating decisions and allocate resources. Furthermore, multiple centers or offices fund projects toward a given priority area and leading practices for strategic planning encourage agencies to manage efforts that cut across the agency. For the 17 regulatory science projects GAO reviewed, FDA identified achievements ranging from the dissemination of project findings to changes in both agency and external stakeholder practices. For example, FDA reported that all projects resulted in some type of change within FDA. 
About half of the projects resulted in the agency developing standards, methods, tools, or training that it could use internally, and about one-third of the projects affected guidance or regulations. FDA also reported that about half of the projects resulted in the development of new tools or standards for use by industry or other stakeholders, in areas such as setting new standards for defibrillators to account for radio interference. To improve strategic planning for regulatory science efforts, FDA should (1) develop and document measurable goals, including targets and time frames, and (2) systematically track funding across its regulatory science priority areas. The Department of Health and Human Services agreed with GAO's recommendations. |
The Los Alamos National Laboratory, located in New Mexico, is charged with enhancing the security of nuclear weapons and nuclear materials worldwide. On Thursday, May 4, 2000, Bandelier National Monument workers in the Cerro Grande Mountain area set fire to a portion of the monument’s land to thin uncontrolled forest growth. The fire rapidly grew out of control, and during the 2-week period that followed, over 47,000 acres of national forest, county, pueblo, and laboratory land burned. The laboratory later reported that 8,000 acres of its land had been damaged, 39 structures had been destroyed, and almost $130 million in fire-related costs had been incurred. The laboratory was officially closed from May 8 until May 22 but, according to Los Alamos officials, remained in a state of emergency because of damage caused by the fire and the threat of flooding until August 2000. After the fire, the laboratory’s Cerro Grande Rehabilitation Project office contacted divisions that had lost equipment in the fire and required that they submit detailed lists of their losses to obtain the release of fire recovery funds from DOE. Seven divisions indicated that they needed a total of $13.2 million in fiscal year 2000 and $15 million in subsequent years to recover from the fire. Each division provided information on the equipment that had been damaged or destroyed by the fire, the estimated cost of replacement equipment, and the actual cost of replacement equipment that had already been purchased. The equipment needing replacement included desktop and laptop computers, printers, cameras, office furniture, scientific equipment, and related supplies. The laboratory, in general, purchases equipment using several procurement methods. Each method is intended to obtain goods and services at the lowest cost, taking into account the cost of procurement administration. One such method is the laboratory’s just-in-time subcontracting program. 
This program, according to laboratory officials, allows personnel to obtain products from prequalified suppliers at discounted prices, usually within 24 hours of order placement. Orders and payments are processed electronically, thereby eliminating the need for involvement from the procurement staff. Other procurement methods used by the laboratory include the purchase card program, wherein a credit card is used, and purchase orders. Through the purchase card program, laboratory personnel may order supplies and equipment through the Internet or other available sources of supply. Laboratory officials told us that the laboratory does not track the total cost of purchases of $25,000 or less made collectively through its just-in-time subcontracting program, purchase card program, and purchase orders. However, during fiscal year 2000, Los Alamos’ procurement staff processed over $46 million in individual purchase orders of $25,000 or less for goods and services, including personal computers, printers, digital cameras, and related equipment and supplies. The DOE Office of Inspector General has issued at least one report on computer acquisitions. Specifically, in 1997, the DOE Office of Inspector General performed an audit of desktop computer acquisitions at the Idaho National Engineering and Environmental Laboratory. The Inspector General’s report indicated that, in order to reduce costs, DOE’s Idaho contractor had formally studied its desktop computer acquisition practices and estimated that establishing a mandatory performance standard for computers would result in millions of dollars in savings per year. On the basis of this study, DOE’s Idaho contractor established a mandatory computer performance standard at the site. The Inspector General reported that DOE’s Idaho contractor could further improve its computer acquisition practices by using alternative supply sources, such as GSA, Small Business Administration contracts, or other desktop computer vendors. 
We found no similar DOE reviews regarding the acquisition of laptop computers, computer printers, or digital cameras for other DOE sites. The Los Alamos contractor probably could have saved money by expanding its possible supply sources. Our review showed that Los Alamos paid nearly the full retail price or more for many of the items. If Los Alamos had used more supply sources, it could have saved about 25 percent on certain items. Supply sources that could have been used include GSA and more suppliers that advertise over the Internet. Recent literature suggests that using the Internet to expand supply sources and compare prices can produce savings. Los Alamos officials indicated that the laboratory has been using the Internet but acknowledged that more enhancements in Internet procurement were possible. Because of the difficulty in getting detailed price and product information for the time period the purchases were made, we reviewed only 17 items purchased by the laboratory from May through July 2000 (see app. I). We determined the manufacturer’s suggested retail price (retail price) for 12 of the items: 5 desktop computers, 1 laptop computer, 4 printers, and 2 digital cameras. Of the 12 items, Los Alamos received discounts from the suppliers it used on only 5. In five cases, Los Alamos paid nearly the retail price for the items. In two cases, Los Alamos paid more than the retail price. In addition to comparing Los Alamos’ purchase prices with retail prices, we also identified individual suppliers that could have provided certain of the items at a cost below that paid by Los Alamos. For example, Los Alamos could have saved about 25 percent in some cases if it had used other sources. Los Alamos purchased the 17 items from three local New Mexico vendors (one of which was a just-in-time contractor), one Internet vendor, and one computer manufacturer. 
The laboratory did not attempt to purchase the equipment items through GSA’s Internet shopping site or from other vendors that advertise their equipment over the Internet. Laboratory officials told us that, for the items reviewed, they felt most comfortable dealing with companies they had done business with in the past. Because historical prices for computer and electronic equipment are not readily available, it was difficult to determine what Los Alamos could have paid for all of the 17 items we reviewed if it had used other vendors. We were able, however, to develop price comparisons for 4 of the 17 items: 3 printers and 1 digital camera. The total Los Alamos purchase price for the four items was $2,677, but these items would have cost $2,000 if purchased at that same time from GSA or from suppliers that advertise over the Internet, a savings of 25 percent. Although this sample is small, it shows that expanding supply sources could save money. In commenting on this information, Los Alamos officials said that the most expensive of the four items, a digital camera costing about $1,300 and purchased 8 days after the laboratory reopened, was needed immediately to document the fire damage and was purchased from a local vendor at a discounted price. However, we found that this camera could have been purchased directly from the manufacturer at any time after the fire for about $974 and received within 2 days with no shipping cost. Recent literature suggests that using the Internet to expand supply sources and compare prices can produce savings. For example, according to an article in the November 2000 issue of Public Management, Internet procurement offers a significant opportunity to cut costs, increase organizational effectiveness, and improve customer service. Internet procurement, as described in the article, allows agencies to search for products and services from available suppliers and determine best prices, product availability, and shipping costs. 
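The savings figures cited above follow from straightforward percentage arithmetic on the reported prices; a minimal sketch, using only the dollar amounts stated in this report:

```python
# Percentage savings implied by the prices reported in this section:
# what Los Alamos paid versus the lower price available elsewhere.
comparisons = {
    "four items (3 printers, 1 digital camera)": (2677, 2000),
    "digital camera bought direct from the manufacturer": (1300, 974),
}

for item, (paid, alternative) in comparisons.items():
    savings = (paid - alternative) / paid * 100
    print(f"{item}: paid ${paid:,}, available for ${alternative:,} "
          f"-> about {savings:.0f}% savings")
```

Both comparisons work out to roughly 25 percent, consistent with the savings estimate in the text.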
Although Los Alamos used the Internet to make many of its purchases, it did not use it to compare prices from available suppliers. Officials of the laboratory said it has been using the Internet but acknowledged that more enhancements in Internet procurement were possible. Los Alamos contracting officials further said that their contract with DOE encourages but does not require using GSA to purchase equipment and that they did not consider using GSA for their replacement purchases. One Los Alamos procurement assistant who was responsible for procuring many of the equipment items included in our review indicated that she was not aware that GSA had an online shopping site. In response to our review, Los Alamos officials said the laboratory would give greater consideration to using GSA for its future equipment purchases. Specifically, these officials indicated that people who use purchase cards now receive training on how to use GSA Advantage! and will be encouraged to use GSA as an alternative to the laboratory's just-in-time program when appropriate. The Los Alamos contractor could save money by establishing mandatory performance standards for computer and computer-related equipment. DOE's contractor at the Department's Idaho laboratory reported that mandatory standards for computers resulted in cost savings at that laboratory. Neither Idaho nor Los Alamos has developed performance standards for computer printers, digital cameras, or other related equipment. However, consideration of such standards could provide additional opportunities for cost savings. According to the Office of Inspector General's report on computer acquisitions at DOE's Idaho laboratory, the contractor there determined that millions of dollars in cost savings were possible if mandatory performance standards for purchasing such equipment were implemented.
The computer performance standards in question refer to such things as the speed of the microprocessor, the size of the random access memory, and the size of the hard drive. Before October 1994, DOE’s Idaho laboratory had no sitewide standard to govern the acquisition of desktop computer systems. To address this issue, the laboratory contractor formed a working group consisting of representatives from all laboratory departments to study the situation. The working group developed a specific computer standard and recommended that it be established laboratorywide. Anticipated benefits included, for example, lower computer support costs and fewer training expenses. The laboratory contractor required all departments to comply with the standard. The contractor also adopted and implemented a policy that stipulates, in part, that only the contractor’s information resources management director can approve deviations from the standard. Because DOE’s Idaho contractor reported cost savings at that laboratory, using mandatory performance standards may represent a best practice that could be used by Los Alamos. At Los Alamos, the contractor has developed minimum voluntary performance standards for its desktop and laptop computer acquisitions, but no maximum standards. Also, unlike Idaho, Los Alamos has no requirement that purchases above the standard receive formal management review and approval. According to Los Alamos contracting officials, whenever an employee requests a new computer system, that request is reviewed by a supervisory official, but the review is not formally documented. Of the 17 equipment items we reviewed, 9 were desktop or laptop computers. All nine computers had performance capabilities that exceeded Los Alamos’ minimum voluntary standards. For example, one voluntary standard for laptop computers is having a hard drive of 6.4 gigabytes. All three laptop computers in our sample had hard drives of 12 gigabytes or more. 
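A check of this kind could be automated. The sketch below is purely illustrative (the record layout, function name, and exact drive sizes are hypothetical); it uses the 6.4-gigabyte voluntary laptop standard cited in this section and flags any request that exceeds it — the kind of exceedance that, under a mandatory standard like Idaho's, would be routed for formally documented management review:

```python
# Hypothetical equipment-request records; only the hard drive standard
# (6.4 GB, the voluntary laptop minimum cited in this report) is modeled.
LAPTOP_HDD_STANDARD_GB = 6.4

def requests_exceeding_standard(requests):
    """Return the requests whose hard drive size exceeds the standard
    and would therefore require documented management approval."""
    return [r for r in requests if r["hdd_gb"] > LAPTOP_HDD_STANDARD_GB]

# The three laptops in the report's sample all had drives of 12 GB or more
# (exact sizes here are illustrative).
sample = [
    {"id": "laptop-1", "hdd_gb": 12.0},
    {"id": "laptop-2", "hdd_gb": 12.0},
    {"id": "laptop-3", "hdd_gb": 12.0},
]
flagged = requests_exceeding_standard(sample)
print(len(flagged))  # prints 3: every laptop in the sample exceeded the standard
```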
Because there is no requirement to document instances in which capabilities exceed Los Alamos’ voluntary minimum standards, we could not determine if the enhanced performance capabilities and extra cost associated with these laptop computers were justified. Neither Idaho nor Los Alamos has developed performance standards for computer printers, digital cameras, or other related equipment. However, on the basis of our review, such standards may be beneficial. For example, one equipment item we reviewed was a printer for which Los Alamos paid more than $1,400. Because of its unique capabilities, such a printer is normally used to meet the printing needs of a group of individuals connected to the same network server. In this case, however, the printer was being used primarily by one technical staff member and one part-time contractor who was in the office about one-third of the time. Neither individual needed a printer with unique capabilities. Other technical staff members we interviewed had printers for their personal use with lesser speed capabilities that cost between $280 and $700. In addition, we noted that the clarity and resolution of the $700 printer were similar to those of the $1,400 printer, but that the $700 printer had less memory. Printer memory, however, is an issue only when a large number of employees are queuing up for printing simultaneously. The Los Alamos contractor could save money if it increased its use of a standard brand of computer and computer-related equipment. DOE’s contractor at its Idaho laboratory determined that it could achieve considerable cost savings by limiting the various brands and models of desktop computers it purchased. Because of these reported cost savings, such limitations may be a best practice that could be used by Los Alamos. In contrast to Idaho, Los Alamos generally allows various brands and models of the same equipment to be purchased. 
Before 1995, according to a report by DOE’s contractor at its Idaho laboratory, that laboratory allowed many different computer systems to be purchased. The contractor’s report indicated that this had created a range of problems: higher costs for maintenance, support, and training; difficulties in communicating through electronic messaging and using shared files; and problems in operating among work platforms and programs. Therefore, when Idaho established its standard for desktop computers, the contractor took the standard one step further and charged its procurement division with selecting a computer model that, on the basis of cost, reliability, serviceability, and other factors, would be in compliance with the standard. A cost-benefit analysis showed that cost savings ranging from $5 million to $10 million could be achieved over 10 years if the proposed standard was implemented. Subsequently, the procurement division awarded a contract to a single vendor to provide one specific brand of network and laptop computers and one specific brand of desktop computers. At Los Alamos, in general, no similar limitations on desktop and laptop computer acquisitions exist. As a result, the contractor can purchase different brands and models of computers. For instance, the six desktop computers we reviewed were all different brands or models, and the three laptop computers were all different brands. These computers can also vary in price. For example, one replacement desktop computer cost about $2,900, while a different brand computer with enhanced capabilities cost about $2,600. According to Los Alamos contracting officials, the laboratory’s employees had different brands and models of equipment before the fire. The items purchased were intended to be nearly identical replacements for the ones that had been destroyed by the fire. 
Los Alamos officials also told us that uniformity in computers across the entire laboratory would not meet the needs of the diverse applications and functions involved in experimental work. These officials indicated, however, that some of the laboratory’s more than 40 organizations have begun using a standard brand of computer to meet their specific requirements. We determined that two Los Alamos divisions—Business Operations and Facility and Waste Operations—have begun using a standard brand of computers, which has dramatically reduced support costs. However, Los Alamos has not formally evaluated the feasibility of adopting this approach for more of its organizations. While the scope of our review was limited, it raised the possibility that significant savings could be realized at Los Alamos by adopting revised procurement practices. If Los Alamos expanded its use of the Internet and thereby considered a broader spectrum of supply sources, including GSA, significant savings could be possible. Additional savings might also be possible if Los Alamos adopted the best practices being reported at Idaho. For example, if Los Alamos established mandatory performance standards for computer and computer-related equipment purchases, savings could probably be realized by avoiding purchasing higher-priced equipment that exceeds the needed capabilities. Furthermore, if Los Alamos limited the number of brands and models of the same equipment it purchased, as at Idaho, savings could be realized from volume discounts associated with making multiple purchases of the same equipment item and from lower support costs. DOE’s Idaho contractor reported that these practices have resulted or likely will result in cost savings.
To improve the economy of equipment purchases at the Los Alamos National Laboratory, we recommend that you direct the contractor at Los Alamos to develop policies and procedures that encourage greater consideration of additional supply sources, including GSA and suppliers that advertise over the Internet; establish, to the extent practicable, mandatory performance standards for computer and computer-related equipment; and evaluate, in light of the reported savings at two Los Alamos divisions, the feasibility of having more of its organizations use a standard brand of computer and computer-related equipment. We provided a draft copy of this report to DOE for its review and comment. DOE stated that the overall finding of potential cost saving opportunities and the three associated recommendations contained in the report merit additional management attention. DOE indicated that it was directing Los Alamos to undertake specific actions in response to each of the recommendations. While generally agreeing with our recommendations, DOE pointed out that most of the procurements in question were made during a regional disaster, and that DOE places a high value on supporting regional socioeconomic development. In addition, DOE stressed in its comments that best value includes aspects other than lowest possible advertised cost. Further, DOE indicated that mandatory performance standards for computer and computer-related equipment could potentially affect programmatic or mission requirements. We believe that adopting our recommendations will not adversely affect DOE’s ability to purchase equipment during an emergency, promote regional development, or achieve the best value. We also believe that mandatory performance standards for computer and computer-related equipment should be flexible enough to allow exceptions, but that those exceptions should be formally reviewed. DOE’s complete comments are presented in appendix III. 
We performed our work at DOE’s headquarters and Los Alamos from August 2000 through March 2001 in accordance with generally accepted government auditing standards. Additional information on the scope and methodology of our review is presented in appendix II. We are sending copies of this report to interested congressional committees and subcommittees and to the Director, Office of Management and Budget. We will also make copies available to others on request. To determine whether supplemental funding was being spent in the most economical fashion, we randomly selected 17 items of replacement equipment that had already been purchased for further review. Of the 17 selected items, 6 were different brands or models of desktop computers, 3 were different brands of laptop computers, 6 were different brands or models of printers, and 2 were different models of digital cameras. For each item, we requested a report from Los Alamos’ property management system regarding the item and the item’s purchase invoice. We used this information to determine the performance specifications, procurement source, and price paid for each item. We also, to the extent possible, examined each item and interviewed the employee to whom each item had been assigned. Through this process, we were able to determine the exact configuration of each item, including its peripherals and options. Further, we independently attempted to determine if each item could have been procured at a lower price using a supply source other than that used by the laboratory, such as GSA’s Federal Supply Schedule and private companies that offer their equipment for sale over the Internet. We also obtained from Los Alamos contracting officials information on the laboratory’s requirements regarding equipment purchases.
This information included a copy of the current DOE contract with the University of California, applicable DOE acquisition regulations, and laboratory policies and procedures pertaining to purchasing computer and computer-related equipment and using GSA for equipment purchases. In addition, we searched for DOE reports on the procurement of computer equipment by DOE contractors and found a 1997 Office of Inspector General audit on desktop acquisitions at the Idaho National Engineering and Environmental Laboratory. We found no other DOE reports on the acquisition of computers or computer-related equipment. Finally, we researched available literature for information on the advantages and disadvantages of Internet procurement. We performed our work from August 2000 to March 2001 in accordance with generally accepted government auditing standards.
In 1936, following the enactment of the Social Security Act of 1935, the newly-created Social Security Board (which later became SSA) created the 9-digit SSN to uniquely identify and determine Social Security benefit entitlement levels for U.S. workers. SSA uses a process known as “enumeration” to create and assign unique SSNs for every eligible person as part of their work and retirement benefit record. As of September 2016, SSA had issued approximately 496 million unique SSNs to eligible individuals. Originally, the SSN was not intended to serve as a personal identifier outside of SSA’s programs but, due to its universality and uniqueness, government agencies and private sector entities now use the SSN as a convenient means of identifying people. The SSN uniquely links an identity across a very broad array of public and private sector information systems. The expansion of government use of the SSN began with Executive Order 9397, issued by President Franklin D. Roosevelt in 1943. This required all federal agencies to use the SSN exclusively for identification systems of individuals. Since Executive Order 9397 was issued, additional federal statutes have authorized or mandated the collection or use of SSNs for a wide variety of specific government activities. Table 1 lists examples of such statutes. These and other laws and regulations have dramatically increased the extent to which the government collects and uses SSNs as a unique record identifier to determine an individual’s eligibility for government services and benefits. For example, CMS (a component of HHS) collects SSNs from approximately 57.7 million U.S. citizens or residents and displays them on Medicare enrollment cards. Other agencies collect SSNs for purposes such as federal employment (hiring, pay, and benefits), loans and other personal benefits, criminal law enforcement, statistical and other research purposes, and tax purposes. 
Figure 1 shows the extent to which the 24 federal agencies covered by the CFO Act reported collecting and using SSNs for different purposes, based on responses to our questionnaire. Requirements for protecting the privacy and security of SSNs in the federal government are derived from the provisions of laws that govern the collection and use of PII. Generally, these laws require agencies to notify the public of any such collection, collect only the information that is necessary to accomplish an agency’s purpose, and perform privacy impact assessments for systems that collect, use, and store PII. Among others, three key laws establish governmentwide privacy and security protections: the Privacy Act of 1974, the E-Government Act of 2002, and the Federal Information Security Modernization Act of 2014 (FISMA). The Privacy Act of 1974 requires that agencies maintain only those records containing PII that are “relevant and necessary” to accomplish agency purposes. The act describes a record as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or other personal identifier. The act defines a “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. Section 7 of the act requires that any federal, state, or local government agency, when requesting an SSN from an individual, provide that individual with three key pieces of information. Government entities must (1) tell individuals whether disclosing their SSNs is mandatory or voluntary; (2) cite the statutory or other authority under which the request is being made; and (3) state what uses the government will make of the individual’s SSN. OMB has issued detailed guidance on implementing the act. 
The E-Government Act of 2002 requires agencies to conduct privacy impact assessments before developing or procuring information technology that collects, maintains, or disseminates information that is in identifiable form (such as SSNs). According to OMB guidance, a privacy impact assessment is an analysis of how information is handled to (1) ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. FISMA sets requirements for safeguarding the confidentiality, integrity, and availability of information collected and used by federal agencies. It requires each agency to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support operations and assets of an agency, including those provided or managed by another agency, contractor, or another organization on behalf of an agency. FISMA requires agencies to submit an annual report to OMB, congressional committees, and GAO on the adequacy and effectiveness of their information security policies, procedures, and practices. OMB is responsible for developing guidelines, providing assistance, and overseeing agencies’ implementation of the three acts. For example, OMB has issued guidance on the specifics of what agencies should include in their annual FISMA reports. OMB has also issued guidance on other information security and privacy-related issues including federal agency website privacy policies, interagency sharing of personal information, designation of senior staff responsible for privacy, data breach response and notification, and safeguarding PII. 
In 2006, the President issued an Executive Order establishing the Identity Theft Task Force to strengthen efforts to protect against identity theft. The task force was directed to review the activities of executive branch departments, agencies, and instrumentalities relating to identity theft, and prepare and submit to the President a coordinated strategic plan to further improve the effectiveness and efficiency of the federal government’s activities in the areas of identity theft awareness, prevention, detection, and prosecution. Because the unauthorized use of SSNs was recognized as a key element of identity theft, the task force assessed actions the government could take to reduce the exposure of SSNs to potential compromise. It issued a series of reports beginning with interim recommendations in 2006 that called for OPM and OMB to take steps to survey the collection and use of SSNs and take steps to eliminate, restrict, or conceal their use. In April 2007, the task force issued a strategic plan, which advocated a unified federal approach, or standard, for using and displaying SSNs. The plan proposed that OPM and OMB play key roles in restricting the unnecessary use of SSNs, offering guidance on substitutes that are less valuable to identity thieves, and promoting consistency when the use of SSNs was found to be necessary or unavoidable. The task force’s 2007 plan recommended the following key actions to reduce the unnecessary use of SSNs within the federal government: Issue Guidance on Appropriate Use of SSNs. The task force recommended that OPM issue policy guidance to the federal human capital management community on the appropriate and inappropriate use of SSNs in employee records, including the appropriate way to restrict, conceal, or mask SSNs in employee records and human resource management information systems. Complete Review of Use of SSNs. 
Based on a survey of uses of SSNs in federal personnel forms and records that was conducted in 2006, the task force recommended that OPM take steps to eliminate, restrict, or conceal the use of SSNs, including by assigning alternate employee identification numbers where practicable. Require Agencies to Review Use of SSNs. Noting that OMB was in the process of surveying agencies on their use of SSNs, the task force recommended that OMB complete an analysis of the surveys to determine the circumstances under which such use could be eliminated, restricted, or concealed in agency business processes, systems, and paper and electronic forms. Establish a Clearinghouse for Agency Practices that Minimize Use of SSNs. The task force recommended that SSA develop a clearinghouse for agency practices and initiatives that minimize the use and display of SSNs to facilitate the sharing of best practices— including the development of any alternative strategies for identity management—to avoid duplication of effort, and to promote interagency collaboration in the development of more effective measures. An update to the plan was issued in September 2008, which offered updates on its previously issued recommendations. Data breaches—including the unauthorized use and disclosure of PII such as SSNs—pose a persistent threat to government operations and the personal privacy of affected individuals. Thousands of information security incidents involving PII occur every year. For example, in fiscal year 2016, federal agencies reported 8,233 data breaches involving PII to the U.S. Computer Emergency Readiness Team. The following are examples of attacks resulting in the loss or compromise of SSNs and other PII: In June 2015, OPM reported that an intrusion into its systems had compromised the personnel records of about 4.2 million current and former federal employees. 
Then, in July 2015, the agency reported that a separate but related incident had compromised background investigation files on 21.5 million individuals. Background investigation files contain a variety of PII, including SSNs, names, addresses, and references. In June 2015, the Commissioner of the IRS testified that unauthorized third parties had gained access to taxpayer information from its Get Transcript service. According to officials, criminals used taxpayer- specific data acquired from non-agency sources to gain unauthorized access to information on approximately 724,000 accounts. These data included SSNs, dates of birth, street addresses, and wage and withholding information. In July 2013, the Department of Energy reported that hackers had stolen a variety of PII on more than 104,000 individuals from an agency information system. Types of data stolen included SSNs, birth dates and locations, bank account numbers, and security questions and answers. In May 2012, the Federal Retirement Thrift Investment Board reported a sophisticated cyberattack on the computer of a contractor who provided services to the Thrift Savings Plan. As a result of the attack, PII associated with approximately 123,000 plan participants was accessed, including individuals’ names and SSNs. Since 2006, we have issued several reports and testimonies underscoring the widespread use of SSNs in the federal government and highlighting steps that can be taken to minimize their use and display. In March 2006, we testified that SSN use was widespread in both the public and private sectors. 
We stated that although laws were in place at both the state and federal levels to restrict the display of SSNs and protect individuals’ personal information, shortcomings remained, such as a lack of uniformity at all levels of government to assure the security of SSNs; gaps in the federal law and oversight in different industries that share SSNs with their contractors; and the exposure of SSNs in public records and government identification cards. In May 2006, we reported that few federal laws and no specific industry standards specified whether to display the first five or last four digits of an SSN. We recommended that Congress consider enacting standards for truncating SSNs or delegating authority to SSA or some other government entity to issue standards for truncating SSNs. In 2009, two bills were introduced that addressed standards for truncating SSNs. In June 2007, we reported that IRS and the Department of Justice were the only federal agencies that commonly provided records containing SSNs to state and local public record keepers and that both had taken steps to truncate or remove SSNs in those records. We also noted that both full and truncated SSNs in federally generated public records remained vulnerable to potential misuse, in part because different truncation methods used by the public and private sectors could enable the reconstruction of full SSNs. We recommended that the Commissioner of IRS implement a policy requiring the truncation of all SSNs in lien releases the agency generated and that the Attorney General implement a policy requiring, at a minimum, SSN truncation in all lien records generated by its judicial districts. The agencies implemented both recommendations. In September 2013, we reported that CMS had not taken needed steps to select and implement a technical solution for removing SSNs from Medicare cards.
We recommended that the agency initiate an IT project to identify, develop, and implement changes to CMS’s affected systems, including assessing proposed approaches for the removal of SSNs from Medicare beneficiaries’ cards. While CMS has initiated such a project, SSNs have not yet been removed from Medicare cards, as discussed later in this report. In response to the recommendations of the Identity Theft Task Force, OPM, OMB, and SSA undertook several actions aimed at reducing or eliminating the unnecessary collection, use, and display of SSNs. However, these actions have had limited success. OPM published a draft regulation to limit federal collection, use, and display of SSNs but withdrew the proposed rule because no alternate federal employee identifier was available that would provide the same utility as SSNs. OMB and SSA also took steps to facilitate reduction in federal SSN collection and use. OMB began requiring agency reporting on SSN reduction efforts as part of the annual FISMA reporting process. In addition, SSA developed an online clearinghouse of best practices; however, this clearinghouse is no longer available, and SSA has no records of when or why the site was discontinued. In April 2007, the Identity Theft Task Force recommended that OPM issue policy guidance to the federal human capital management community on the appropriate and inappropriate uses of SSNs in employee records, including the appropriate way to restrict, conceal, or mask SSNs in employee records and human resource management information systems. The task force also recommended that OPM identify steps to eliminate, restrict, or conceal the use of SSNs, including by developing and assigning alternate employee identification numbers where practicable. OPM took several actions in response to the task force recommendations. 
Using an inventory of its forms, procedures, and systems displaying SSNs that it had developed in 2006, the agency took action to change, eliminate, or mask the use of SSNs on OPM approved/authorized forms, which are used by agencies across the government for personnel records. In addition, in 2007, OPM issued guidance to other federal agencies on actions they should take to protect federal employee SSNs and combat identity theft. The guidance reminded agencies of existing federal regulations that restricted the collection and use of SSNs and also specified additional measures, such as eliminating the unnecessary display of SSNs on forms, reports, and computer display screens; ensuring that individuals with authorized access to SSNs understand their responsibilities for protecting them; and ensuring that electronic records containing SSNs are transmitted or transported in an encrypted or protected format. In addition to issuing this guidance, OPM explored options for establishing a new employee identifier to replace SSNs within the government for human resource and payroll systems. In January 2008, the agency proposed a new regulation regarding the collection, use, and display of SSNs that would have codified the practices outlined in its 2007 guidance and that also required the use of an alternate identifier. Specifically, the proposed rule would have required agencies to collect SSNs from an employee only once, at the time of the employee’s appointment to a federal position, for entry into human resources and payroll systems; not use the SSN as an employee’s primary identifier in internal or external data processing activities; ensure that SSNs are not printed or displayed on computer display screens; restrict access to SSNs to those individuals whose official duties require such access; and ensure that access to SSNs, including access involving data entry, printing, and screen displays, occurs in a protected location to guard against exposure.
However, in January 2010, after reviewing comments it had received, OPM withdrew the notice of proposed rulemaking because the agency determined that it would be impractical to issue the rule without an alternate governmentwide employee identifier in place. In withdrawing the proposed rule, OPM explained that the comments it had received cited numerous information systems and business practices, both internal and external to the government, which used the SSN as a primary identifier. Without a viable alternate identifier in place, OPM said it would be impractical to modify or stop using these systems. With the onset of the efforts to reduce the collection and use of SSNs, OPM asserted that a new unique employee identifier would be an important tool in combating the problem of identity theft in the federal government, and it focused on creating such an identifier. However, after its proposed rule was withdrawn in 2010, the agency stopped working on the project. Officials from OPM’s Office of the Chief Information Officer stated that no government-wide initiative to develop such an identifier has been undertaken since that time. Instead, in 2015 OPM briefly began exploring the concept of developing and using multiple alternate identifiers for different programs and agencies. As envisioned by OPM, the unique identifier for each program would be linked to an SSN, but the SSN and the link would be protected and not used by agency systems and personnel on an everyday basis. Ideally, an SSN would be collected only once, at the start of an employee’s service, after which unique identifiers specific to relevant programs, such as healthcare benefits or training, would be assigned as needed. However, work on the initiative was suspended in 2016 due to the lack of funding. OMB staff subsequently stated that, while they endorse the concept of developing and using alternate identifiers, they had not had a chance to review OPM’s specific proposal. 
The Identity Theft Task Force recommended that OMB require agencies to review their use of SSNs to determine the circumstances under which such use could be eliminated, restricted, or concealed in agency business processes, systems, and paper and electronic forms. In its April 2007 plan, the task force noted that OMB was in the process of surveying agencies on their use of SSNs and should complete its review sometime in 2007. In May 2007, OMB issued a memorandum officially requiring agencies to review their use of SSNs in agency systems and programs to identify instances in which the collection or use of SSNs was superfluous. Agencies were also required to establish a plan, within 120 days from the date of the memorandum, to eliminate the unnecessary collection and use of SSNs within 18 months. Lastly, the memorandum required agencies to participate in governmentwide efforts, such as surveys and data calls, to explore alternatives to SSN use as a personal identifier for both federal employees and in federal programs. In 2016, OMB issued a revision to its Circular A-130 that reiterated its direction to agencies to take steps to eliminate unnecessary collection, maintenance, and use of SSNs and explore alternatives to the use of SSNs as a personal identifier. Since issuing its May 2007 memorandum requiring the development of SSN reduction plans, OMB annually has instructed agencies to submit updates to their plans and documentation of their progress in eliminating unnecessary use of SSNs as part of their annual FISMA reports. 
In 2016, questions were added to the FISMA reporting instructions, directing agencies to report (1) whether they had a written inventory of their collection and use of SSNs; (2) whether they had developed and implemented a written policy or procedure to ensure that any new collection or use of SSNs was necessary or whether any ongoing collection remained necessary; and (3) whether they had developed and implemented a written policy or procedure to ensure that any collection or use of SSNs associated with agency websites, online forms, mobile applications, and other digital services was necessary and complied with applicable privacy and security requirements.

The Identity Theft Task Force recommended that, based on the results of OMB's review of agency practices on the use of SSNs, SSA should establish a clearinghouse of agency practices and initiatives that minimize the use and display of SSNs. The purpose of the clearinghouse was to facilitate the sharing of "best" practices—including the development of any alternative strategies for identity management—to avoid duplication of effort, and to promote interagency collaboration in the development of more effective measures for minimizing the use and display of SSNs. In 2007, SSA formed the Social Security Number Collaborative as a forum for interagency meetings to jointly review and share best practices for minimizing the use of SSNs, explore possible alternatives to their use, and establish a medium for ongoing sharing of best practices and continuous improvement. The Collaborative included representatives from 36 agencies and met regularly in 2007. The same year, SSA established a clearinghouse on an electronic bulletin board website to share materials regarding agency efforts to minimize the use and display of SSNs. The clearinghouse showcased best practices and provided agency contacts for specific programs and initiatives.
According to officials in the Office of the Deputy Commissioner, the Collaborative has not met since 2007 and the clearinghouse is no longer active. The officials added that SSA did not maintain any record of the extent to which the clearinghouse was accessed or used by other agencies when it was available online. Further, the officials said SSA has no records of when or why the site was discontinued.

In their responses to our questionnaire on SSN reduction efforts, the 24 CFO Act agencies reported successfully curtailing the collection, use, and display of SSNs, thereby reducing individuals' exposure to the risk of identity theft. Nevertheless, all of these agencies continue to rely on SSNs for important government programs and systems, and they have cited challenges to further reduction of SSN collection, use, and display. Moreover, poor planning by many of the 24 agencies and ineffective oversight by OMB have limited SSN reduction efforts. Most of the agencies' reduction plans lacked key elements, limiting their usefulness, and not all agencies maintained an up-to-date inventory of their SSN collections. Also, definitions of "unnecessary" collection and use have been inconsistent across the 24 agencies. Further, OMB's monitoring of agency progress has been ineffective in that it has not ensured that agencies have provided up-to-date status information about their reduction efforts or established performance metrics to assess agency progress. Without a more rigorous monitoring process, it will remain difficult for OMB to determine whether agencies have eliminated all unnecessary collection, use, and display of SSNs and thus whether they have taken all reasonable steps to reduce the risk that individuals could become victims of identity theft due to their SSNs being exposed.

Based on responses to our questionnaire, all of the 24 CFO Act agencies reported having taken steps to reduce the unnecessary collection, use, and display of SSNs.
Examples of activities agencies undertook include developing and using alternate identifiers, removing SSNs from printed forms and other physical displays, and filtering e-mail to prevent unencrypted transmittal of SSNs. Agencies also generally reported that they have processes in place to review ongoing collection, use, and display of SSNs.

Developing and Using Alternate Identifiers

Officials from four agencies reported that they had transitioned, or were transitioning, to the use and display of alternate identifiers or the use of alternate identification procedures for specific programs and activities. In these cases, the use of alternate identifiers or identification procedures has eliminated the need to display SSNs on identification cards or use them for identification purposes. Specifically:

In 2012, DOD issued a department-wide policy to reduce or eliminate the use of SSNs wherever possible. In a number of cases, the department was able to replace SSN use by substituting its 10-digit identification number, a number that is randomly generated for every person by the department's personnel system. For example, DOD reported that its identification cards, which as of March 2017 were being used by 11 million individuals, now display the DOD identification number rather than an SSN. In addition, based on departmental policy, in November 2015, the Department of the Army began replacing SSNs on soldiers' dog tags with DOD identification numbers. The Army reported that several information systems had to be modified to use the identification number instead of the SSN.

In 2013, the Veterans Health Administration (VHA) within VA removed SSNs from veteran health identification cards, which VHA issues to veterans when they enroll in health care. VHA developed its own integration control number (ICN) as a unique identifier in 1998 and began using it on veteran health identification cards in 2004.
Nevertheless, those cards continued to include an individual's SSN on the barcode and magnetic stripe. Beginning in 2013, VHA issued redesigned cards that display the DOD identification number rather than the ICN. The ICN is still included on the card's barcode and magnetic stripe and now serves as the primary patient identifier; however, SSNs are no longer included on the cards in any form. VA's two other major components (the National Cemetery Administration and the Veterans Benefits Administration) also currently use the ICN. The department is in the process of transitioning the remainder of the agency to the ICN, as well.

CMS (a component of HHS) recently began taking steps to remove SSNs from Medicare cards. We reported in 2012 that Medicare cards displayed an SSN as part of the health insurance claim number that appeared on the card. While CMS had identified various options for removing the SSN from Medicare cards, the agency had not committed to a plan for such removal. However, the Medicare Access and CHIP Reauthorization Act of 2015 subsequently required CMS to remove SSNs from all Medicare cards and distribute replacement cards with a new Medicare beneficiary identifier by April 2019. CMS officials stated that the agency plans to begin removing SSNs from Medicare cards and replacing them with the new identifier starting in April 2018.

In 2015, the Department of Education's Federal Student Aid office changed login procedures for students, parents, and borrowers by introducing a federal student aid username and password to be used in place of previous login procedures that relied on a personal identification number associated with the user's name, SSN, and date of birth. Education officials from the Office of the Chief Privacy Officer reported that, since being introduced, the usernames and passwords have been used over 300 million times to log in to office systems, greatly reducing the exposure of SSNs and other PII.
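The randomly generated identifiers described above, such as DOD's 10-digit identification number, can be sketched minimally as follows. The function and collision handling here are illustrative assumptions, not DOD's actual implementation:

```python
import secrets

def new_random_id(existing_ids):
    """Issue a random 10-digit identifier (in the spirit of a randomly
    generated personnel identification number); purely illustrative."""
    while True:
        candidate = f"{secrets.randbelow(10**10):010d}"
        if candidate not in existing_ids:   # re-draw on the rare collision
            existing_ids.add(candidate)
            return candidate

issued = set()
a = new_random_id(issued)
b = new_random_id(issued)
assert a != b and len(a) == 10 and a.isdigit()
```

Unlike an SSN, such a number carries no external meaning, so displaying it on a card or dog tag does not by itself expose the holder to identity theft.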
Removing SSNs from Printed Forms and Other Physical Displays

Even when SSNs continue to be used as identifiers within internal information systems, the 24 CFO Act agencies reported taking steps to mask, truncate, or block the display of these numbers on paper forms, correspondence, and computer screens. For example:

In 2001, SSA removed the full SSN from the Social Security statement and the Social Security cost-of-living-adjustment notice and replaced it with a beneficiary notice code. These two documents represented approximately one-third of all SSA notices sent each year, with approximately 150 million Social Security statements and approximately 58 million cost-of-living-adjustment notices going out each year, according to SSA. However, SSA still displays SSNs on much of its correspondence. According to the SSA Office of the Inspector General, about 66 percent of the 352 million notices sent to individuals in 2015 included the individuals' full SSNs. SSA officials from the Office of the Deputy Commissioner for Budget, Finance, Quality, and Management stated that they had plans to further reduce SSNs on notices and would implement them as resources permit.

IRS (a component of the Department of the Treasury) implemented a system to replace or mask the SSN displayed on many notices and letters sent to taxpayers. IRS officials in the Office of Privacy, Governmental Liaison and Disclosure stated that, as of 2017, they had been able to update many notices and letters to either display a barcode or mask the SSN by displaying only the last four digits of the number. According to the officials, these updates affected 50 million notices in fiscal year 2015 and 47 million in fiscal year 2016.

In 2007, the VA Consolidated Mail Outpatient Pharmacy eliminated the use of SSNs on prescription bottles and mailing labels. VHA officials stated that VHA uses the truncated SSN on many of its forms, printouts, and surgical materials.
In addition, according to the officials, VHA discontinued printing the full SSN on health records that are disclosed through the Release of Information process and removed or truncated the SSN from patient appointment reminders in 2013.

Filtering E-mail to Prevent Unencrypted Transmittal of SSNs

Officials from two agencies reported taking additional steps to reduce the potential for SSNs to be compromised by screening e-mail traffic for the numbers and blocking the numbers' transmittal. Specifically, the Bureau of Economic Analysis in the Department of Commerce implemented a filter on its e-mail system to block both incoming and outgoing e-mails containing SSNs. In addition, the Department of Justice upgraded its data loss prevention capabilities to automatically block e-mail traffic to external, nongovernment users when an SSN is detected either in the body of an e-mail or in an e-mail attachment.

Reviewing Ongoing Collection, Use, and Display of SSNs

Officials from the 24 CFO Act agencies generally stated that they use their already existing information security and privacy management processes and procedures to review ongoing collection, use, and display of SSNs and to ensure that SSNs are protected when stored in agency information systems. Specifically, agencies typically reported using existing processes for developing and approving privacy impact assessments to determine whether new collection, use, or display of SSNs is necessary to achieve an agency mission. For example, CMS, IRS, Department of Transportation, USDA, and VA officials all stated that they use the privacy impact assessment or privacy risk analysis process to confirm that planned collections of SSNs are appropriate and authorized and to assess plans to mitigate the risks of such uses when they are unavoidable.
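Both the display masking and the e-mail filtering described in the preceding sections hinge on the same core operation: recognizing an SSN pattern in text. A hedged sketch follows; the pattern and blocking policy are illustrative assumptions, not any agency's actual data loss prevention configuration (commercial DLP products validate far more than the digit pattern):

```python
import re

# A 9-digit SSN written with or without hyphens (illustrative pattern only)
SSN_RE = re.compile(r"\b(\d{3})-?(\d{2})-?(\d{4})\b")

def mask_ssn(text):
    """Show only the last four digits, as on redacted notices."""
    return SSN_RE.sub(lambda m: "***-**-" + m.group(3), text)

def should_block(body, attachments=(), recipient_domain="example.com"):
    """Block outbound mail to nongovernment domains when an SSN
    appears in the body or in any attachment's extracted text."""
    if recipient_domain.endswith(".gov"):
        return False    # internal government traffic passes through
    return any(SSN_RE.search(t) for t in (body, *attachments))

assert mask_ssn("SSN: 123-45-6789") == "SSN: ***-**-6789"
assert should_block("My SSN is 123-45-6789")
```

The same recognizer can thus serve two controls at once: redacting SSNs wherever they would be displayed, and refusing to transmit them unencrypted outside the government.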
Officials from two of the agencies also reported setting restrictions on access to SSNs, on the ability of staff to download and store personal information covered by the Privacy Act (including SSNs), and on the transmission or electronic transfer of such data. For example, CMS officials stated that departmental policy requires encryption of all sensitive data, including SSNs, that are transmitted outside of the hhs.gov domain. VA likewise requires that full SSNs not be transmitted or stored in electronic form unless the data are encrypted. Departmental policy also requires VA components to assign access to data containing SSNs based on need-to-know and least-privilege principles and to use only VA-approved portable electronic storage media to maintain and access records that contain SSNs.

Officials from the 24 agencies stated that SSNs cannot be completely eliminated from federal IT systems and records. In some cases, no other identifier offers the same degree of universal awareness or applicability. For example, VHA officials stated that they need to collect SSNs from patients when they receive treatment because health standards require unique identifying information to be verbally provided by patients for verification purposes. According to VHA officials from the Information Access and Privacy Office, the SSN is one of the few unique identifiers that a patient can be expected to have memorized. Thus, eliminating its use is not feasible. SSA officials noted that the Social Security program, as authorized by law, uses the SSN as its primary identifier, and, thus, much of its use within that agency cannot be reduced. Even when reductions are possible, challenges in implementing them can be significant. All of the agencies we reviewed reported experiencing such challenges.
Three key challenges were frequently cited: (1) statutes and regulations that mandate the collection of SSNs, (2) requirements for using SSNs in interactions with other federal and external entities, and (3) technological impediments to implementing changes in agency systems and processes. Of the 24 agencies we reviewed, 15 reported to us that they had experienced challenges as a result of statutes and regulations, 16 as a result of required interactions with other federal and external entities, and 14 as a result of technological limitations, as follows:

Statutes and regulations require collection and use of SSNs. In their questionnaire responses and follow-up correspondence with us, officials from 15 agencies who were involved in their agencies' SSN reduction efforts noted that they are limited in their ability to reduce the collection of SSNs because many laws authorize or require such collection. Examples of such laws are listed in table 1, and the officials cited other laws as well. These laws often explicitly require agencies to use SSNs to identify individuals who are engaged in transactions with the government or who are receiving benefits disbursed by federal agencies. For example, Commerce officials said they are required by the Debt Collection Act of 1996 to collect SSNs for all financial transactions, such as permit applications. Similarly, Department of the Interior officials stated that several statutes require the collection of SSNs for employment, payroll, tax reporting, benefits, and other processes, including the Immigration Reform and Control Act of 1986, the Consolidated Appropriations Act, 2008, and others.

Interactions with other federal and external entities require use of the SSN.
In order for federal agencies to exchange information about individuals with other entities, both within and outside the federal government, they must be able to cite a unique, common identifier to ensure that they are matching their information to the correct records in the other entities' systems. The SSN is typically the only identifier that government agencies and external partners have in common that they can use to match their records. USDA's National Finance Center, for example, uses SSNs to identify employees in its payroll processing systems, and, thus, agencies that use the National Finance Center must include SSNs in their payroll records. Further, other agencies rely on SSNs as unique identifiers when performing other common cross-agency functions, such as processing payments to or from external entities, conducting background investigations, and determining whether an individual has benefit coverage through another agency. For example, an official from the Department of Education stated that the Federal Student Aid program is required to use a loan applicant's SSN for several key verification functions before being able to process the loan: with SSA to confirm that the SSN provided is legitimate and that the applicant has registered for the draft; with IRS to ensure the applicant is in good tax standing; with HHS to verify the applicant is not delinquent with child support; and with the Department of Homeland Security to verify the applicant is not on the terrorist watch-list.

Technological hurdles can slow replacement of SSNs in information systems. In their questionnaire responses and follow-up correspondence with us, officials from 14 agencies who were involved in their agency SSN reduction efforts cited the complexity of making required technological changes to their information systems as a challenge to reducing the collection, use, and display of SSNs.
For example, VA officials noted that key software applications and electronic health record formats used in their legacy information systems were developed over 30 years ago and would require extensive system changes and software updates because SSNs are the only identifier used by those systems. Likewise, Department of the Treasury officials stated that a majority of their systems had technological limitations that kept them from masking the display of SSNs. According to these officials, they send out "hundreds" of standard notices to individuals but have been able to mask the SSN on only 110 non-payment notices, four payment notices, and 24 automated collection system notices, due to technological limitations. In addition, although the IRS has been able to mask SSNs on notices that contain barcodes, its current payment processing system is unable to read such barcodes. As a result, the full SSN remains on display on the majority of IRS payment processing notices.

SSN reduction efforts in the federal government have also been limited by more readily addressable shortcomings. Lacking direction from OMB, many agencies' reduction plans did not include key elements, such as timeframes and performance indicators, calling into question the plans' utility. In addition, OMB has not required agencies to maintain up-to-date inventories of SSN collections and has not established criteria for determining when SSN use or display is "unnecessary," leading to inconsistent definitions across the agencies. Finally, OMB has not ensured that agencies have all submitted up-to-date status reports and has not established performance measures to monitor agency efforts.

Agency SSN Reduction Plans Lacked Key Elements, Limiting Their Usefulness

As previously mentioned, in May 2007, OMB issued a memorandum requiring agencies to develop plans to eliminate the unnecessary collection and use of SSNs, an objective that was to be accomplished within 18 months.
OMB did not set requirements for agencies on creating effective plans to eliminate the unnecessary collection and use of SSNs. However, other federal laws and guidance have established key elements that performance plans generally should contain. For example, GPRAMA established criteria for effective performance plans, including specific measures to assess performance. Our prior work on developing performance plans identifies additional elements of effective plans, as does OMB's guidance on budget preparation. Several key elements of an effective performance plan were consistently referenced across these sources:

Performance goals and indicators: Plans should include tangible and measurable goals against which actual achievement can be compared. Performance indicators should be defined to measure outcomes achieved versus goals.

Measurable activities: Plans should define discrete events, major deliverables, or phases of work that are to be completed toward the plan's goals.

Timelines for completion: Plans should include a timeline for each goal to be completed that can be used to gauge program performance.

Roles and responsibilities: Plans should include a description of the roles and responsibilities of agency officials responsible for the achievement of each performance goal.

The majority of plans originally submitted to OMB by the 24 CFO Act agencies lacked key elements of effective performance plans. For example, only two agencies (the Departments of Commerce and Education) developed a plan that addressed all four key elements. Three agencies' plans did not fully address any of the key elements, nine plans addressed between one and two of the elements, and the remaining 10 plans addressed three of the elements. Table 2 shows the key elements addressed in each agency's plan.
Across the 24 agencies, the most frequently met criterion was establishing measurable SSN reduction activities, and the least frequently met was the development of overall performance goals. For example:

Performance goals and indicators: Three agencies established performance goals and indicators to measure progress in their SSN reduction plans. For example, the Department of Education established a goal of eliminating unnecessary SSN use by 5 percent by the second quarter of fiscal year 2010.

Measurable activities: Twenty agencies established specific measurable activities in their SSN reduction plans. For example, HHS's activities included categorizing SSN collections as mandatory or discretionary, developing guidance for review of SSN use, scheduling all 2009-2010 information collections for SSN review, and reviewing expiring 2008 information collections for SSN use. Similarly, the Department of Commerce's planned activities included eliminating the use of SSNs within four economic surveys at the Census Bureau, the EZ Tracker Training System, and in building access systems at four major facilities.

Timelines for completion: Sixteen agencies provided a timeline for completion. For example, the Department of the Interior set completion dates for its major SSN reduction activities, such as establishing an information reduction team by October 12, 2007, and requiring the team to complete all tasks, including updating system of record notices and enacting component reduction activities, by December 31, 2007.

Roles and responsibilities: Fourteen agencies identified roles and responsibilities for reducing agency SSN collection, use, and display. For example, USDA assigned responsibility for departmental compliance with the requirements of OMB M-07-16 to its Chief Information Officer. In addition, the Department of Housing and Urban Development assigned responsibility for tracking progress of SSN reduction activities to its Privacy Act Officer.
Agency officials stated that because OMB did not set a specific requirement that SSN reduction plans contain clearly defined performance goals and indicators, measurable activities, timelines for completion, or roles and responsibilities, they were not aware that they should address these elements. Yet, without complete performance plans, it is difficult to determine what overall progress agencies have achieved in reducing the unnecessary collection and use of SSNs and the concomitant risk of exposure to identity theft. Continued progress toward reducing that risk is likely to remain difficult to measure until agencies develop and implement effective plans.

Not All Agencies Maintain an Up-to-Date Inventory of Their SSN Collections

Developing a baseline inventory of systems that collect, use, and display SSNs and ensuring that it is periodically updated can assist managers in maintaining an awareness of the extent to which they collect and use SSNs and their progress in eliminating unnecessary collection and use. GAO's Standards for Internal Control in the Federal Government states that a baseline should be established to monitor progress toward an objective. An accurate inventory provides a detailed description of an agency's current state and helps to clarify what additional work remains to be done to reach the agency's goal.

Of the 24 CFO Act agencies, 22 reported having compiled an inventory of systems and programs that collected SSNs at the time they developed their original SSN reduction plans in fiscal years 2007 and 2008. Of the two agencies that reported not developing an initial inventory, one (U.S. Agency for International Development) reported that it did not have a comprehensive inventory of systems containing SSNs because it has no visibility over unofficial programs and systems, especially those created by overseas missions to address site-specific programmatic and administrative requirements.
The agency stated that it was undertaking an effort to create such an inventory and that, as part of that process, it intended to identify systems that collect and maintain SSNs. The agency anticipated completing its inventory by the end of fiscal year 2017. The other agency without an SSN inventory (Small Business Administration) likewise stated that it was in the process of creating such an inventory, but it did not provide details on when this effort began or when it was expected to be completed.

Of the 22 agencies that reported having developed an initial inventory, 18 stated that they had inventories that were up-to-date and complete. However, the inventories of these agencies did not always identify which systems contained SSNs. For example, DOD and SSA officials stated that they maintain an inventory of systems containing PII but do not always track which systems in the inventory contain SSNs. Beyond simply determining which systems contain SSNs, identifying the approximate number of individual records containing SSNs would also be a useful measure for agencies to understand the extent to which any given system contains SSNs. However, agencies have not always captured this information. Education officials, for example, noted that they did not have figures for how many records within each of their student loan systems contained SSNs. DOD, Interior, and State all have many systems containing PII but no estimate of the number of records that include SSNs within each of these systems.

The remaining 4 of the 22 agencies that reported having developed an initial inventory stated that they did not have up-to-date inventories of systems containing SSNs. Two of them (Energy and VA) reported having efforts underway to correct or update their inventories.
Officials from the Department of Housing and Urban Development and the National Science Foundation stated they faced technical difficulties identifying systems, including contractor-operated systems, that contain SSNs. Part of the reason agencies do not have up-to-date inventories is that OMB M-07-16 did not require agencies to develop an inventory or to update the inventory periodically to measure reduction of SSN collection and use. Nevertheless, OMB has recognized the value of maintaining an accurate inventory and, as part of the fiscal year 2016 FISMA submission, asked agencies to state whether they maintain a written inventory of the collection and use of SSNs. OMB staff stressed that, despite these instructions, they were not requiring agencies to maintain inventories of systems that contain SSNs. However, OMB guidance does require agencies to maintain an inventory of systems that "create, collect, use, process, store, maintain, disseminate, disclose, or dispose of PII." The OMB guidance states that agencies are required to maintain that inventory in part to allow the agency to reduce its PII to the minimum necessary. Without modifying these PII inventories to indicate which systems contain SSNs and using them to monitor their SSN reduction efforts, agencies will likely find it difficult to measure their progress in eliminating the unnecessary collection and use of SSNs.

Agency Definitions of "Unnecessary" Collection and Use Have Been Inconsistent

It can be difficult to achieve consistent results from any management initiative when the objectives are not clearly defined. GAO's Standards for Internal Control in the Federal Government states that management should define objectives in measurable terms so that performance toward achieving those objectives can be assessed. Further, measurable objectives should generally be free of bias and not require subjective judgments to dominate their measurement.
However, OMB M-07-16 did not provide clear criteria for determining what would be an unnecessary collection or use of SSNs, leaving agencies to develop their own interpretations. Of the 24 CFO Act agencies, 4 reported that they had no definition of "unnecessary collection and use." Of the other 20 agencies, 7 reported that their definitions were not documented. Officials from the 7 agencies with undocumented definitions stated that the process of reviewing and identifying unnecessary uses of SSNs was informal and relied on subjective judgments. For example, agency officials, including the Privacy Officer from the General Services Administration, the Chief Information and Privacy Officer from the National Science Foundation, and the Chief Privacy Officer from the Small Business Administration, stated that the determination of whether a specific collection or proposed use was necessary was the decision of agency officials involved in various system reviews, including privacy impact assessment review processes and system authority-to-operate approvals.

In contrast, officials from the Office of the Chief Privacy Officer of the Department of Education stated that, while they had no written definition of "unnecessary collection and use," their departmental policy was that SSNs could not be collected or used unless authorized by law, regulation, or executive order, and/or necessary for a documented agency purpose. Further, their policy required documentation indicating that no reasonable alternative existed.

Given the varying approaches that agencies have taken to determining whether proposed or actual collections and uses of SSNs are necessary, it is doubtful whether the goal of eliminating unnecessary collection and use of SSNs is being implemented consistently across the federal government. OMB has not subsequently provided criteria for determining "unnecessary collection and use" of SSNs.
OMB staff in the Office of Information and Regulatory Affairs stated that they had not developed a precise definition of "unnecessary collection and use" because the circumstances of collection and use of SSNs varied across agencies. However, developing guidance for agencies in the form of criteria for making decisions about what types of collections and uses of SSNs are unnecessary need not be narrowly prescriptive. Until such criteria are established, agency efforts to reduce the unnecessary use of SSNs will likely continue to vary, and, as a result, the risk of unnecessarily exposing SSNs to identity theft may not be mitigated as thoroughly as it could be.

Agencies Have Not Always Submitted Up-to-Date Status Reports to OMB, and OMB Has Not Set Performance Measures to Monitor Agency Efforts

GAO's Standards for Internal Control in the Federal Government calls for management to conduct activities to monitor and evaluate performance. These activities can occur at a specific time or for a specific function or process, and their scope and frequency depend primarily on the assessment of risks. Monitoring is essential to help keep initiatives aligned with changing objectives, environment, laws, resources, and risks. It also assesses the quality of performance over time and allows corrective actions to be identified, if necessary, to achieve the original objectives.

OMB initially recognized that agency SSN reduction plans needed to be monitored. After reviewing the reduction plans that agencies submitted for fiscal year 2008, OMB reported that the plans displayed varying levels of detail and comprehensiveness and stated that agency reduction efforts would require ongoing oversight. Subsequently, it required agencies to report on their progress annually through their annual FISMA reports. However, OMB did not establish specific performance measures to monitor implementation of agency reduction efforts.
OMB’s guidance directed agencies to submit their most current documentation related to their implementation plans and to report on the progress they had made in eliminating unnecessary uses of SSNs. However, the guidance did not ask for progress against performance measures or targets identified in agency plans. Annual updates submitted by the 24 agencies from fiscal year 2013 through 2015 did not always include up-to-date information about agency efforts and the results achieved, making it difficult to determine whether progress had been made. For example, in each of its reports over this period, the Department of State indicated that it had a review of over 100 systems underway, with little description of whether any progress had been made. Similarly, the Department of Transportation stated in each of its reports that privacy officials continued to work with departmental components to justify and, as appropriate, reduce holdings of PII across systems and business processes. However, none of the reports indicated whether these efforts had been completed or what the results were. The Small Business Administration’s updates for all three years consisted of the same document, dated August 2013. OMB staff from the Office of Information and Regulatory Affairs agreed that some agencies had provided the same information year after year in their annual updates but argued that doing so was acceptable if all reduction efforts had been completed. However, this was not the case for any of the three agencies; all indicated that reduction efforts were still underway. Further, other than its initial review in 2008, OMB has only recently begun monitoring agency efforts to reduce SSN use. Specifically, staff from the Office of Information and Regulatory Affairs reported that they performed a review in 2015 and determined that agency efforts had been largely successful.
While they did not set specific criteria for measuring performance, they noted that the agencies with the most robust and mature SSN reduction efforts had developed inventories of their SSN collections, defined unnecessary use, and established processes to continue assessing over time whether SSN collections remained necessary. However, the OMB staff were unable to provide any documentation of their review. In fiscal year 2016, OMB began asking agencies additional questions about their reduction of SSNs. For example, questions were added to the FISMA reporting metrics that require each agency to indicate whether it has (1) compiled a written inventory of the agency’s collection and use of SSNs, (2) developed and implemented written policies or procedures to ensure that any new collection or use of SSNs is necessary and remains necessary over time, and (3) determined that any existing collection or use of SSNs associated with agency websites, online forms, mobile applications, and other digital services is necessary and complies with privacy and security requirements. OMB staff in the Office of Information and Regulatory Affairs stated that they expect the answers to these questions to help inform future reviews of agency programs and to help define metrics for use in future years. Thus, although OMB has taken steps to gather additional information related to agency SSN reduction programs, its monitoring process is still not based on performance measures that could be used to ensure consistent and effective implementation of agency reduction efforts. Without a more rigorous process, it will remain difficult for OMB to determine whether agencies have achieved their goals in eliminating the unnecessary collection and use of SSNs or whether additional actions could be taken to minimize the risk of unnecessarily exposing SSNs to identity theft.
Beginning in 2007, following the report of the Identity Theft Task Force, OPM, SSA, and OMB took steps to promote elimination of the unnecessary collection, use, and display of SSNs. However, those efforts had limited success. OPM’s effort to define an alternate identifier ended when it withdrew its proposed rulemaking on the use of SSNs, and SSA’s clearinghouse of key SSN reduction practices is no longer available online. Only OMB’s annual reporting requirement is still ongoing. The 24 agencies we reviewed have responded by taking a number of actions to reduce the use and display of SSNs, either by substituting alternate identifiers or limiting the display of the SSN on forms and/or computer screens. The initiatives agencies have taken show that it is possible to identify and eliminate the unnecessary use and display of SSNs. However, it is difficult to determine what overall progress has been made in achieving this goal across the government. Lacking OMB direction to do so, not all agencies have developed effective SSN reduction plans. In addition, OMB has not required agencies to maintain up-to-date inventories of their collection and use of SSNs and has not established criteria for determining when the collection, use, or display of SSNs is “unnecessary,” leading to inconsistent definitions across the agencies. Finally, OMB has not ensured that agencies have all submitted up-to-date status reports and has not established performance measures to monitor agency efforts. Until OMB adopts more effective practices for guiding agency SSN reduction processes, overall governmentwide reduction efforts will likely remain limited and difficult to measure, and the risk of SSNs being exposed and used to commit identity theft will remain greater than it need be. 
To improve the consistency and effectiveness of governmentwide efforts to reduce the unnecessary use of SSNs and thereby mitigate the risk of identity theft, we are recommending that the Director of OMB take the following five actions:

- specify elements that agency plans for reducing the unnecessary collection, use, and display of SSNs should contain, and require all agencies to develop and maintain complete plans;
- require agencies to modify their inventories of systems containing PII to indicate which systems contain SSNs, and use the inventories to monitor their reduction of unnecessary collection and use of SSNs;
- provide criteria to agencies on how to determine unnecessary use of SSNs to facilitate consistent application across the federal government;
- take steps to ensure that agencies provide up-to-date status reports on their progress in eliminating unnecessary SSN collection, use, and display in their annual FISMA reports; and
- establish performance measures to monitor agency progress in consistently and effectively implementing planned reduction efforts.

We provided draft copies of this report to OMB and the 24 CFO Act agencies included in our review. OMB did not provide comments on the draft report or our recommendations. We received written comments from one agency, SSA, which are reprinted in appendix III. In its comments, the agency stated that it has taken steps, where possible, to discontinue the use of the SSN in its two largest annual notice workloads and in many internal administrative processes. SSA added that it remains committed to removing the SSN from its remaining notices. In addition, SSA, along with eight other agencies, provided technical comments or information on their current SSN reduction policies, which have been incorporated into the final report as appropriate.
These agencies are the Departments of Commerce, Education, Health and Human Services, Homeland Security, the Interior, Justice, and Veterans Affairs, and the General Services Administration. For example, a Program Analyst in the General Services Administration’s Audit Management Division stated that each system containing PII requires a full privacy impact assessment that is completed by the system owner or program manager in coordination with the Privacy Office. The official also stated that new and current system owners are encouraged not to collect SSNs or other PII unless there is a good business case for doing so. Further, the Department of the Interior’s Audit Liaison stated that the department is revising its SSN reduction policy to address the findings and recommendations to OMB outlined in our report. The official stated that the department will work closely with its bureaus and offices to implement the updated SSN reduction policy, maintain current SSN inventories, and establish procedures and a standard reporting template to identify and eliminate the unnecessary collection and use of SSNs. Lastly, 15 agencies indicated via e-mail that they had no comments on the report. These agencies were the Departments of Agriculture, Defense, Energy, Housing and Urban Development, Labor, State, Transportation, and the Treasury, and the Agency for International Development, Environmental Protection Agency, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Office of Personnel Management, and Small Business Administration. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until two days from the report date.
At that time, we will send copies to the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs; Agency for International Development; Environmental Protection Agency; General Services Administration; National Aeronautics and Space Administration; National Science Foundation; Nuclear Regulatory Commission; Office of Management and Budget; Office of Personnel Management; Small Business Administration; and Social Security Administration. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Our objectives were to determine (1) what governmentwide initiatives have been undertaken to assist agencies in eliminating their unnecessary use of SSNs and (2) the extent to which agencies have developed and executed plans to eliminate the unnecessary use and display of SSNs and have identified challenges associated with those efforts. To determine what governmentwide initiatives have been undertaken to assist agencies in eliminating their unnecessary use of SSNs, we examined key governmentwide guidance documents, including reports issued by the Identity Theft Task Force, and identified roles and responsibilities assigned to the Office of Management and Budget (OMB), the Office of Personnel Management (OPM), and the Social Security Administration (SSA). We also reviewed federal laws, including the Privacy Act, the E-Government Act, and the Federal Information Security Modernization Act of 2014, to clarify roles and responsibilities.
To identify the results of governmentwide efforts, we analyzed reports and guidance on protecting SSNs issued by OMB, OPM, and SSA, and interviewed agency officials knowledgeable about the reduction efforts regarding their activities. To determine the extent to which agencies developed and executed plans to eliminate the unnecessary use and display of SSNs, we analyzed documents from the 24 agencies covered by the Chief Financial Officers (CFO) Act that described the progress of efforts in this area. For example, we reviewed agency implementation plans and updates submitted as part of their Federal Information Security Modernization Act reports for fiscal years 2007 and 2008 (the first two years that such reports addressed SSN reduction efforts) as well as for 2013, 2014, and 2015 (the three most recent reports available at the time of our review). We compared agency plans with key elements of effective performance plans, as defined in federal guidance and the Government Performance and Results Act Modernization Act of 2010. To identify challenges that agencies experienced in implementing these efforts, we interviewed relevant officials at each of the 24 agencies. We obtained and analyzed additional information about SSN reduction policies and activities from a selection of the 24 agencies included in this review. To select these agencies, we identified major agencies in the military, international, or security/national security area as well as agencies that deliver benefits to the general public. Within these groups, we selected the two agencies with the largest number of systems and programs that use SSNs. We also selected the Internal Revenue Service (IRS) because it collects a large number of taxpayer SSNs and OPM because it collects SSNs from all federal workers.
This resulted in the selection of six of the 24 agencies or components thereof: the Centers for Medicare & Medicaid Services (CMS), a component of the Department of Health and Human Services (HHS); the United States Department of Agriculture (USDA); the Army, a component of the Department of Defense (DOD); the Department of Veterans Affairs (VA); the IRS, a component of the Department of the Treasury; and OPM. To obtain additional information on agency SSN use and efforts to reduce the unnecessary collection and use of SSNs, we administered a questionnaire to the 24 CFO Act agencies. After we drafted the questionnaire, we consulted with GAO survey methodologists to ensure the wording of our questions was objective. We also conducted pretests to ensure that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the questionnaire was comprehensive and unbiased. We chose to pretest the questionnaire with the Chief Privacy Officer at the Department of Energy and with GAO’s Records Officer because of their knowledge of SSN use and protection issues. We conducted the pretests in person and made changes to the content and format of the questionnaire after the pretests, based on the feedback we received. The finalized questionnaire used for this study is reprinted in appendix II. We sent the questionnaire to all 24 CFO Act agencies by e-mail in an attached PDF form that respondents could return electronically after marking checkboxes or entering responses into open answer boxes. Alternatively, respondents could return the questionnaire by mail after printing the form and completing it by hand. We sent the questionnaire with an e-mail on July 25, 2016. Two weeks later, we sent a reminder e-mail to each agency that had not responded.
We e-mailed or telephoned all respondents who had not returned the questionnaire after 3 weeks and reminded them to participate. All questionnaires were returned by August 22, 2016. Because this was not a sample questionnaire, it has no sampling errors. However, the practical difficulties of conducting any questionnaire may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the results. We took steps in developing the questionnaire and in collecting and analyzing the data to minimize such nonsampling errors. For example, survey specialists designed the questionnaire in collaboration with GAO staff who had subject matter expertise. Lastly, to identify specific examples of agency actions to reduce the collection, use, and display of SSNs, we obtained additional information from the six agencies or components that we selected for further review. We obtained and analyzed additional documentation from these agencies and held additional discussions with agency officials. We conducted this performance audit from April 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To obtain more detailed information on agency SSN use and efforts to reduce the unnecessary collection and use of SSNs, we administered a questionnaire to the 24 agencies that we selected for review. We sent the questionnaire on July 25, 2016, and received all responses by August 22, 2016. In the questionnaire, we asked the following questions:

1. Does your agency and/or contractors collect and use SSNs in any systems and programs?
   Yes (Continue to Question 2)
   No (STOP. Please return survey to GAO)

2. How many of your agency’s and contractors’ systems and programs collect and use SSNs?
   Number of Systems and Programs: # ________________

3. For which of the following reasons do your systems and programs collect and use SSNs? (Check all that apply)
   A. Federal Employment (hiring, pay, benefits)
   B. Government Benefits/Services (including, but not limited to: debt collection, entitlement programs or benefits, grant programs, healthcare, loans, and other services)
   C. Criminal Law Enforcement
   D. Statistical and other Research Purposes
   F. Other (please describe)

4. Does your agency and/or contractors collect and use SSNs from members of the public, contractors, or agency employees? (Check all that apply)

5. The Office of Management and Budget (OMB) Memorandum M-07-16 required agencies to develop a plan to reduce the unnecessary collection and use of SSNs. For this purpose, did your agency define what would constitute an unnecessary collection and use of SSNs?
   Yes. Please add your agency’s definition of unnecessary here:
   No.

6. In response to OMB M-07-16, did your agency develop a baseline inventory of agency and contractor systems and programs that collected SSNs as part of your initial plan/efforts to reduce the unnecessary collection and use of SSNs?
   Yes.
   No. If no, please explain why.

7. Does your agency have a current and complete inventory of agency and contractor systems and programs that collect and use SSNs?
   Yes. If yes, please provide the inventory to GAO in an EXCEL file format. Please include the name of each system and program and the approximate number of records in each, as of June 30, 2016.
   No. If no, please explain why.

8. Since the issuance of OMB M-07-16, has your agency conducted or participated in any committees, task forces, inter-agency committees, external groups or associations, or other governance groups whose purpose included the reduction of unnecessary collection and use of SSNs in governmental systems and programs?
   Yes – Please answer question 9
   No – Please continue to question 10
   Don’t know – Please continue to question 10

9. Please provide the following information regarding your participation in EACH group.
   External Group (private sector)
   a. When was the group formed? (MM/YYYY) ____________
   b. Is the group still in operation? No. If No, when did the group stop operating? (MM/YYYY)
   c. What is your level of participation in this group? Leadership Role (chair, co-chair)
   d. Briefly describe the purpose and major goals or initiatives of this group.

10. Since the issuance of OMB M-07-16, please describe the challenges, if any, your agency has faced in reducing the unnecessary collection and use of SSNs.

11. Does your agency have any suggestions or additional information that could be helpful to continued government efforts to reduce the unnecessary collection and use of SSNs?

In addition to the contact named above, John de Ferrari (Assistant Director), Andrew Beggs, Marisol Cruz, Quintin Dorsey, David Plocher, Priscilla Smith, and Shaunyce Wallace made key contributions to this report.

The federal government uses SSNs as unique identifiers for many purposes, including employment, taxation, law enforcement, and benefits. However, SSNs are also key pieces of identifying information that potentially may be used to perpetrate identity theft. GAO was asked to review federal government efforts to reduce the collection and use of SSNs.
This report examines (1) what governmentwide initiatives have been undertaken to assist agencies in eliminating their unnecessary use of SSNs and (2) the extent to which agencies have developed and executed plans to eliminate the unnecessary use and display of SSNs and have identified challenges associated with those efforts. To do so, GAO analyzed reports and guidance on protecting SSNs. GAO also analyzed SSN reduction plans and other documents, administered a questionnaire, and interviewed officials from the 24 CFO Act agencies. Governmentwide initiatives aimed at eliminating the unnecessary collection, use, and display of Social Security Numbers (SSN) have been underway in response to recommendations that the presidentially appointed Identity Theft Task Force made in 2007 to the Office of Personnel Management (OPM), the Office of Management and Budget (OMB), and the Social Security Administration (SSA). However, these initiatives have had limited success. In 2008, OPM proposed a regulation requiring the use of an alternate federal employee identifier but withdrew it in 2010 because no such identifier was available. OMB required agencies to develop SSN reduction plans and requires annual reporting on agency SSN reduction efforts. SSA developed an online clearinghouse of best practices for reducing SSN use; however, it is no longer available online. Based on responses to GAO's questionnaire, the 24 agencies covered by the Chief Financial Officers (CFO) Act use SSNs for various purposes (see figure). All 24 CFO Act agencies developed SSN reduction plans and reported taking actions to curtail the use and display of SSNs. For example, the Department of Defense replaced SSNs, which previously appeared on its identification cards, with new identification numbers. 
Nevertheless, the agencies cited impediments to further reductions, including (1) statutes and regulations mandating SSN collection, (2) use of SSNs in necessary interactions with other federal entities, and (3) technological constraints of agency systems and processes. Further, poor planning by agencies and ineffective monitoring by OMB have also limited efforts to reduce SSN use. In the absence of direction from OMB, many agencies' SSN reduction plans did not include key elements, such as time frames and performance indicators, calling into question their utility. In addition, OMB has not required agencies to maintain up-to-date inventories of their SSN holdings or provided criteria for determining “unnecessary use and display,” limiting agencies' ability to gauge progress. OMB also has not ensured that agencies update their progress in annual reports or established performance metrics to monitor agency efforts. Until OMB requires agencies to adopt better practices for managing their SSN reduction processes, overall governmentwide reduction efforts will likely remain limited and difficult to measure. GAO recommends that OMB require complete plans for ongoing reductions in the collection, use, and display of SSNs, require inventories of systems containing SSNs, provide criteria for determining “unnecessary” use and display, ensure agencies update their progress in annual reports, and monitor agency progress based on clearly defined performance measures. OMB did not comment on GAO's recommendations. We received written comments from SSA and technical comments from eight other agencies, which were incorporated into the final report as appropriate. The other 15 agencies did not provide comments.
Palau consists of 8 main islands and more than 250 smaller islands, with a total land area of roughly 190 square miles, located approximately 500 miles southeast of the Philippines. About 20,000 people live in Palau, concentrated largely in one urban center around the city of Koror, and more than one-quarter of the population is non-Palauan. Palau’s economy is heavily dependent on its tourism sector and on foreign aid from the United States, Japan, and Taiwan. As in many small island economies, public sector spending represents a significant percentage of Palau’s gross domestic product (GDP). U.S. relations with Palau began when American forces liberated the islands near the end of World War II. In 1947, the United Nations assigned the United States administering authority over the Trust Territory of the Pacific Islands, which included what are now the Federated States of Micronesia, the Republic of the Marshall Islands, the Commonwealth of the Northern Mariana Islands, and Palau. Palau adopted its own constitution in 1981. The U.S. and Palau governments concluded a Compact of Free Association in 1986; the compact entered into force on October 1, 1994. The Department of the Interior’s (Interior) Office of Insular Affairs (OIA) has primary responsibility for monitoring and coordinating all U.S. assistance to Palau, and the Department of State (State) is responsible for government-to-government relations. Key provisions of the compact and its subsidiary agreements address the sovereignty of Palau, types and amounts of U.S. assistance, security and defense authorities, and periodic reviews of compact terms. Table 1 summarizes key provisions of the Palau compact and related subsidiary agreements. In addition to the U.S. assistance provided under the compact, U.S. agencies—the Department of Education, the Department of Health and Human Services (HHS), and Interior, among others—provide discretionary federal programs in Palau as authorized by U.S.
legislation and with appropriations from Congress. (See app. II for a complete listing of these programs in Palau.) In our 2008 report, we projected that U.S. assistance to Palau from 1995 through 2009 would exceed $852 million. Of this total, economic assistance under the compact accounts for a projected 68 percent and discretionary federal programs account for a projected 31 percent (see fig. 1). The September 2010 Agreement between the U.S. and Palau governments would extend assistance to Palau to 2024 but steadily reduce the annual amount provided. The Agreement would also extend the authority and framework for U.S. agencies to continue compact federal services and discretionary federal programs. Key provisions of the Agreement would include, among others, extending direct economic assistance to Palau, providing infrastructure project grants and contributions to an infrastructure maintenance fund, establishing a fiscal consolidation fund, and making changes to the trust fund. U.S. assistance to Palau under the Agreement would total approximately $215 million through 2024. Legislation implementing the Agreement was not approved by Congress during 2011. The Department of the Interior provided $13.1 million for direct economic assistance in 2011 and again in 2012; however, funds were not provided in either year for infrastructure projects, the infrastructure maintenance fund, or the fiscal consolidation fund.

Direct economic assistance ($107.5 million). Under the Agreement, the U.S. government would provide direct economic assistance—budgetary support for Palau government operations and specific needs such as administration of justice and public safety, health, and education—amounting to $13 million in 2011 and declining to $2 million by 2023. The Agreement also calls for the U.S. and Palau governments to establish a five-member Advisory Group to provide annual recommendations and timelines for economic, financial, and management reforms.
The Advisory Group must report on Palau’s progress in implementing these or other reforms prior to annual U.S.-Palau economic consultations. These consultations are to review Palau’s progress in achieving reforms such as improvements in fiscal management, reductions in the public sector workforce and salaries, reduced government subsidization of utilities, and tax reform. If the U.S. government determines that Palau has not made significant progress in implementing meaningful reforms, direct assistance payments may be delayed until the U.S. government determines that Palau has made sufficient progress.

Infrastructure projects ($40 million). Under the Agreement, the U.S. government would provide infrastructure project grants to Palau for mutually agreed infrastructure projects—$8 million annually through 2013, $6 million in 2014, and $5 million in both 2015 and 2016. The Agreement requires Palau to provide a detailed project budget and certified scope of work for any project receiving these funds.

Infrastructure maintenance fund ($28 million). Under the Agreement, the U.S. government would make contributions to a fund to be used for maintenance of U.S.-financed major capital improvement projects, including the Compact Road and Airai International Airport. Through 2024, the U.S. government would contribute $2 million annually, and the Palau government would contribute $600,000 annually to the fund.

Fiscal consolidation fund ($10 million). Under the Agreement, the U.S. government would provide grants of $5 million each in 2011 and 2012 to help the Palau government reduce its debts. Unless agreed to in writing by the U.S. government, these grants cannot be used to pay any entity owned or controlled by a member of the government or his or her family, or any entity from which a member of the government derives income. U.S.
creditors must receive priority, and the government of Palau must report quarterly on the use of the grants until they are expended.

Trust fund ($30.25 million). Under the Agreement, the U.S. government would contribute $30.25 million to the fund from 2013 through 2023. The government of Palau will reduce its previously scheduled withdrawals from the fund by $89 million. From 2024 through 2044, Palau can withdraw up to $15 million annually, as originally scheduled. Moneys from the trust fund account cannot be spent on state block grants, operations of the office of the President of Palau, the Olibiil Era Kelulau (Palau National Congress), or the Palau Judiciary. Palau must use $15 million of the combined total of the trust fund disbursements and direct economic assistance exclusively for education, health, and the administration of justice and public safety.

Annual U.S. assistance to Palau under the Agreement would decline from roughly $28 million in 2011 to $2 million in 2024. Figure 2 details the timeline and composition of assistance outlined in the Agreement. The Agreement would extend the authority for the provision of compact federal services and discretionary programs in Palau.

Federal services. The Agreement would amend the compact’s subsidiary agreements regarding federal services. Specifically, the Agreement amends the terms of postal, weather, and aviation services to Palau.

Federal discretionary programs. The Agreement would extend the framework for U.S. agencies to provide discretionary federal programs to Palau, with implementation of the programs contingent on annual appropriations to those agencies.

The addition of $30.25 million in U.S. contributions and the delay of $89 million in Palau withdrawals through 2023, as provided by the Agreement, would improve the fund’s prospects for sustaining scheduled payments through 2044. At the end of June 2012, the fund had a balance of approximately $163 million.
The trust fund would need a 5.0 percent annual return to yield the proposed withdrawals from 2011 through 2044 under the Agreement. This rate is well below the 7.9 percent return that the fund earned from its inception to June 30, 2012. Figure 3 shows projected trust fund balances in 2012 through 2044 under the Agreement, with varying rates of return. The additional contributions and reduced withdrawals scheduled in the Agreement would also make the trust fund a more reliable source of revenue under conditions of market volatility. With these changes, the trust fund would have an approximately 90 percent probability of sustaining payments through 2044. In comparison, the fund has a 40 percent probability of sustaining the $15 million annual withdrawals scheduled under the compact through 2044. Figure 4 compares the probability that the trust fund will sustain the proposed withdrawals under the terms outlined in the Agreement with the probability that the trust fund will sustain the withdrawals scheduled under the compact.

Estimates prepared for the government of Palau project that Palau’s reliance on U.S. assistance provided under the Agreement will decline, while its reliance on trust fund withdrawals and domestic revenue will increase. These estimates show U.S. assistance, as provided under the Agreement, declining from 28 percent of government revenue in 2011 to less than 2 percent of government revenue in 2024. The estimates also show Palau’s trust fund withdrawals growing from 5 percent of government revenue in 2011 to 12 percent in 2024. In addition, the estimates indicate that Palau’s domestic revenue will rise from 40 percent of all government revenue in 2011 to 59 percent in 2024. Finally, the estimates prepared for Palau project a relatively steady reliance on U.S. discretionary federal programs, ranging from 12 percent of all government revenue in 2011 to 14 percent in 2024.
The estimates assume that discretionary federal programs will grow at the rate of inflation; however, discretionary programs are subject to annual appropriations and may not increase over time. Figure 5 shows the types and amounts of Palau’s estimated revenues for 2011 and 2024. The estimates prepared for the government of Palau project that U.S. assistance to Palau from 2011 through 2024, including discretionary federal programs, will total approximately $427 million. The estimates further project that discretionary programs will account for nearly half of U.S. assistance through 2024, with assistance amounts specified in the Agreement accounting for the other half. (See fig. 6.) In contrast, in 2008, we estimated discretionary program funding accounted for less than one-third of total U.S. assistance to Palau from 1995 through 2009. Legislation has been introduced in both the Senate and the House that would approve and implement the September 2010 agreement between the U.S. and Palau governments. In February 2011, a bill was introduced in the Senate that would implement the Agreement, as written. The Senate bill would authorize and appropriate funds to Interior for specified assistance. The Senate bill would also extend the authority, and authorize appropriations, for the provision of compact federal services in Palau. However, the proposed legislation does not appropriate funds for compact federal services. As of September 2012, the Senate has not acted on this bill.
In June 2012, a bill was introduced in the House that would approve and implement the Agreement, with some modifications. Specifically, the pending House bill (1) shifts the timing of the provision of some specified Agreement assistance to account for the fact that fiscal year 2011 has passed; (2) extends the full faith and credit provision of the compact to the U.S. commitments of assistance under the Agreement for direct economic assistance, the trust fund, the infrastructure maintenance fund, the fiscal consolidation fund, and infrastructure projects; (3) applies an inflation adjustment to the Agreement assistance for direct economic assistance and infrastructure project grants, and to payments to the trust fund, infrastructure maintenance fund, and fiscal consolidation fund; and (4) extends a pledge of the full faith and credit of the United States for the full payment of the amounts necessary to conduct the audits of the assistance provided, as called for under the Agreement. In addition, the Senate and House bills implementing the Agreement would amend the sections of the Agreement that extend the authority for the provision of compact federal services and discretionary programs in Palau. The proposed Senate and House legislation would authorize annual appropriations for weather and aviation services. The proposed Senate and House legislation would extend the eligibility of the people, government, and institutions of Palau for certain discretionary programs, including special education and Pell grants. However, the proposed bills differ in how they would authorize appropriations to subsidize postal service to Palau, the Republic of the Marshall Islands, and the Federated States of Micronesia. The Senate legislation would have authorized appropriations of $1.5 million to Interior for 2011 through 2024, to subsidize postal services provided by the U.S. Postal Service. The proposed House legislation would authorize appropriations of $1.5 million to Interior beginning in 2012 and through 2024, to subsidize postal services. Under the proposed House bill, Interior would be authorized to transfer these funds to the U.S. Postal Service under the condition that domestic postage may be used for mail to these countries. Chairman Fleming, Ranking Member Sablan, and Members of the Subcommittee, this completes my prepared statement.
I would be happy to respond to any questions you may have at this time. For further information about this statement, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Emil Friberg (Assistant Director), Ming Chen, David Dayton, Brian Hackney, Reid Lowe, Grace Lui, and Valérie L. Nowak made key contributions to this statement. Robert Alarapon, Benjamin Bolitzer, Rhonda Horried, Farahnaaz Khakoo, Jeremy Sebest, Cynthia Taylor, and Anu Mittal provided technical assistance. Table 2 shows the assistance provided to Palau under the compact from 1995 through 2009. Table 3 shows the proposed assistance to Palau for 2011 through 2024, as outlined in the Agreement. Table 4 lists discretionary U.S. federal program funds expended by the Palau national government, the Palau Community College, and the Palau Community Action Agency, as reported in the organizations’ single audit reports for 2009.
The Compact of Free Association between the United States and the Republic of Palau, which entered into force in 1994, provided for several types of assistance aimed at promoting Palau’s self-sufficiency and economic advancement. Included were 15 years of direct assistance to the Palau government; contributions to a trust fund meant to provide Palau $15 million each year in fiscal years 2010 through 2044; construction of a road system, known as the Compact Road; and federal services such as postal, weather, and aviation. U.S. agencies also provided discretionary federal programs related to health, education, and infrastructure. In 2008, GAO projected that total assistance in fiscal years 1994 through 2009 would exceed $852 million. In September 2010, the United States and Palau signed an agreement (the Agreement) that would, among other things, provide for additional assistance to Palau beginning in fiscal year 2011 and modify its trust fund.
Currently, there are two bills pending before Congress to implement the Agreement. In this testimony, GAO updates a November 2011 testimony on (1) the Agreement’s provisions for economic assistance to Palau, (2) its impact on the trust fund’s likelihood of sustaining scheduled payments through fiscal year 2044, (3) the projected role of U.S. assistance in Palau government revenues, and (4) the pending legislation to implement the Agreement. GAO reviewed current trust fund data and new pending legislation for this testimony. The Agreement would provide decreasing assistance totaling approximately $215 million through fiscal year 2024, including the following: direct economic assistance ($107.5 million) for Palau government operations; infrastructure project grants ($40 million) to build mutually agreed projects; an infrastructure maintenance fund ($28 million) for maintaining the Compact Road, Palau’s primary airport, and certain other major U.S.-funded projects; a fiscal consolidation fund ($10 million) to assist Palau in debt reduction; and trust fund contributions ($30.25 million) in addition to the $70 million contributed under the compact. Under the Agreement, the United States would contribute to the trust fund in fiscal years 2013 through 2023, and Palau would reduce its withdrawals by $89 million in fiscal years 2010 through 2023. GAO projects that the fund would have a 90 percent likelihood of sustaining payments through fiscal year 2044 with these changes, versus 40 percent without these changes. Estimates prepared for the Palau government project declining reliance on U.S. assistance under the Agreement, from 28 percent of government revenue in fiscal year 2011 to 2 percent in fiscal year 2024, and growing reliance on trust fund withdrawals and domestic revenues. The estimates show trust fund withdrawals rising from 5 percent to 24 percent, and domestic revenues rising from 40 to 59 percent, of total government revenue. According to the estimates, U.S.
assistance in fiscal years 2011 through 2024 would total $427 million, with discretionary federal programs accounting for about half of that amount. Congress has not approved legislation to implement the Agreement as of September 2012. Pending Senate legislation would implement the Agreement and appropriate funds to do so. Pending House legislation would implement the Agreement, apply an inflation adjustment to assistance payments, and shift the timing of certain assistance payments to reflect the fact that 2011 has passed.
Regulations generally start with an act of Congress and are the means by which statutes are implemented and specific requirements are established. The statutory basis for a regulation can vary in terms of its specificity, from (1) very broad grants of authority that state only the general intent of the legislation and leave agencies with a great deal of discretion as to how that intent should be implemented to (2) very specific requirements delineating exactly what regulatory agencies should do and how they should do it. For example, the Agricultural Adjustment Act provides a broad grant of authority to the Secretary of Agriculture, stating only that agricultural marketing should be “orderly” but providing little guidance regarding which crops should have marketing orders or how to apportion the market among growers. On the other hand, the Department of Transportation (DOT) has concluded that it has no discretion in setting the average fuel economy standards (known as the “Corporate Average Fuel Economy” or “CAFE” standards) for light trucks. DOT’s 1998 appropriations act stated that “(n)one of the funds in this Act shall be available to prepare, propose, or promulgate any regulations (prescribing CAFE standards for automobiles) . . . in any model year that differs from standards promulgated for such automobiles prior to the enactment of this section.” At the time this appropriations act was enacted, DOT was preparing the CAFE standard for model year 2000. Therefore, DOT concluded that it was required to keep the same light truck CAFE standard for model year 2000 that applied to model year 1999—20.7 miles per gallon. “As the number and complexity of federal statutory programs has increased over the last fifty years, Congress has come to depend more and more upon Executive Branch agencies to fill out the details of the programs it enacts . . . . 
As more and more of Congress’ legislative functions have been delegated to federal regulatory agencies, many have complained that Congress has effectively abdicated its constitutional role as the national legislature in allowing federal agencies so much latitude in implementing and interpreting congressional enactments . . . .” “Because Congress is often unable to anticipate the numerous situations to which the laws it passes must apply, Executive Branch agencies sometimes develop regulatory schemes at odds with congressional expectations . . . . Rules can be surprisingly different from the expectations of Congress or the public. Congressional review gives the public the opportunity to call the attention of politically accountable, elected officials to concerns about new agency rules.” Similar concerns about agencies’ regulatory actions have led Congress to establish analytical requirements that agencies must comply with during the rulemaking process. For example, the Regulatory Flexibility Act of 1980, as amended, requires agencies to analyze the anticipated effects of rules they plan to propose on small entities unless they certify that the rules will not have a “significant economic impact on a substantial number of small entities.” Title II of the Unfunded Mandates Reform Act of 1995 requires federal agencies (other than independent regulatory agencies) to prepare written statements for certain rules. Those written statements must, among other things, contain a qualitative and quantitative assessment of the anticipated costs and benefits of the rules. Various executive orders have imposed similar analytical requirements on federal agencies. (See Regulatory Reform: Major Rules Submitted for Congressional Review During the First 2 Years, GAO/GGD-98-102R, Apr. 24, 1998.)
“ignores the fact that the key decisions occur when Congress writes an Occupational Safety and Health Act or an amendment to the Food, Drug, and Cosmetics Act or any other important regulatory law, usually with hundreds of pages of detailed specifications. . . . The way those statutes are written frequently precludes the agencies from even considering the most cost-effective approaches.” Therefore, the Committee for Economic Development (CED) concluded that the traditional focus of regulatory reform should be shifted from regulatory agencies to Congress. CED recommended, among other things, that each congressional committee be required, when writing a regulatory statute, to articulate the expected benefits and costs of the regulatory program in the report accompanying the legislation. It also recommended that Congress eliminate provisions in existing statutes that prevent or limit regulatory agencies from considering costs or comparing expected benefits with costs. Several of our recent reports and testimonies have raised the issue of whether regulatory burden was based on the underlying statutes. As noted previously, in our 1996 reports on which this review is based, the agencies responding to some of the companies’ concerns said that the specific requirements that the businesses mentioned were statutorily driven. We noted in our November 1996 report that we did not review the regulations and statutes that the agencies cited to determine whether the underlying statutes required the regulatory provisions that were of concern to the companies. However, we said that if the statutes do not require those regulatory provisions, the agencies have a responsibility to address those concerns on their own and not shift the responsibility to Congress. We also said that if Congress believes an agency’s regulation is inconsistent with the intent of the underlying statute, Congress could amend the statute to reflect current congressional intent and, in effect, require the agency to amend its regulation.
In three reviews of agencies’ implementation of the Paperwork Reduction Act of 1995, we reported that agencies believed the paperwork burden associated with their regulations had increased since the act was passed because of congressionally imposed requirements. As a result of such requirements, we said that some agencies believed that they were limited in the extent to which they could reduce their paperwork burden. For example, the Internal Revenue Service (IRS) said it could not reach the burden reduction goals established in the Paperwork Reduction Act under the current statutory framework and still carry out its mission. We noted that we had not assessed the extent to which the paperwork burden agencies impose is directly a consequence of statutory requirements and, therefore, is out of the agencies’ control. However, we also noted that if agencies’ paperwork requirements are truly statutorily mandated, those agencies may not be able to reduce their burden-hour estimates by the amounts envisioned in the 1995 act without changes in the legislation underlying those requirements. In our 1997 review of four agencies’ efforts to eliminate or revise pages in the Code of Federal Regulations (CFR), we found that two of the four agencies had added more pages to the CFR than they deleted. Agency officials said that statutory requirements imposed by Congress often drive CFR page additions, and they provided several examples of those statutory requirements. However, we did not examine those statutes to determine the extent to which they required the CFR page additions. This review focuses on a subset of the 125 regulatory concerns that companies cited in our 1996 reports—the concerns that federal agencies indicated were, at least in part, attributable to the statutes underlying the relevant regulatory provisions.
Our objectives were to determine, for each such concern, (1) the amount of discretion the underlying statutes gave the agencies in developing the regulatory requirements, (2) whether the regulatory requirements at issue were within the authority granted by the underlying statutes, and (3) whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while still meeting the underlying statutory requirements. Appendix I provides a detailed discussion of our scope and methodology. In brief, we identified the 27 company concerns that we focused on in this review by (1) subdividing some concerns in our December 1996 report to facilitate the analysis; and (2) eliminating some of the concerns that were too broad or that focused only on federal statutes, not agencies’ regulatory requirements. For example, in one concern company officials said that the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was expensive and exposed the company to unforeseen liability, but the officials did not cite any EPA regulations in their concern. The 27 concerns were raised by officials from 10 of the 15 companies we visited during the preparation of our 1996 reports, 7 of whom asked that we use generic descriptors such as “Bank A” or “a paper company” to identify them. A total of 11 federal departments and agencies issued the regulations underlying the 27 company concerns at issue in the report. 
To address our first objective we reviewed the statutory provisions underlying each of the concerns and coded the level of discretion that we believed those provisions permitted the agencies in developing the specific regulatory requirements at issue in the concerns into one of three categories— “no discretion,” “some discretion,” or “broad discretion.” We coded statutory provisions as allowing rulemaking agencies “no discretion” if they delineated specific actions that regulated entities or the agencies themselves must take and did not allow the agencies to develop their own regulatory requirements. We coded statutory provisions as allowing the agencies “some discretion” if they delineated certain requirements that had to be included in the regulations but gave the agencies at least some discretion regarding other requirements (e.g., the timing or frequency of a reporting requirement). We coded statutory provisions as allowing the agencies “broad discretion” if they contained few specific requirements or imposed few to no constraints on what the agencies had to include in their regulations. To address our second objective, we compared the relevant statutory and regulatory provisions for each concern and decided whether we believed the regulatory requirements at issue in the concerns were within the authority granted by the underlying statutes. We coded the regulatory provisions as being within the authority granted by the statutes if (1) the statutory provision gave the agency no discretion in how the regulations could be developed and the regulatory provision strictly adhered to the statutory requirements; or (2) the statutory provision gave the agency some or broad discretion, and the regulatory provision was consistent with the requirements or the limitations in the statute. 
To address our third objective we examined our answers to the previous objectives and decided whether we believed the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while still meeting the underlying statutory requirements. If the underlying statutes gave an agency no rulemaking discretion and the agency adopted regulations that strictly adhered to the statutory requirements, we concluded that the agency could not have developed a less burdensome regulatory approach. If the underlying statutes gave an agency some or broad rulemaking discretion, we were not able to determine if the agencies could have developed a less burdensome approach. To do this we would have needed detailed information on how the agencies’ regulations were being implemented and how alternative approaches would be perceived by the regulated entities in order to determine whether a less burdensome approach was available. As is discussed in Appendices III and IV, that information was not readily available. Because this review is based on a subset of the company concerns and agency responses originally presented in our 1996 reports, the results of our analysis are not generalizable to other companies, other regulatory issues, or even to all of the original 125 regulatory concerns. However, as we pointed out in our November 1996 report, the companies’ comments were similar in many respects to comments made by companies in some of our previous reports and in the literature. Therefore, we believe that the companies’ comments, the agencies’ responses, and our analysis of the related regulations and statutes are not atypical and can provide some insights regarding the broader issues addressed in this report. 
This report reflects the views of selected companies and regulatory agencies gathered during our earlier effort but does not reflect the views of other individuals and organizations that may be affected by the regulations at issue (e.g., labor unions or potential beneficiaries). We did not attempt to determine whether the companies’ or the agencies’ views were correct with regard to issues that were outside the scope of this review (e.g., whether any of the agencies’ actions were, in fact, “burdensome”). Although we approached this review systematically, our conclusions are ultimately matters of judgment, not determinations that have a legally binding effect on the agencies issuing the rules or the regulated community. The report focuses primarily on the amount of discretion that the relevant statutes gave rulemaking agencies in developing the regulatory requirements at issue in the companies’ concerns. However, the report does not address the amount of discretion that the agencies had in writing regulations outside of the specific issues raised by the companies. Agencies may have broad discretion in how regulations can be developed within a general area, but little or no discretion with regard to particular issues within those areas. Also, the report does not address enforcement issues. As our 1996 reports indicated, agencies may have considerable discretion in carrying out their enforcement authority, and the use of that discretion can significantly affect the burden felt by regulated entities. For example, several companies expressed concerns about rigid and inflexible regulations and about certain regulators’ “gotcha” enforcement approach. In response to those concerns, the agencies sometimes indicated that they reduced penalties in response to good faith efforts to comply, were not “aggressively” enforcing certain technical requirements, or were changing their enforcement approaches.
We initially gathered the company concerns and agency responses between June 1994 and September 1996. We conducted our work for this review, in accordance with generally accepted government auditing standards, between February and October 1998 in the Washington, D.C., headquarters offices of each of the 11 departments and agencies that issued the regulations. During the preparation of this report, the agencies responsible for the regulations related to the company concerns reviewed and commented on our observations regarding each applicable concern. The agencies often offered suggestions regarding how the statutes and regulations should be characterized in the report, and we incorporated those suggestions where appropriate. The agencies ultimately concurred with our analysis in all 27 concerns. At the end of our review we sent a draft of this report for comment to the Director of the Office of Management and Budget (OMB). Executive Order 12866 states that OMB’s Office of Information and Regulatory Affairs (OIRA) is “the repository of expertise concerning regulatory issues, including methodologies and procedures that affect more than one agency . . . .” OIRA is also responsible for reviewing significant regulations before their publication as proposed and final rules and for approving agencies’ information collection requests under the Paperwork Reduction Act. On December 10, 1998, we met with the Acting Administrator of OIRA, who said he had no comments on the report. As shown in figure 1, we concluded that the statutory provisions underlying 13 of the 27 company concerns that we reviewed provided the agencies with no discretion in how the relevant regulatory provisions could be developed. We concluded that the statutory provisions underlying 12 of the remaining 14 concerns permitted the agencies some discretion in establishing regulatory requirements, and the provisions related to 2 concerns allowed the agencies broad rulemaking discretion.
Table 1 shows the number of concerns at each level of discretion for each agency issuing the related regulations. For 13 of the 27 company concerns in this report, we concluded that the relevant statutory provisions allowed the agencies no discretion in how the related regulations could be developed. As discussed previously, we considered statutory provisions as allowing rulemaking agencies “no discretion” if they delineated the specific actions that regulated entities or the agencies themselves must take and did not allow the agencies to develop their own regulatory requirements. The following examples illustrate the types of statutory provisions that we concluded did not allow agencies any discretion in developing the relevant regulations. These and other examples of statutory provisions that did not appear to allow the agencies rulemaking discretion are discussed more fully in appendix II. Officials from Multiplex Company, Inc. said that the premiums paid to the Pension Benefit Guaranty Corporation (PBGC) to guarantee their employees’ pensions were costly for the company, having risen from $2.60 per participant in 1982 to $19.00 per participant in 1994. PBGC officials said the agency’s insurance premiums are statutorily established in Section 4006 of the Employee Retirement Income Security Act (ERISA). We concluded that ERISA (codified at 29 U.S.C. 1001 et seq.) gave PBGC no discretion in setting the pension insurance rates at issue in this concern. Under the statute, the $19.00 rate is the minimum amount businesses with single-employer plans must pay for basic benefits. Multiplex Company, Inc. officials also said that IRS-required “nondiscrimination tests” for 401(k) thrift savings plans were of questionable value after IRS lowered the amount of money that could be contributed to the plans, thereby making it less likely that higher income employees would dominate the plan.
IRS said that both the test, known as the actual deferral percentage test, and the limit on the amount that could be contributed to a 401(k) plan were required by statute. We examined the relevant statutory provisions and concluded that IRS had no discretion in how the regulations could be developed. The deferral percentage test and the deferral limit were both specifically established by statute. A subsequently enacted statute that reduced the amount that could be contributed to 401(k) plans did not eliminate the requirement that companies perform this test. An official from Bank A said that a Federal Reserve Board (FRB) regulation on the availability of funds and the collection of checks (Regulation CC) requires information that is time-consuming for banks to develop. Officials from FRB said that the Expedited Funds Availability Act requires depository institutions to provide written copies of their funds availability policies to their customers. We examined the act and concluded that it gave the agency no discretion in how its regulations could be written. The statute specifically requires depository institutions to provide their customers with preprinted slips describing their policies regarding the amount of time between when a deposit is made into a customer’s account and when funds can be withdrawn from that deposit. A Metro Machine Corporation official said that EPA regulators establish unrealistic requirements that are not attainable with current treatment technology. For example, the official said that federal water quality standards require that water the company discharges be made cleaner than rainwater. In its response to this concern, EPA said that under the Clean Water Act, it could not consider available treatment technologies or the cost of treatment in the development of water quality criteria for a particular designated use.
We agreed that EPA had no discretion under the act regarding the role that cost or treatment technologies can play in establishing federal water quality criteria. For 12 of the 27 company concerns, we concluded that the underlying statutory provisions gave the agencies some discretion in how the associated regulations could be developed. As discussed previously, we considered rulemaking agencies to have “some discretion” if the statutory provisions delineated certain requirements that had to be included in the regulations but allowed the agencies at least some flexibility regarding other requirements. The following examples illustrate the types of statutory provisions that we concluded allowed agencies some discretion in developing the relevant regulations. These and other examples of statutory provisions that allowed some rulemaking discretion are discussed more fully in appendix III. An official from Bank A said that provisions of Regulation DD under the Truth in Savings Act require the bank to disclose certain information to its customers in a single document. The official said that the bank had been disclosing this information in a variety of brochures, but had to revise the brochures to disclose this information in one document to comply with Regulation DD. In response to this concern, officials at FRB said that the Truth in Savings Act required all depository institutions to disclose information about the rates paid and fees charged in a uniform manner. We concluded that the Truth in Savings Act gave FRB some discretion in how it could establish what became Regulation DD. Although the act gave FRB no discretion regarding the disclosures that must be required in Regulation DD, the act gave FRB discretion to determine how these disclosures should be made to bank customers. Officials from the paper company said DOT regulations that required hazardous materials (“hazmat”) training and testing cost the company $475,000 each year.
According to DOT, the Hazardous Materials Transportation Uniform Safety Act specifically required the issuance of regulations requiring employers to provide hazmat training to certain employees. We examined the training requirements in the act and concluded that the act gave DOT no rulemaking discretion in some areas and some discretion in other areas. For example, the act said the Secretary of Transportation “shall prescribe by regulation requirements for training that a hazmat employer must give hazmat employees of the employer on the safe loading, unloading, handling, storing, and transporting of hazardous material.” The act also required the regulations to establish the date by which the training shall be completed and to require employers to certify that their hazmat employees have received training and been tested on at least one of nine specific areas of responsibility that are delineated in the statute. However, the statute also said that DOT’s regulations “may provide for different training for different classes or categories of hazardous material and hazmat employees.” Because the statute gave DOT the flexibility to tailor its regulatory requirements for hazmat training to different classes of hazmat materials and employees, we concluded that DOT had some rulemaking discretion. Officials from the fish farm said that pesticide manufacturers were either not renewing the aquatic use of certain pesticides or were not seeking EPA approval of the products for use in aquaculture because of the expense associated with the testing requirements in EPA’s reregistration program. EPA officials said that the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) requires EPA to certify that all pesticides meet current testing standards for safety. We concluded that these FIFRA provisions gave EPA some discretion regarding the requirements that manufacturers must satisfy in the pesticide reregistration process. 
Section 4 of FIFRA specifically states that the Administrator of EPA must reregister “each registered pesticide containing any active ingredient contained in any pesticide first registered before November 1, 1984,” and it prescribes in detail the approach EPA is to use to reregister pesticides. However, the statute gives the EPA administrator discretion in establishing the data requirements that would be needed to support the reregistration of the pesticides. These requirements can have a direct impact on the expense incurred by manufacturers in the reregistration process. We concluded that the statutory provisions underlying 2 of the 27 concerns gave the agencies broad discretion in how regulatory provisions could be developed. As noted previously, we coded statutory provisions as allowing rulemaking agencies “broad discretion” if the provisions contained few specific requirements or imposed few to no constraints on what had to be included in agencies’ regulations. In the first of the two concerns, Bank A officials said EEOC’s record retention standards were inconsistent with the way EEOC pursued cases. In response to the Bank’s concern, EEOC officials said that its record retention requirements were tied to the filing periods in each of the civil rights statutes. For example, EEOC officials said that because an employee could file a discrimination suit under the Equal Pay Act within either 2 or 3 years of the alleged discrimination, EEOC requires that related records be kept for 2 or 3 years. EEOC officials also said that under all of the statutes, when a claim of discrimination is pending, the employer must keep all relevant personnel records until final disposition of the charge or action. 
We concluded that the statutory provisions underlying EEOC’s record retention standards gave EEOC broad discretion in developing the standards because those provisions (1) do not specify how long employers must retain records and (2) give EEOC broad authority to establish retention periods. For example, the Equal Pay Act states that every employer must preserve records for such periods of time as the Administrator of EEOC “shall prescribe by regulation or order as necessary or appropriate for the enforcement of the provisions of this chapter or the regulations or orders thereunder.” Title VII of the Civil Rights Act of 1964, as amended, requires every employer to make such reports from its personnel records “as the Commission shall prescribe by regulation or order . . . .” Therefore, we concluded that EEOC had broad discretion in establishing record retention requirements. In the second of the two concerns, Bank B officials said that some banking regulations gave “nonbanks” (e.g., investment brokerage firms) an unfair competitive edge in the marketplace. For example, the officials said that one regulation required banks (but not investment firms) to disclose the risks associated with certain investment products. In their 1996 response to this concern, OCC officials said that the examples of competitive inequality cited by the bank officials “are due to the fact that banks and nonbanks operate under different statutory schemes.” During this review they explained that under these statutes, banks are subject to a different regulatory scheme than nonbanks because they are federally insured. Therefore, they said it is appropriate for banking agencies to adopt additional disclosure requirements that address the unique features of the banking industry. 
In 1994, acting on their own initiative under their general statutory authority to issue rules and regulations, OCC and the other banking agencies issued an interagency policy statement requiring the disclosures that Bank B found burdensome. Because (1) the statutes give OCC and the other banking agencies authority to take whatever actions they believe are necessary to remedy or prevent unsafe and unsound banking practices (see 12 U.S.C. 1818), and (2) the disclosures required in the policy statement appear related to that end, we believe that OCC had broad discretion to issue the policy statement requiring the disclosures at issue in this concern. Appendix IV contains our detailed analysis of the statutory and regulatory provisions relating to both of the concerns for which we concluded the agencies had broad rulemaking discretion. Our second objective was to determine whether the regulatory requirements at issue in each of the 27 company concerns were within the authority granted by the underlying statutes. We concluded that the regulatory provisions related to all of the concerns were within the authority granted by those statutes. For the 13 concerns in which we concluded the underlying statutes gave the agencies no rulemaking discretion, the language in the agencies’ regulations either mirrored the language in the statutes or was substantively consistent with the statutory requirement. Therefore, we concluded that the regulations were within the authority granted by the statutes. For example, in relation to a concern from Zaclon, Inc., regarding a permit application under the Resource Conservation and Recovery Act (RCRA), we compared the relevant RCRA statutory provisions with EPA’s regulations and found that the language in the regulations mirrored the language in the statute. The RCRA provisions (codified at 42 U.S.C.
6925(a)) required the EPA Administrator to “promulgate regulations requiring each person owning or operating an existing facility or planning to construct a new facility for the treatment, storage, or disposal of hazardous waste . . . to have a permit issued pursuant to this section.” EPA’s RCRA regulations (40 C.F.R. 270.1(c)) directly quote the statute’s requirement that a permit is needed for the “treatment,” “storage,” and “disposal” of any “hazardous waste” and go on to require companies to obtain such permits. Because the regulatory provisions reflected the specific statutory requirements, we concluded that those provisions were within the authority granted by the statutes. We reached a similar conclusion with regard to a concern from Multiplex Company, Inc., involving what it referred to as IRS’ “nondiscrimination tests” for companies’ 401(k) thrift savings plans. We compared the nondiscrimination test provisions in the tax code with IRS’ regulations and concluded that the regulations essentially mirror the statutory provisions and add some explanatory language. The statute (codified at 26 U.S.C. 401(k)(3)(A)(ii)) specifically requires the test and establishes specific dollar amounts for deferral limits and detailed procedures that companies must follow. For example, the statute says that the “actual deferral percentage for the group of eligible highly compensated employees is not more than the actual deferral percentage of all other eligible employees multiplied by 1.25.” The related IRS regulations (26 C.F.R. 1.401(k)-1(b) and 1.402(g)-1) repeat these statutory requirements word for word. We concluded that the regulations underlying the 12 concerns for which the agencies had some rulemaking discretion were within the authority granted by the related statutes because they (1) contained the elements required by those statutes and/or (2) did not exceed the authority granted or limits imposed by those statutes.
For example, we concluded that the Expedited Funds Availability Act allowed FRB some discretion in developing the regulation (Regulation CC) that established the periods during which banks could hold funds before making them accessible to depositors. Although the act gave FRB no discretion regarding the maximum number of days banks could hold particular types of deposits, it allowed FRB to establish hold periods that were less than those maximums or to standardize those time periods. Regulation CC established hold periods that were consistent with the maximum periods specified in the statute. Therefore, we concluded that the regulation was within the authority granted by the statute. For the two concerns in which we concluded that the underlying statutes gave the agencies broad rulemaking discretion, the statutes contained language that allowed agencies to develop the rules they believed were necessary to carry out their statutory missions. We viewed regulations that agencies developed to carry out their statutory responsibilities as being within the authority of those statutes. For example, we concluded that EEOC had broad discretion under the various civil rights statutes to impose record retention requirements. Therefore, we also concluded that EEOC’s practice of establishing requirements closely related to the filing periods of each statute was within the authority granted by those statutes. Appendixes II, III, and IV describe our analyses of the relevant regulations for all of the concerns that we categorized as allowing no discretion, some discretion, and broad discretion, respectively. Our third objective was to determine whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while still meeting the underlying statutory requirements. We concluded that in relation to 13 of the 27 concerns, the agencies could not have developed less burdensome regulatory approaches. 
For the remaining 14 concerns, we could not determine whether less burdensome regulatory approaches were available to the agencies without substantial additional information about how the current approaches were being implemented or how alternative approaches would be perceived by regulated entities. We believe that an agency cannot develop regulatory requirements that are less burdensome to a regulated entity if (1) the statute underlying the regulation gives the agency no discretion regarding how regulatory provisions can be developed, and (2) the agency develops regulations that are consistent with (and sometimes mirror images of) the statutory requirements. Because 13 of the 27 concerns met these criteria, we concluded that the agencies involved in the concerns could not have developed less burdensome regulatory provisions. For example, in one of these concerns, officials from a paper company said that EPA regulations under title V of the Clean Air Act were problematic because they regulated extremely low levels of emissions. We concluded that under title V, EPA had no discretion regarding the development of regulations on the emissions levels that trigger the permitting requirements because the statute specifically requires any “major source” of hazardous air pollutants to obtain a title V permit and defines a major source as any source that emits 10 tons or more a year of any hazardous air pollutant or 25 tons or more per year of a combination of pollutants. EPA’s regulations implementing title V are similar to the statutory language and specifically refer to the definition of “major source” in the United States Code. Therefore, we concluded that there was no less burdensome regulatory approach that the agency could have selected that would have met the requirements of the statute. (See app. II for a discussion of all of these concerns.) 
For the remaining 14 concerns in which we concluded the underlying statutes gave the agencies some or broad discretion, we could not determine whether a less burdensome regulatory approach was available. To make such a determination in each of these cases, we would have had to do an in-depth review of how the current regulations were being implemented at each agency or how alternative approaches would be viewed by regulated entities. For example, in one of the concerns, a Bank A official complained about the time and effort required to complete call reports that summarize bank operations. We concluded that the various statutes that require or authorize the banking agencies to collect information through the call reports gave the agencies some discretion in drafting the relevant regulatory provisions. However, we could not determine whether the banking agencies could have developed less burdensome requirements without conducting a detailed review of each of the nonstatutory data elements in the call reports and their consistency with the requirements in the statute. This type of detailed analysis would have required significant time and resource commitments that were beyond the scope of this review. (See apps. III and IV for a discussion of all of these concerns.) For 2 of these 14 concerns, the agencies appeared to have discretion to develop alternative regulatory approaches that may have addressed an aspect of the companies’ original concerns. However, we also concluded that the regulated entities might not have viewed those alternatives as less burdensome than the approach that the agencies took. For example, in one of the concerns a Bank A official said that Regulation CC required the development and maintenance of expensive and time-consuming information about the current availability of funds.
In response, FRB officials indicated that Regulation CC’s requirements were based on the Expedited Funds Availability Act, which establishes different minimum hold periods for different types of deposits (e.g., deposits of local versus nonlocal checks). They said that to ensure compliance with this act, banks must have a system for tracking those deposits. We examined the act and concluded that it allowed FRB some discretion to establish hold periods for various types of deposits. For example, the act said that the hold period for nonlocal checks could not be more than 5 days, but it allowed FRB to establish hold periods that were less than the maximum period. However, FRB’s Regulation CC established a 5-day hold period for nonlocal checks. To reduce Bank A’s burden of having to track holds on different types of deposits, FRB could have established a standard hold period in Regulation CC for all types of deposits—e.g., 1 day for all types of deposits—that was still consistent with the statutory requirements. However, it is unclear whether banks would welcome a standard 1-day hold requirement because it would reduce the amount of time available to the banks to determine whether sufficient funds existed to cover all categories of checks. In the other concern, we concluded that EEOC’s practice of establishing personnel record retention requirements related to the length of the filing periods of the particular civil rights statutes was within the broad rulemaking authorities granted by those statutes. However, EEOC could also have used its discretion to establish uniform record retention requirements (e.g., 5 or 10 years) for all of the statutes instead of the variable periods for the different statutes. 
Although this approach could have helped eliminate what the company viewed as an inconsistency between the requirements and the way EEOC pursues cases, it is not clear whether regulated entities would view a record retention requirement that is longer than the current requirement as being less burdensome. Our review focused on a limited set of issues. It did not attempt to assess the amount of discretion that federal agencies had in enforcing the requirements at issue in the companies’ concerns or whether those requirements were, in fact, burdensome. The review focused on 27 regulatory concerns from 10 companies that the agencies issuing the regulations indicated were based on the underlying statutes. Therefore, the results of our review cannot be viewed as being representative of all regulatory concerns, all regulations or statutes, or even all of the concerns that the companies mentioned during our initial 1996 study. In fact, it is important to remember that for about three-fourths of the companies’ original 125 concerns, the responding agencies did not indicate that the concerns were based on the statutory requirements underlying their regulations. On the other hand, although our review focused on 27 regulatory concerns that the agencies said were, at least in part, statutorily based, the companies in our 1996 study mentioned 6 other concerns that centered on the statutes themselves, not the regulations. For example, officials from one company said that compliance with the Comprehensive Environmental Response, Compensation, and Liability Act (not EPA’s CERCLA regulations) was expensive and exposed the company to unforeseen liability. These statute-directed concerns suggest that the companies understood the degree to which their problems were traceable to the statutes. Also, the comments that the companies made during our 1996 study were similar in many respects to comments made by companies in some of our previous reports and in the literature. 
Therefore, we believe that the companies’ comments are not atypical, and our analysis of the regulations and statutes underlying those concerns can offer some insights into how regulatory concerns arise and how they can best be addressed. For about half of the concerns that we reviewed, we concluded that the statutory provisions underlying the regulations that companies perceived as problematic gave the agencies no discretion in how they could develop those regulations. Some of the statutory provisions specifically delineated the actions regulated entities had to take and therefore limited rulemaking agencies’ discretion regarding what their regulations could require. As a result, the agencies often mirrored the language of the statutes in their regulations. We therefore concluded that the agencies’ regulations were within the authority granted by the underlying statutes and represented the least burdensome option permitted by those statutes. Nevertheless, during our 1996 review the companies told us that the requirements underlying these concerns were burdensome. The statutes underlying other company concerns gave the agencies some or broad discretion in developing associated regulatory provisions. In these cases the agencies appeared to have developed regulations that were within the authority granted by, and the limitations imposed by, the statutes. However, we could not determine whether the agencies could have developed less burdensome regulatory alternatives with regard to these concerns because to do so would have required detailed information about how the current requirements were being implemented and/or how alternative regulatory approaches would be perceived by regulated entities. In two of these cases, we concluded that the agencies could have developed alternative regulatory requirements that may have addressed some aspects of the companies’ concerns.
However, even in those cases, the regulated entities may not have perceived these alternative actions as less burdensome than the actions the agencies took. Different perspectives exist regarding the amount of discretion that Congress should give agencies to establish regulatory requirements. Some observers believe that giving agencies broad discretion to develop regulations represents an abrogation of Congress’ legislative responsibilities and is an open invitation for agencies to impose burdensome requirements on the public. They contend that Congress should closely direct agencies’ regulatory efforts through narrowly defined statutory requirements. However, other observers believe that some statutory requirements may be to blame for certain types of regulatory burden. In those cases in which Congress has specifically required certain actions or limited agencies’ rulemaking discretion, the agencies are precluded from considering the most cost-effective approaches. Our review indicated that regardless of how much or how little rulemaking discretion is permitted in the underlying statutes, the associated regulations can still be regarded as burdensome by regulated entities. For 13 of the 27 company concerns that we examined, Congress gave the regulatory agencies no discretion in how the relevant regulatory provisions could be developed. Although the statutes specifically delineated the requirements that should be imposed, the companies considered those requirements to be burdensome. In the statutes underlying the other 14 concerns, Congress gave the regulatory agencies some or broad rulemaking discretion. Although the agencies’ regulatory requirements were within the authority granted by the relevant statutes, the companies again viewed the requirements as burdensome. Also, it is unclear whether alternative regulations could be developed that would be perceived as less burdensome. 
Efforts to reduce regulatory burden and reform the regulatory process are often based on the belief that agencies’ rulemaking actions must be carefully limited. Several of the executive and legislative branch regulatory reform efforts during the past 20 years have directed federal agencies to conduct cost-benefit or regulatory flexibility analyses for certain regulations to ensure that those rules impose as little burden as possible on the regulated public. When the statutes directing or authorizing agencies to develop regulations give those agencies discretion as to the regulatory approach that they can take and the particular requirements that can be imposed, analytical requirements imposed on the agencies (e.g., cost-benefit analysis and regulatory flexibility analysis) can help ensure that they consider all available regulatory options and select the least burdensome option. However, when the statutes underlying those regulations give agencies no discretion in how their regulations can be developed, analytical requirements imposed on the agencies are unlikely to have much direct effect on the regulatory burden that those agencies impose. Agencies cannot adopt regulatory alternatives that are outside the boundaries permitted in the underlying statutes. If a statute underlying a regulation is the source of a company’s regulatory concern, that concern can be addressed only by changes in the statute. Similarly, if Congress disapproves of a regulation pursuant to its authority under SBREFA because of requirements that are based on the underlying statute, sending the regulation back to the issuing agency for further consideration will not resolve the issue. If a statute established the conditions that Congress finds objectionable, only Congress can address the problem by changing that statute. Nevertheless, analytical requirements imposed on agencies can serve a useful purpose even when the underlying statutes give the agencies no rulemaking discretion. 
For example, cost-benefit analysis can highlight the potential advantages of alternative regulatory approaches not permitted in the underlying statutes, perhaps leading to eventual changes in those statutes and thereby alleviating at least some of the burden felt by the regulated entities. We are sending copies of this report to the Ranking Minority Member of the House Judiciary Committee’s Subcommittee on Commercial and Administrative Law; the Director of OMB; the Secretaries of Health and Human Services, HUD, Labor, DOT, and the Treasury; the Comptroller of the Currency; the Administrator of EPA; and the heads of EEOC, FDIC, FRB, and PBGC. We will also make copies available to others on request. Major contributors to this report are listed in appendix V. Please contact me at (202) 512-8676 if you or your staff have any questions concerning this report. This review focuses on a subset of the 125 regulatory concerns that companies cited in our 1996 reports—the concerns that federal agencies indicated were, at least in part, based on the statutes underlying the relevant regulatory provisions. Our objectives were to determine, for each such concern, (1) the amount of discretion the underlying statutes gave the agencies in developing the regulatory requirements that the agencies had said were attributable to the underlying statutes, (2) whether the regulatory requirements at issue were within the authority granted by the underlying statutes, and (3) whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while still meeting the underlying statutory requirements. In 1996, the agencies indicated that 31 of the 125 company concerns were, at least in part, statutorily based. However, we eliminated eight of those concerns from this review because the companies were not expressing concerns about federal agencies’ regulatory requirements.
Two of the eight concerns were very broad, asserting that “frequent changes to the tax code are costly” and that doing business in multiple states was difficult because of differences in state laws. The other six concerns involved particular federal statutes but did not focus on agencies’ regulatory requirements. For example, in one of the six concerns company officials said that compliance with the requirements in the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) was expensive and exposed the company to unforeseen liability. However, the officials did not cite any particular Environmental Protection Agency (EPA) regulations in their concern. Another of the six concerns focused on the potential liability that officials from one company said its managers faced with regard to certain environmental standards. In its response to that concern, EPA indicated that criminal penalties for the violation in question were established in a particular statute rather than in EPA’s regulations. We eliminated this concern from our review because EPA is not responsible for enforcing those provisions in criminal law, and the issues in the concern were not associated with EPA regulations. We eliminated another company’s concern from this review because the agency that had issued the underlying regulations no longer contended that the concern was statutorily based. In its 1996 response to a concern that one company described as EPA’s “antidegradation policy,” EPA said that the company was actually referring to the agency’s “antibacksliding” requirements that were statutorily mandated by the Clean Water Act. However, an EPA official told us during this review that (1) the company concern was, in fact, about antidegradation; and (2) the policy was adopted by the State of Ohio, not EPA, and was not based on federal environmental statutes. We therefore eliminated this concern from our review.
We subdivided 4 of the remaining 22 concerns into 9 separate concerns in order to facilitate our analysis. For example, one such concern involved three separate provisions of Regulation DD, which was issued by the Board of Governors of the Federal Reserve System (FRB) to implement provisions of the Truth in Savings Act. By dividing this concern into three separate concerns, we were able to assess each of the provisions individually. One of the other concerns that we subdivided focused on what one company viewed as a disparity in federal regulatory requirements between banks and nonbanks (e.g., an investment brokerage firm) regarding (1) flood insurance and (2) public disclosure requirements. We subdivided this concern into two concerns in order to address the flood insurance and public disclosure requirements separately. After eliminating some company concerns and separating others into multiple parts, what remained were 27 concerns about federal regulations that the agencies indicated were, at least in part, based on the underlying statutes. These 27 concerns were raised by officials from 10 of the 15 companies we visited during the preparation of our 1996 reports. In 1996, many of the companies asked that their identities not be disclosed during our discussions with regulators or in our reports. As a result, we used generic descriptors in the 1996 reports to identify those companies. We maintained the same policy in this report, using generic descriptors for 7 of the 10 companies and identifying the remaining 3 companies by name. Table I.1 shows the name or generic descriptor and the number of concerns analyzed in this report for each of the 10 companies. A total of 11 federal departments and agencies issued the regulations underlying the 27 company concerns at issue in this report. Table I.2 shows the number of those concerns that were applicable to each of the 11 departments or agencies.
To address our first objective regarding the amount of discretion the underlying statutes gave the agencies in developing regulatory requirements, we first had to identify the regulatory provisions at issue in the company concerns and the underlying statutory requirements for those provisions. For most of the concerns, either the company or the responding agency provided relevant statutory and/or regulatory citations. However, for other concerns we had only limited information and had to contact the relevant agencies for additional details. For example, in one concern, Bank A said it was frustrating to spend the time and resources needed to comply with so many bank reporting requirements but did not cite any specific relevant regulations or statutes. In its response to this concern, FDIC said that some bank reporting requirements were mandated by statute, but it did not provide any examples of those requirements to support its statement. During this review, we asked FDIC to identify the specific regulatory reporting requirements that it considered to be statutorily mandated and to provide the relevant statutory citations. We then reviewed the statutory provisions underlying each of the company concerns and coded the level of discretion that we believed those provisions gave the agencies in developing the specific regulatory requirements at issue, using one of three categories—“no discretion,” “some discretion,” or “broad discretion.” We coded statutory provisions as permitting “no discretion” if they delineated specific actions that regulated entities or the agencies themselves must take and allowed the agencies no latitude in developing the regulatory requirements at issue in the concern. For example, using a hypothetical illustration unrelated to any of the concerns in this report, assume that a company raised a concern about what it viewed as a burdensome recordkeeping requirement that EPA imposed regarding its recycling efforts.
If a statutory provision required companies with 100 or more employees to provide recycling information to EPA on January 30 of each year delineating, for the previous calendar year and for each company work site, (1) the specific materials that were recycled, (2) the manner of recycling, and (3) the costs associated with their recycling efforts, we would have coded the provision as allowing EPA no rulemaking discretion. We coded statutory provisions as allowing rulemaking agencies “some discretion” if they delineated certain requirements that had to be included in the agencies’ regulations but gave the agencies at least some discretion regarding other requirements. For example, in the above illustration, if the statute gave EPA discretion regarding the timing or the frequency with which recycling information had to be provided by the companies, but EPA still had no discretion regarding the content of the reporting requirement, we would have coded the statutory provision as allowing some rulemaking discretion. We coded statutory provisions as allowing the rulemaking agencies “broad discretion” if the provisions contained few specific requirements or imposed few to no constraints on what the agencies had to include in their regulations. In the hypothetical recycling example, if the statutory provision only required EPA to periodically report to Congress on businesses’ recycling efforts, we would have coded the provision as allowing EPA broad rulemaking discretion. In this scenario, EPA could unilaterally decide what information to collect, from which businesses to collect the information, and the timing and frequency of companies’ reporting requirements. To address our second objective, we compared the relevant statutory and regulatory provisions for each concern and decided whether we believed the regulatory requirements at issue in the concerns were within the authority granted by the underlying statutes. 
We coded the regulatory provisions as being within the authority granted by the statutes if (1) the statutory provisions gave the agency no discretion in how the regulations could be developed and the regulatory provision strictly adhered to the statutory requirements; or (2) the statutory provisions gave the agency some or broad discretion, and the regulatory language was consistent with the requirements or the limitations in the statutes. For example, if the relevant statutory provision in the above recycling illustration allowed EPA to establish whatever reporting requirements it “deemed necessary” to determine the status of companies’ recycling efforts, we would have considered almost any regulatory reporting requirements that EPA established as being within the authority granted by the statute. However, if the statutory provision said EPA could collect information from companies no more than twice annually but the regulation established quarterly reporting requirements, we would have considered the regulatory requirements outside of the authority granted by the statute. Our third objective was to determine whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while still meeting the underlying statutory requirements. We considered agencies to have been unable to develop less burdensome regulatory approaches if the underlying statutes gave the agencies no rulemaking discretion and the agencies adopted regulations that strictly adhered to the statutory requirements. If the underlying statutes gave the agencies some or broad rulemaking discretion, we were not able to determine if the agencies could have developed a less burdensome approach. 
To do so, we would have needed detailed information on how the agencies’ regulations were being implemented and how alternative approaches would be perceived by the regulated entities in order to determine whether a less burdensome approach was available. As discussed in Appendices III and IV, that information was not readily available. Because this review is based on a subset of the company concerns and agency responses originally presented in our 1996 reports, several of the limitations discussed in those reports are also applicable to this report. As we noted in our November 1996 report, the companies from whom we initially gathered the concerns were generally those that (1) were identified by interest groups, by officials from the Small Business Administration, or in the literature; and (2) were willing to participate in our review. Therefore, neither the companies’ concerns nor the results of our analysis are generalizable to other companies or to other regulatory issues. The results of this analysis are not even generalizable to all of the original 125 regulatory concerns because this review focuses only on the subset of the concerns and related regulations that the agencies indicated were, at least in part, statutorily based. However, as we pointed out in our November 1996 report, the companies’ comments were similar in many respects to comments made by companies in some of our previous reports and in the literature. Therefore, we believe that the companies’ comments, the agencies’ responses, and our analysis of the related regulations and statutes are not atypical and can provide some insights regarding the broader issues addressed in this report. In preparing both of the 1996 reports and during this review, we did not collect information from individuals and organizations outside of the companies and federal agencies responsible for the regulatory issues mentioned by the companies.
For example, we did not obtain information from labor unions or other employee organizations about the regulations the companies mentioned. Neither did we collect information from individuals and organizations that were the potential beneficiaries of the regulations cited by the companies as being problematic. Collecting the views of all such organizations for all the regulations and statutes cited in the 1996 reports and this report would have been very time consuming, if not impossible. Therefore, as was the case in the 1996 reports, this report does not reflect the full range of opinions that may exist regarding the issues raised during the reviews. However, this report reflects the views of the two stakeholder groups in which we were most interested—the elements of the regulated community that raised these concerns and the agencies that issued the underlying regulations. Our approach in the 1996 reports was to present the views of both the businesses and the agencies without attempting to resolve the many differences in perspectives and interpretation that arose between the two groups. We followed the same approach in this review, and we did not attempt to determine whether the companies’ or the agencies’ views were correct with regard to issues that were outside of the scope of this review. For example, one company said that certain IRS-required tests were of questionable value to the agency in determining whether thrift savings plans were being fairly administered. We focused our analysis on whether the tests were (as IRS contended) statutorily required, not on whether they were of value to IRS. Also, we did not attempt to determine whether any of the agencies’ actions were, in fact, “burdensome.” The report focuses primarily on the amount of discretion that the relevant statutes gave rulemaking agencies in developing the regulatory requirements at issue in the companies’ concerns. 
However, the report does not address the amount of discretion that the agencies had in writing regulations outside of the specific issues raised by the companies. Agencies may have broad discretion in how regulations can be developed within a general area, but little or no discretion with regard to particular issues within those areas. Also, the report does not address enforcement issues. As our 1996 reports indicated, agencies may have considerable discretion in carrying out their enforcement authority, and the use of that discretion can significantly affect the burden felt by regulated entities. For example, several companies expressed concerns about rigid and inflexible regulations and about certain regulators’ “gotcha” enforcement approach. In response to those concerns, the agencies sometimes indicated that they reduced penalties in response to good faith efforts to comply, were not “aggressively” enforcing certain technical requirements, or were changing their enforcement approaches. We approached our review objectives systematically. First, we developed a coding scheme for each objective to ensure consistency of analysis. Multiple staff members then analyzed the issues related to each concern, reviewed the statutory and regulatory requirements, and agreed on how each concern should be coded. However, determining how much discretion a statute gives a rulemaking agency, whether a regulation is within the authority granted by the underlying statute, and whether less burdensome regulatory approaches could have been developed are ultimately matters of judgment. Therefore, our conclusions should be viewed in that light, not as determinations that have a legally binding effect on the agencies issuing the rules or the regulated community. We initially gathered the company concerns and agency responses between June 1994 and September 1996. In this review, we analyzed the statutory and regulatory provisions as they existed between 1994 and 1996.
If a statutory or regulatory provision changed after this period, we noted those changes in this report. We conducted our work between February and October 1998 in the Washington, D.C., headquarters offices of each of the previously identified agencies in accordance with generally accepted government auditing standards. One of the objectives of our review was to determine, for each of 27 company concerns, the amount of discretion the underlying statutes gave rulemaking agencies in drafting the regulatory requirements that the agencies said were attributable to the underlying statutes. The agencies that issued those requirements indicated in two of our 1996 reports that the concerns could, at least in part, be traced to statutory requirements underlying their regulations. In this review we concluded that the statutory provisions underlying 13 of the 27 concerns gave the rulemaking agencies no discretion in how the related regulatory requirements could be drafted. We coded statutory provisions as allowing agencies “no discretion” if they delineated specific actions that regulated entities or the agencies themselves must take and did not allow the agencies to develop their own regulatory requirements. This appendix provides our detailed analysis of each of these 13 company concerns. 
Specifically, for each such concern it provides the following information: (1) the portion of the concern in our 1996 reports that the agency or agencies indicated was statutorily based, (2) the portion of the agency response in our 1996 reports that indicated the concern was statutorily based, (3) our analysis of the amount of rulemaking discretion the relevant statutory provisions gave the agencies (the first objective of our review), (4) our analysis of whether the regulatory requirements at issue in the concern were within the authority granted by the underlying statutes (the second objective of our review), (5) our analysis of whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while accomplishing the underlying statutory objectives (the third objective of our review), and (6) the main purpose of the underlying statutes (where such purpose statements were available). Appendix I of this report contains a detailed discussion of our scope and methodology. A Metro Machine Corporation official said that EPA regulators establish regulations that are not relevant to the industry and establish unrealistic requirements that are not attainable or verifiable with current treatment technology and measurement systems. For example, the official said that federal water quality standards require that the water the company discharges be made cleaner than rainwater. The official also said that up to 90 percent of pollution reduction generally can be achieved with reasonable costs, but the last 10 percent of pollution reduction is very difficult or costly (sometimes up to double the cost) because the needed technology is either not available or very expensive. EPA officials noted that Metro Machine Corporation is located in Virginia and said that the State of Virginia establishes water quality standards for state waters. 
They also said that the State of Virginia is authorized to administer the National Pollutant Discharge Elimination System (NPDES) program related to this concern. Under the standard-setting process, EPA officials said that states initially establish the “designated use” or water quality goal for individual bodies of water to protect aquatic life and human health. Once states make those designations, they typically adopt EPA-developed water quality criteria to support the designated use. EPA officials said the Clean Water Act stipulates that EPA cannot consider available treatment technologies or the cost of treatment in the development of water quality criteria. EPA officials also noted that, in certain cases, air pollution carried to earth by rainwater may cause surface water to be harmful to aquatic life and/or human health. Because Virginia’s water quality criteria are designed to protect aquatic life and human health, the criteria may indeed require discharges to be cleaner than polluted rainwater in certain instances. However, Virginia has the option of providing economic relief in its water quality standards, where justified by the State and approved by EPA, through modification of its goals for a water body or by providing a water quality-based variance for specific discharges. The issue that we focused on in this concern is EPA’s assertion that it cannot consider cost or available treatment technologies when it establishes water quality criteria under the Clean Water Act. Although the State of Virginia had discretion in establishing the designated use for the body of water at issue in the concern, we believe EPA had no discretion to consider cost or available treatment technologies in developing water quality criteria pursuant to the Clean Water Act (codified at 33 U.S.C. Chapter 26). Under the statute (33 U.S.C.
1313(c)(2)), water quality standards consist of designated uses for the body of water involved (e.g., public water supplies or recreation) and water quality criteria. Water quality criteria provide technical information on the effects of pollution on water quality and frequently identify what maximum safe concentrations of pollutants would be to protect particular designated uses. The statute (33 U.S.C. 1314 (a)(1)) also says that the EPA Administrator must develop and publish criteria for water quality “accurately reflecting the latest scientific knowledge (A) on the kind and extent of all identifiable effects on health and welfare . . . ; (B) on the concentration and dispersal of pollutants, or their byproducts . . . ; and (C) on the effects of pollutants on biological community diversity, productivity, and stability . . . .” The statute also requires the Administrator to develop and publish information “on the factors necessary to restore and maintain the chemical, physical, and biological integrity” of water. The Clean Water Act sets forth EPA’s responsibilities and the factors that it must consider in the development of water quality criteria. Because the consideration of costs and available treatment technologies are not among those factors, we do not believe that EPA could consider costs or technology limits in developing water quality criteria pursuant to the act. We believe that EPA’s regulatory provisions delineating the factors that states should consider in establishing water quality standards (codified at 40 C.F.R. Part 131) are within the authority granted by the Clean Water Act. According to those regulations (40 C.F.R. 131.10(a)), in establishing such standards, states must "take into consideration the use and value of water for public water supplies, protection and propagation of fish, shellfish, and wildlife, recreation in and on the water, agricultural, industrial, and other purposes including navigation." 
Subsection 131.10(b) of the regulation also says, "the State shall take into consideration the water quality standards of downstream waters" and shall ensure that the water quality standards that will be established provide for the attainment and maintenance of the standards for the downstream waters. Also, 40 C.F.R. 131.11 (a)(1) says that states must adopt water quality standards that protect the designated use and “must be based on sound scientific rationale and must contain sufficient parameters or constituents to protect the designated use.” Because these regulatory requirements essentially mirror or are logically related to the requirements in the Clean Water Act regarding the establishment of water quality standards, we believe the requirements are within the authority granted by the Clean Water Act. We do not believe that EPA could have developed less burdensome water quality criteria by taking cost or treatment technology into account and still meet the requirements of the Clean Water Act. The regulatory requirements regarding the establishment of water quality standards either mirrored the statutory provisions or were logically related to those provisions. According to 33 U.S.C. 1251(a), the purpose of the Clean Water Act is to restore and maintain the chemical, physical, and biological integrity of the nation's waters. Zaclon, Inc. officials said the company was appealing a fine for failure to respond on time to an EPA letter asking them for information related to the Resource Conservation and Recovery Act (RCRA). They said EPA fined them without any follow-up or other communication regarding the original request. The officials also said they were disturbed that the fine was imposed on them because of a procedural matter (failing to file information) rather than something that had a real environmental impact. EPA officials said the agency sent Zaclon, Inc. 
a certified letter, which the company acknowledged receiving, notifying the company of its responsibility to either file a RCRA permit application for a hazardous waste pile at a facility that the company had acquired, or submit a demonstration of equivalency indicating that the waste pile had been “clean closed.” EPA officials said that the agency initially proposed assessing a penalty against the company of approximately $81,000. However, after discussions with the company, EPA later reduced the penalty to $37,600. EPA officials said the obligation to either obtain the permit or demonstrate that the waste pile has been “clean closed” is not a “procedural matter.” Rather, they said, it is a substantive requirement to ensure that hazardous waste management units are designed and operated to prevent releases of hazardous waste. The officials also said that under RCRA, companies have a positive obligation to comply even if EPA does not issue any reminders of their responsibility. The issues that we focused on in this concern are EPA’s assertions that RCRA requires the company to obtain a hazardous waste permit and to comply with the statutory requirement in the absence of a notice from EPA. We believe that RCRA gave EPA no discretion in how it could draft its regulations requiring a hazardous waste permit. The statute (42 U.S.C. 6925(a)) says that the EPA Administrator must promulgate regulations requiring each person owning or operating an existing facility or planning to construct a new facility for the treatment, storage, or disposal of hazardous waste to have a permit. It also states that the treatment, storage, or disposal of any such hazardous waste and the construction of any new facility for the treatment, storage, or disposal of hazardous waste is prohibited except in accordance with such a permit.
Therefore, EPA had no discretion in drafting its regulations about requiring a permit for those facilities in existence or under construction that treat, store, or dispose of hazardous waste. Also, the statute does not indicate that EPA is required to notify companies of their responsibility to obtain a RCRA permit. We believe that EPA’s regulations requiring a RCRA permit are within the authority granted the agency by the statute. The regulations (40 C.F.R. 270.1(c)) require companies to obtain a RCRA permit for the treatment, storage, or disposal of hazardous wastes identified or listed in 40 C.F.R. 261. The regulation also says that owners and operators of hazardous waste management units must have permits during the active lives of the units, including the closure period. Because these regulatory provisions closely follow the statutory language in 42 U.S.C. 6925(a), we believe that EPA’s regulations are within the authority granted by the statute. We do not believe that EPA could have developed a less burdensome regulatory approach for its RCRA permit process while still meeting the underlying statutory requirements. RCRA gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and those requirements closely followed the requirements in the statute. RCRA does not contain a statement of purpose. Officials from the paper company said that regulations under Title V of the Clean Air Act (CAA) are problematic because they regulate extremely low levels of emissions. They said that they are required to get a title V permit for methanol emissions that, at the company's fence line, are no more concentrated than the methanol in a person's breath. According to EPA, the emission levels that trigger Title V coverage are specified in CAA, ranging from 10 to 100 tons of emissions per year depending on the pollutant and/or the location of the emissions' sources. 
Companies capable of emissions above these levels are called "major" sources under the act, triggering title V permitting requirements. For hazardous air pollutants, EPA said that title V coverage is triggered by annual emissions of 10 tons of a given pollutant or 25 tons or more of a combination of pollutants. EPA also said that although specific information about the company was not provided, a typical paper mill emits about 600 tons per year of hazardous air pollutants other than methanol, including approximately 20 of the 189 hazardous air pollutants listed in CAA. The issue that we focused on in this concern is EPA’s assertion that Title V of CAA establishes the level of emissions of hazardous air pollutants that subjects a company to permit requirements. We believe that CAA (codified at 42 U.S.C. 7401 et seq.) gave EPA no discretion in developing its regulations regarding the emissions levels that trigger title V permitting requirements (codified at 42 U.S.C. 7661-7661f) when those emissions are above a certain level. The act requires any “major source” of hazardous air pollutants to obtain a title V permit, and defines a major source in 42 U.S.C. 7412(a)(1) as “any stationary source or group of stationary sources located within a contiguous area and under common control that emits or has the potential to emit considering controls, in the aggregate, 10 tons per year or more of any hazardous air pollutant or 25 tons per year or more of any combination of hazardous air pollutants.” Methanol is specifically listed in 42 U.S.C. 7412(b)(1) as a hazardous air pollutant, so a company would have to obtain a title V permit if it emitted 10 tons of methanol per year or more. However, a company could also be required to obtain a permit if it emitted no methanol but emitted 10 tons of any other hazardous air pollutant or 25 or more tons of any combination of covered pollutants. 
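The emissions thresholds described above amount to a simple numeric test. The sketch below is illustrative only; the pollutant names and tonnages are hypothetical, and it covers only the hazardous air pollutant thresholds in 42 U.S.C. 7412(a)(1).

```python
# Illustrative sketch (not part of the report): the "major source"
# threshold test for hazardous air pollutants described above.

def is_major_source(annual_emissions_tons):
    """Return True if a source is 'major' for hazardous air pollutants.

    annual_emissions_tons: dict mapping pollutant name -> tons per year.
    """
    # 10 tons per year or more of any single hazardous air pollutant...
    if any(tons >= 10 for tons in annual_emissions_tons.values()):
        return True
    # ...or 25 tons per year or more of any combination of pollutants.
    return sum(annual_emissions_tons.values()) >= 25

# Three pollutants at 9 tons each: no single pollutant crosses the
# 10-ton threshold, but the 27-ton combination does, so the source is
# still "major" and needs a title V permit.
print(is_major_source({"methanol": 9, "toluene": 9, "xylene": 9}))  # True
print(is_major_source({"methanol": 9}))  # False
```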
We believe that EPA’s regulatory provisions regarding the emissions levels that trigger the title V permitting requirements are within the authority granted by CAA. The regulation (40 C.F.R. 70.2) defines a “major source” that is required to have a permit by specifically referencing the statutory definition of the term in 42 U.S.C. 7412(a)(1). By using the same definition of a major source, EPA’s regulations are consistent with CAA’s requirements regarding the emissions levels that trigger title V permit requirements, and therefore they are within the authority granted by the statute. We do not believe that EPA could have developed a less burdensome regulatory approach while still meeting the underlying requirements of CAA. The act gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and those requirements were consistent with the requirements in the statute. According to 42 U.S.C. 7401(b), the purposes of CAA are “(1) to protect and enhance the quality of the Nation’s air resources so as to promote the public health and welfare and the productive capacity of its population; (2) to initiate and accelerate a national research and development program to achieve the prevention and control of air pollution; (3) to provide technical and financial assistance to State and local governments in connection with the development and execution of their air pollution prevention and control programs; and (4) to encourage and assist the development and operation of regional air pollution prevention and control programs.” Fish farm officials said IRS rules on how to account for the capital costs of company construction projects done by the firm's employees are complex and costly. They said prior to a 1986 change in the tax code, indirect costs (e.g., telephone costs associated with the construction project) could be treated as a business expense and therefore could be deducted from that year's taxes.
After 1986, IRS required that indirect costs be included as a capital expense; therefore, they could be deducted only over a long period of time. They said because of this change, the company's deductions decreased and taxable income increased, and they had to pay higher taxes. IRS officials said the requirement to capitalize indirect costs allocable to the production of self-constructed assets was established by statute rather than by IRS regulations. They said Congress enacted the uniform capitalization rules as a part of the Tax Reform Act of 1986 for two reasons. First, Congress wanted to provide a series of uniform rules of capitalization for construction contractors, manufacturers, and taxpayers that produce property for their own use. Second, Congress believed that allowing the immediate deduction of indirect costs (1) resulted in a mismatch of costs and the income produced by those expenses, (2) permitted an unwarranted deferral of federal income tax, and (3) resulted in differences in the tax treatment of costs between purchased and self- constructed assets. IRS officials said Congress clearly intended that 26 U.S.C. 263A would result in a decrease in the taxpayer's current deductions and a corresponding increase in taxable income. The issue that we focused on in this concern is IRS’ assertion that the requirement that taxpayers capitalize indirect costs of construction projects was established by statute. We believe that the tax code gave IRS no discretion as to how it could write its regulations with regard to the capitalization of indirect costs. According to 26 U.S.C. 263A(a), any “allocable costs” (defined as a property’s direct costs and a property’s “proper share” of indirect costs that are allocable to the property) must be capitalized. However, if the property “is inventory in the hands of the taxpayer,” the statute says that those costs must be included in inventory costs. 
We do not believe that IRS could have developed a less burdensome regulatory approach that would have met the requirements of the underlying statute. The tax code gave IRS no discretion in how it could draft the regulatory requirements at issue in this concern, and IRS’ regulations were consistent with (and specifically referenced) the statutory requirements. This section of the tax code does not contain a statement of purpose. Officials from Multiplex Company, Inc., said that the IRS-required nondiscrimination tests for 401(k) thrift savings plans are of questionable value because IRS lowered the amount of money that can be contributed to the plans, thereby making it less likely that higher income employees will dominate the plans. IRS officials said that the “IRS-required nondiscrimination test” that Multiplex Company, Inc. officials mentioned appears to refer to the actual deferral percentage test, which is required by section 401(k)(3) of the Internal Revenue Code. Similarly, they said that the limit on deferrals under a 401(k) plan was imposed by section 402(g) of the Internal Revenue Code. Therefore, they said it is incorrect to claim that the “IRS lowered the amount of money that can be contributed.” The issues that we focused on in this concern are IRS’ assertions that the “nondiscrimination tests” used to determine the actual deferral percentage for highly compensated employees and the amount of money that can be contributed to 401(k) plans are established by statute. We believe that the tax code gave IRS no discretion in these areas. The statute (26 U.S.C. 401(k)(3)(A)(ii)) provides that a plan satisfies the nondiscrimination requirement only if one of the following tests is met: “(I) The actual deferral percentage for the group of eligible highly compensated employees is not more than the actual deferral percentage of all other eligible employees multiplied by 1.25.
“(II) The excess of the actual deferral percentage for the group of eligible highly compensated employees over that of all other eligible employees is not more than 2 percentage points, and the actual deferral percentage for the group of eligible highly compensated employees is not more than the actual deferral percentage of all other eligible employees multiplied by 2.” With regard to the amount that can be contributed and deferred each year, 26 U.S.C. 402(g)(1) states that “the elective deferrals of any individual for any taxable year shall be included in such individual's gross income to the extent the amount of such deferrals for the taxable year exceeds $7,000.” Also, 26 U.S.C. 402(g)(5) states that “[t]he Secretary shall adjust the $7,000 amount under paragraph (1) at the same time and in the same manner as under section 415(d); except that any increase under this paragraph which is not a multiple of $500 shall be rounded to the next lowest multiple of $500.” We believe that IRS’ regulations are within the authority granted by the statute. With regard to the nondiscrimination tests, the regulation provides that one of the following must be met: “(A) [t]he actual deferral percentage for the group of eligible highly compensated employees is not more than the actual deferral percentage for the group of all other eligible employees multiplied by 1.25; or (B) [t]he excess of the actual deferral percentage for the group of eligible highly compensated employees over the actual deferral percentage for the group of all other eligible employees is not more than two percentage points, and the actual deferral percentage for the group of eligible highly compensated employees is not more than the actual deferral percentage for the group of all other eligible employees multiplied by two.” The regulation is also similar to the statute with regard to the limits on the amount that can be contributed to the plans. For example, 26 C.F.R. 1.402(g)-1(d) states that “[t]he applicable limit for an individual's taxable year beginning in the 1987 calendar year is $7,000.
This amount is increased for the taxable year beginning in 1988 and subsequent calendar years in the same manner as the $90,000 amount is adjusted under section 415(d).” We do not believe that IRS could have developed a less burdensome regulatory approach that would have satisfied the underlying statutory requirements. The statute gave IRS no discretion in drafting the regulatory requirements at issue in this concern, and its regulations essentially mirror the language in the statute. This section of the tax code does not contain a statement of purpose. Multiplex Company, Inc. officials said that the increased premiums paid to PBGC to guarantee their employees’ pensions are costly for the company (over $2,600 in 1994). They said the mandated premium per participant increased from $2.60 in 1982 to $19.00 in 1994. PBGC officials said that the insurance premiums the agency charges are statutorily established in Section 4006 of the Employee Retirement Income Security Act (ERISA). The issue that we focused on in this concern is PBGC’s assertion that the increase in pension insurance premiums that Multiplex mentioned was statutorily driven. We believe that ERISA (codified at 29 U.S.C. 1001 et seq.) gave PBGC no discretion to set pension insurance premium rates below $19 per participant in 1994. The statute establishes specific premium rates for certain types of employer plans. For example, 29 U.S.C. 1306(a)(3)(A) sets the annual premium rate payable to PBGC in the case of a single-employer plan for basic benefits for plan years beginning after December 31, 1990, at “an amount equal to the sum of $19 plus the additional premium (if any) determined under subparagraph (E) for each individual who is a participant in such plan during the plan year.” The statute allows PBGC to raise the premium rate for particular plans under certain circumstances. However, the $19 rate is the minimum amount businesses with single-employer plans must pay for basic benefits.
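The flat-rate premium computation at issue is simple arithmetic. The following is a minimal sketch; the participant count is hypothetical, and the variable-rate premium under subparagraph (E) is omitted for simplicity.

```python
# Illustrative sketch of the flat-rate premium arithmetic described
# above (single-employer plan, basic benefits, plan years after
# Dec. 31, 1990). The participant count below is hypothetical, and the
# variable-rate portion of the premium is omitted.

FLAT_RATE_PER_PARTICIPANT = 19  # dollars per participant per year

def flat_rate_premium(participants):
    """Flat-rate portion of the annual PBGC premium, in dollars."""
    return participants * FLAT_RATE_PER_PARTICIPANT

# A plan with about 137 participants would owe a flat-rate premium
# just over the $2,600 the company reported paying in 1994.
print(flat_rate_premium(137))  # 2603
```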
We believe that PBGC’s regulatory provisions concerning premium rates are within the authority granted by the statute. According to 29 C.F.R. 4006.3, “. . . the premium paid for basic benefits guaranteed under section 4022(a) of ERISA shall equal the flat-rate premium under paragraph (a) of this section plus, in the case of a single-employer plan, the variable-rate premium under paragraph (b) of this section.” In paragraph (a) the flat-rate premium is calculated as “. . . equal to the number of participants in the plan on the last day of the plan year preceding the premium payment year, multiplied by-- (1) $19 for a single-employer plan . . . .” We do not believe that PBGC could have developed less burdensome premium rates while still meeting the requirements of ERISA. The statute gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and the regulations mirror the statutory requirements. According to 29 U.S.C. 1001, the purposes of ERISA are as follows: “[T]he Congress finds that the growth in size, scope, and numbers of employee benefit plans in recent years has been rapid and substantial; that the operational scope and economic impact of such plans is increasingly interstate; that the continued well-being and security of millions of employees and their dependents are directly affected by these plans; that they are affected with a national public interest; that they have become an important factor affecting the stability of employment and the successful development of industrial relations; that they have become an important factor in commerce; . . .
that owing to the lack of employee information and adequate safeguards concerning their operation, it is desirable in the interests of employees and their beneficiaries, and to provide for the general welfare and the free flow of commerce, that disclosure be made and safeguards be provided with respect to the establishment, operation, and administration of such plans; that they substantially affect the revenues of the United States because they are afforded preferential Federal tax treatment; . . . and that it is therefore desirable in the interests of employees and their beneficiaries, for the protection of the revenue of the United States, and to provide for the free flow of commerce, that minimum standards be provided assuring the equitable character of such plans and their financial soundness.” “[I]t is hereby declared to be the policy of this chapter to protect interstate commerce and the interests of participants in employee benefit plans and their beneficiaries, by requiring the disclosure and reporting to participants and beneficiaries of financial and other information with respect thereto, by establishing standards of conduct, responsibility, and obligation for fiduciaries of employee benefit plans, and by providing for appropriate remedies, sanctions, and ready access to the Federal courts.” “[I]t is further declared to be the policy of this chapter to protect interstate commerce, the Federal taxing power, and the interests of participants in private pension plans and their beneficiaries by improving the equitable character and the soundness of such plans by requiring them to vest the accrued benefits of employees with significant periods of service, to meet minimum standards of funding, and by requiring plan termination insurance.” An official from Metro Machine Corporation said that OSHA should differentiate between corporate negligence and employee responsibility in assessing workplace safety.
He said OSHA currently holds companies, not individual employees, accountable for violations caused by employee negligence or willful removal of company-installed safety devices. OSHA officials said that Section 5 of the Occupational Safety and Health Act of 1970 places specific responsibilities for workplace safety and health on both employers and employees. Although the act gives OSHA the authority to enforce safety and health standards and issue citations to employers for violations of the act, the officials said the act does not authorize OSHA to penalize individual employees for misconduct related to safety or health standards. They noted that in Atlantic & Gulf Stevedores v. OSHRC, 534 F.2d 541, 555 (3d Cir. 1976), the court found that the Occupational Safety and Health Act does not confer upon the Secretary of Labor the power to sanction employees who disregard safety standards because the act's enforcement scheme is directed only against employers. Therefore, OSHA officials said the agency’s enforcement policy of holding companies liable for safety and health violations is wholly consistent with the intent of the act. However, OSHA officials also noted that since the early 1980s OSHA's policy has been to excuse the employer from a violation when an OSHA compliance officer determines that employees are systematically refusing to comply with safety and health standards and rules. They said that, to be excused from the violation, the employer would have to demonstrate that (1) his or her employees had received appropriate training and the necessary equipment, (2) the employer had communicated and enforced the work rules designed to prevent employee misconduct, (3) the employees failed to observe work rules that led to the violation, and (4) the employer had taken reasonable steps to discover the violation.
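The four conditions OSHA officials described operate conjunctively: the employer must satisfy every prong to be excused from the violation. A minimal sketch of that decision rule follows; the function and parameter names are ours, chosen for illustration, and do not appear in OSHA’s policy documents.

```python
# Sketch of the four-part test OSHA officials described for excusing an
# employer from a violation caused by employee misconduct. All four
# prongs must hold; failing any one defeats the defense.

def employer_excused(trained_and_equipped: bool,
                     rules_communicated_and_enforced: bool,
                     employees_broke_work_rule: bool,
                     reasonable_discovery_efforts: bool) -> bool:
    """Return True only when every prong of the employer's showing is met."""
    return all([trained_and_equipped,
                rules_communicated_and_enforced,
                employees_broke_work_rule,
                reasonable_discovery_efforts])


# Missing even one prong (here, reasonable discovery efforts) means the
# employer remains liable for the violation:
print(employer_excused(True, True, True, False))  # False
```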
The issue that we focused on in this concern is OSHA’s assertion that the Occupational Safety and Health Act does not allow it to hold individual employees accountable for violations of health and safety rules. We believe that the Occupational Safety and Health Act (codified at 29 U.S.C. 651 et seq.) gave OSHA no discretion in how it could write its regulations holding companies responsible for health and safety violations. Several sections of the act specifically mention holding employers accountable for violations, but none of those sections say that employees should be held accountable. For example, 29 U.S.C. 658(a) says that “[I]f, upon inspection or investigation, the Secretary . . . believes that an employer has violated a requirement of section 654 of this title, of any standard, rule or order promulgated pursuant to section 655 of this title, or of any regulations prescribed pursuant to this chapter, he shall with reasonable promptness issue a citation to the employer.” Another section of the act (29 U.S.C. 659) says that “[I]f, after an inspection or investigation, the Secretary issues a citation . . . he shall . . . notify the employer . . . of the penalty . . . .” The act goes on to say that “the citation and the assessment shall be deemed a final order . . . [i]f the Secretary has reason to believe that an employer has failed to correct a violation for which a citation has been issued . . . .” “[T]he Area Director shall review the inspection report of the Compliance Safety and Health Officer. If, on the basis of the report the Area Director believes that the employer has violated a requirement of section 5 of the Act, of any standard, rule or order promulgated pursuant to section 6 of the Act, or of any substantive rule published in this chapter . . . he shall issue to the employer either a citation or a notice of de minimis violations. . .
.” We do not believe that OSHA could have developed a less burdensome regulatory approach while still meeting the requirements of the Occupational Safety and Health Act. The statute gave OSHA no discretion in drafting the regulatory requirements at issue in this concern, and the agency’s regulations were consistent with the statutory requirements. As stated in 29 U.S.C. 651(b), the purpose of the Occupational Safety and Health Act is to ensure so far as possible every working man and woman in the nation safe and healthful working conditions and to preserve human resources. The statute delineates 13 actions intended to achieve this goal, including (1) encouraging employers and employees in their efforts to reduce the number of occupational safety and health hazards at their places of employment, and to stimulate employers and employees to institute new and perfect existing programs for providing safe and healthful working conditions; (2) providing that employers and employees have separate but dependent responsibilities and rights with respect to achieving safe and healthful working conditions; and (3) authorizing the Secretary of Labor to set mandatory occupational safety and health standards applicable to businesses affecting interstate commerce. An official from Bank A said that the regulation on the Availability of Funds and Collection of Checks (Regulation CC) requires the development and maintenance of expensive and time-consuming information on the current availability of funds. The official said that to provide this information to clients as the regulation requires, the bank must regularly review, update, and reprint brochures with this information. Officials at FRB said that Regulation CC implements the Expedited Funds Availability Act (12 U.S.C. 4001-4010), which limits the length of time depository institutions may place holds on deposits to transaction accounts. 
They said the act and the regulation also require depository institutions to provide to their customers written copies of their availability policies and written notices when certain types of extended holds are placed on deposits. In addition to providing general policy disclosure notices to customers, depository institutions also incur the ongoing costs of providing exceptions to hold notices and change-in-policy notices, as well as costs related to employee training. FRB officials said that because the disclosure provisions in Regulation CC are required by the Expedited Funds Availability Act, statutory amendments would be necessary to relieve any of the burdens on depository institutions associated with those provisions. The issue that we focused on in this concern is FRB’s assertion that the Expedited Funds Availability Act requires banks to maintain and disclose specific information about their funds availability policies. We believe that the Expedited Funds Availability Act gave FRB no discretion in how it could write its regulations requiring depository institutions to disclose their funds availability policies. The statute (12 U.S.C. 4004) requires depository institutions to disclose to their customers, on preprinted deposit slips, their policies regarding the withdrawal of deposits. The statute also requires these disclosures to be provided before an account is opened, whenever there is a policy change within the institution, if the customer requests a copy of the policy, and when deposits are accepted at automated teller machines. We believe that Regulation CC’s provisions requiring the disclosure of bank policies on funds availability (codified at 12 C.F.R. Part 229, Subpart B) are within the authority granted by the Expedited Funds Availability Act. The regulation’s requirements essentially repeat the requirements in the statute. For example, 12 C.F.R. 
229.17 states that before an account is opened, a bank shall provide a potential customer with its funds availability policy. Section 229.18 states that disclosure notices shall be on all preprinted deposit slips and posted at all locations where the bank accepts deposits, including automated teller machines. The regulation also requires disclosure information to be provided upon customer request and sent to customers at least 30 days before a change in the bank's policy on funds availability is implemented. We do not believe that FRB could have developed a less burdensome regulatory approach that would have satisfied the requirements of the Expedited Funds Availability Act. The act gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and those requirements essentially repeat the requirements in the act. Neither the Expedited Funds Availability Act nor the Competitive Equality Banking Act, of which it was a part, contains a statement of purpose. Bank B officials said that Regulation DD, which implements the Truth in Savings Act, should be simplified by reducing the number of times that banks are required to disclose transaction information. Officials at FRB said that the Truth in Savings Act requires institutions to provide information about rates paid and fees charged for consumer deposit accounts (a) upon request, (b) before an account is opened, (c) before terms previously disclosed are adversely changed, (d) if periodic statements are sent, and (e) before automatically renewable ("rollover") time accounts mature. They also said that promoting certain account terms in advertisements triggers the duty to disclose additional account terms. In adopting Regulation DD, FRB officials said the agency sought to facilitate compliance with the disclosure requirements in several respects.
For example, they said change-in-terms notices are not required when institutions lower rates for variable-rate accounts or for changes in check printing charges, which are often under the control of third-party vendors. Similarly, information regularly provided to consumers about their certificates of deposit or passbook savings accounts does not trigger the periodic statement disclosure requirements. Finally, although institutions are required to provide account-opening disclosures to holders of all maturing rollover certificates of deposit, Regulation DD provides flexibility in the timing and content of these disclosures. However, because the number and timing of the disclosures required by Regulation DD are dictated by the Truth in Savings Act, the officials said that statutory amendments would be needed to further relieve the burdens associated with those provisions. The issue that we focused on in this concern is FRB’s assertion that the Truth in Savings Act requires banks to disclose information about interest rates and fees to their customers repeatedly. We believe that the Truth in Savings Act (codified at 12 U.S.C. 4301 et seq.) gave FRB no discretion in drafting Regulation DD’s requirements for repeated disclosures of depository institutions’ terms and conditions. Various provisions in the act require disclosures at various points in time. For example, 12 U.S.C. 4302(a) requires, with certain exceptions, that each institution disclose such information as annual percentage yields and minimum account balances. The institution must also provide a statement that an interest penalty is required for early withdrawal in conjunction with each advertisement, announcement, or solicitation that includes a reference to a specific rate of interest payable. According to 12 U.S.C.
4305(a), a schedule of fees, charges, interest rates, and terms and conditions applicable to each class of accounts offered by a depository institution must be (1) made available to any person upon request, (2) provided to any potential customer before an account is opened or a service is rendered, and (3) provided to depositors at least 30 days before the date of maturity of any time deposits that are renewable at maturity without notice from the depositor. Before any change is made in any term or condition that is to be disclosed in the required schedule that may reduce the yield or adversely affect any account holder, 12 U.S.C. 4305(c) requires institutions to notify customers and provide them with a description of the change by mail at least 30 days before the change takes effect. According to 12 U.S.C. 4307, each depository institution must include on or with each periodic statement provided to each account holder a clear and conspicuous disclosure of the annual percentage yield earned, the amount of interest earned, the amount of any fees or charges imposed, and the number of days in the reporting period. We believe that the requirements in Regulation DD regarding repeated disclosures (codified at 12 C.F.R. Part 230) are within the authority granted to FRB by the Truth in Savings Act. Many of the regulatory requirements mirror the requirements in the statute. For example, according to 12 C.F.R. 230.4, depository institutions must provide account disclosures to a consumer (a) upon request; or (b) before an account is opened or a service is provided, whichever is earlier. According to 12 C.F.R. 230.5, institutions must give at least 30 calendar days’ advance notice to affected consumers of any change in a term required to be disclosed if the change may reduce the annual percentage yield or adversely affect the consumer. Also, institutions must provide disclosure for time accounts with maturity longer than 1 month that renew automatically. According to 12 C.F.R. 
230.6, institutions must include disclosures in the periodic statements mailed or delivered to consumers. We do not believe that FRB could have developed a less burdensome regulatory approach that would have satisfied the requirements of the Truth in Savings Act. The act gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and Regulation DD’s requirements were consistent with the statutory requirements. According to 12 U.S.C. 4301(b), the purpose of the Truth in Savings Act is “to require the clear and uniform disclosure of (1) the rates of interest which are payable on deposit accounts by depository institutions; and (2) the fees that are assessable against deposit accounts, so that consumers can make a meaningful comparison between the competing claims of depository institutions with regard to deposit accounts.” Bank B officials said that Regulation Z (which implements the Truth in Lending Act) requires the bank to disclose the same information regarding bank practices (e.g., interest rates and loan terms) several times during a single transaction (e.g., when taking out a loan or opening an account). They recommended that Regulation Z be simplified to permit banks to disclose information only once during the transaction, or to give them the latitude to ask customers how often they need the disclosure information during a transaction. Officials at FRB said that the Truth in Lending Act and Regulation Z require creditors to provide increasing levels of detail about the potential cost of a transaction as the consumer progresses through the credit- shopping process. For example, promoting certain terms in advertisements triggers the duty to state additional credit terms; but these disclosures are limited to key terms, such as annual fees for a credit card plan or repayment terms for an installment loan. 
When consumers apply for a line of credit or certain variable-rate loans secured by their homes, general disclosures about the loan terms are provided that assist consumers in deciding whether to obtain the credit. Disclosures can also be required during the term of a loan, such as when the lender implements an adverse change to previously disclosed account terms in a revolving credit line or other "open-end" credit plan. Transaction-specific disclosures are given before the consumer becomes obligated for the credit. FRB officials also said that the timing of the disclosures is mandated by the Truth in Lending Act itself and not by Regulation Z. Amendments to the Truth in Lending Act would be required for changes in when and how often a lender must provide most of these disclosures. The issue that we focused on in this concern is FRB’s assertion that the Truth in Lending Act establishes the frequency with which banks must disclose certain types of information to customers. We believe the Truth in Lending Act (codified at 15 U.S.C. 1601 et seq.) gave FRB no discretion in drafting the regulatory requirements governing when banks are required to make certain disclosures. The act’s requirements in this area are very specific. For example, 15 U.S.C. 1637(a) states that before opening an account under an open-end consumer credit plan, the creditor must disclose to the person getting the credit such items as the conditions under which a finance charge may be imposed and the method for determining the balance upon which to impose the finance charge. Also, 15 U.S.C. 1637(b) states that at the end of each billing cycle for an open-end consumer credit plan for which there is an outstanding balance in that account or with respect to which a finance charge is imposed, the creditor must transmit a statement containing several specific items (as applicable). 
For example, the statute says the statement should contain the outstanding balance in the account at the beginning and end of the period and the total amount credited to the account during the period. Finally, according to 15 U.S.C. 1637(c), certain information must be disclosed on an application for a credit card or charge card. For example, the application must disclose the annual percentage rates, annual and other fees, any grace period, and method by which the credit balance is calculated. We believe that the requirements in Regulation Z are within the authority granted by the Truth in Lending Act. The regulatory requirements closely parallel the requirements in the statute. For example, 12 C.F.R. 226.5(b)(1) and (2) state that for open-end credit, the creditor must furnish initial disclosures before the first transaction is made under the plan and periodically provide a statement for each billing cycle at the end of which an account has a debit or credit balance of more than $1 or on which a finance charge has been imposed. Section 226.5a of the regulation says that the credit and charge card issuer must provide the disclosures specified on or with a solicitation or an application to open a credit or charge card account. We do not believe that FRB could have developed a less burdensome regulatory approach that would have met the requirements of the Truth in Lending Act. The act gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and the agency’s regulatory requirements were consistent with the statutory requirements. The Truth in Lending Act is a subchapter within the Consumer Credit Protection Act. The subchapter (15 U.S.C. 
1601(a)) says that the purpose of the Truth in Lending Act is to ensure a meaningful disclosure of credit terms so that the consumer will be able to compare more readily the various credit terms available to him and avoid the uninformed use of credit and to protect the consumer against inaccurate and unfair credit billing and credit card practices. A Bank C official said that Regulation DD reduces the bank's flexibility in providing services to customers. The official said that the bank cannot customize accounts for customers, put customers on analyzed accounts, or offer bonus programs because of the expensive and complex computer system changes that would be needed to comply with the regulation. Officials of FRB said the agency made a concerted effort during the development of Regulation DD to provide flexibility to institutions in order to minimize compliance costs and maximize the development of new products. However, the Truth in Savings Act requires disclosure of the fees that may be assessed against a consumer's account. The officials said if an institution chooses to offer different fees or other terms to different consumers, the disclosures must reflect the terms agreed to by the parties. The issue that we focused on in this concern is FRB’s assertion that the Truth in Savings Act requires the disclosure of potential fees and other terms, which may have the effect of reducing a bank’s flexibility in providing services to its customers. We believe the Truth in Savings Act gave FRB no discretion in how it could draft Regulation DD’s disclosure requirements. According to 12 U.S.C. 4303(a) through (c), each institution must maintain a schedule of fees, charges, interest rates, and terms and conditions applicable to each class of accounts offered by the institution. The statute specifies the items that must be on the schedule.
For example, the statute says that the schedule must contain (1) descriptions and amounts of all fees and service charges and the conditions under which those fees would be applicable; (2) all minimum balance requirements that would affect fees, charges, and penalties; (3) any minimum amount required to open the account; and (4) information on interest rates, such as any annual rate of simple interest and the frequency with which the interest would be compounded and credited. Although the statute does not specifically address whether banks must maintain similar schedules of disclosures about customized and analyzed accounts or bonus programs, it appears that disclosures would be required for these accounts or programs under the general heading of “terms and conditions.” We believe the referenced provisions of Regulation DD are within the rulemaking authority granted by the Truth in Savings Act because they are similar to the requirements in the act. For example, 12 C.F.R. 230.4(a) and (b) state that a financial institution must provide account disclosures to a consumer before an account is opened, before a service is provided, or upon request. The regulation also states that the disclosures shall include rate information, compounding and crediting information, balance information, fees, transaction limitations, features of time accounts, and bonuses. We do not believe that FRB could have developed a less burdensome regulatory approach that would have satisfied the requirements of the Truth in Savings Act. The act gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and the agency’s regulatory requirements were consistent with the statutory requirements. According to 12 U.S.C.
4301(b), the purpose of the Truth in Savings Act is “to require the clear and uniform disclosure of (1) the rates of interest which are payable on deposit accounts by depository institutions; and (2) the fees that are assessable against deposit accounts, so that consumers can make a meaningful comparison between the competing claims of depository institutions with regard to deposit accounts.” A Bank C official said that Regulation DD requires as part of its “redisclosure” rules that the bank provide customers with a written description of all the bank's services and fees each time the customer opens, changes, or reopens an account--even if the customer had previously received the same information. Officials at FRB said that the Truth in Savings Act requires financial institutions to provide complete account disclosures when an account is opened, and it also requires institutions to provide consumers with a notice of any change in terms. They said disclosures are required if an account is "re-opened" only if the institution deemed the account closed at some point in time. The issue that we focused on in this concern is FRB’s assertion that the Truth in Savings Act establishes when a bank must disclose information on the terms of its accounts to customers. We believe that the Truth in Savings Act gave FRB no discretion in how it could draft Regulation DD regarding disclosures when an account is opened or when there are changes to the account. According to 12 U.S.C. 4305(a)(2), institutions are required to provide complete account disclosures when an account is opened or when a service is rendered. According to 12 U.S.C. 4305(c), all account holders who may be adversely affected by a change in terms or conditions must be notified and provided with a description of the change by mail at least 30 days before the change takes effect.
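The 30-day advance-notice requirement in 12 U.S.C. 4305(c) amounts to simple date arithmetic: the latest permissible mailing date is the effective date of the change minus 30 calendar days. A small sketch follows; the dates used are hypothetical examples, not dates from the report.

```python
# Sketch of the 30-calendar-day advance-notice timing in 12 U.S.C. 4305(c):
# a change-in-terms notice must be mailed at least 30 days before the
# change takes effect.
from datetime import date, timedelta

ADVANCE_NOTICE_DAYS = 30  # statutory minimum lead time


def latest_mailing_date(effective: date) -> date:
    """Latest date a notice may be mailed for a change effective on `effective`."""
    return effective - timedelta(days=ADVANCE_NOTICE_DAYS)


change_effective = date(1994, 7, 1)           # hypothetical effective date
print(latest_mailing_date(change_effective))  # 1994-06-01
```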
We believe that FRB’s disclosure requirements in Regulation DD are within the authority granted by the Truth in Savings Act. The regulatory requirements parallel the statutory requirements in many respects. For example, according to 12 C.F.R. 230.4(a), a depository institution must provide account disclosures to a consumer before an account is opened or a service is rendered, whichever is earlier. The regulation states that an institution is considered to have provided a service when a fee required to be disclosed is assessed. Also, 12 C.F.R. 230.5(a) states that institutions are to provide consumers with advance notice of any change in terms if the change may reduce the annual percentage yield or adversely affect the consumer. No notice is required for variable rate changes, check printing fees, or short-term time accounts. The notice of change shall include the effective date of the change and shall be mailed or delivered at least 30 calendar days before the effective date of the change. We do not believe that FRB could have developed a less burdensome regulatory approach that would have satisfied the requirements of the Truth in Savings Act. The act gave the agency no discretion in drafting the regulatory requirements at issue in this concern, and those requirements are consistent with the statutory requirements. According to 12 U.S.C. 4301(b), the purpose of the Truth in Savings Act is to require the clear and uniform disclosure of the rates of interest and the fees that can be assessed against deposit accounts so that consumers can make a meaningful comparison between the competing claims of depository institutions with regard to deposit accounts. A Bank C official said that regulations requiring federally insured institutions to require flood insurance for properties located in floodplains are not applicable to nonbanking organizations such as the Money Store, where the public can apply for loans without having to acquire flood insurance. 
Officials at FRB and FDIC said the Flood Disaster Protection Act of 1973 created a significant disparity between the treatment of mortgage companies or other nondepository lenders and depository institutions with respect to flood insurance purchase requirements. The act also directed federal banking agencies to adopt regulations applicable to depository institutions to require the purchase of flood insurance for any improved property used to secure a loan if the property was located in a flood hazard area. No similar requirements were placed on mortgage banks. The issue that we focused on in this concern is FRB’s and FDIC’s assertion that the Flood Disaster Protection Act created the disparity between depository and nondepository institutions with respect to flood insurance requirements. We believe the Flood Disaster Protection Act gave FRB and FDIC no discretion in writing their regulations in this area. The statute (particularly 42 U.S.C. 4121(a)(13) and 42 U.S.C. 4012a(b)(1)) requires that regulated lending institutions not make real estate loans in an area having special flood hazards unless the building or property is covered by flood insurance. Also, 42 U.S.C. 4012a(b)(1) requires regulated lending institutions not to make, increase, extend, or renew any loan secured by improved real estate or a mobile home located, or to be located, in an area having special flood hazards and in which flood insurance has been made available under the National Flood Insurance Act of 1968, unless the building or mobile home and any personal property securing the loan is covered for the term of the loan by flood insurance. A “regulated lending institution” is defined in 42 U.S.C. 4121(a)(13) in such a way that nondepository lenders such as the Money Store would not be subject to the requirements of 42 U.S.C. 4012a(b)(1).
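The statutory condition, and the disparity it creates, can be expressed as a simple predicate: the flood insurance requirement in 42 U.S.C. 4012a(b)(1) triggers only when the lender is a regulated lending institution, so the same loan from a nondepository lender never triggers it. The function and parameter names below are ours, chosen for illustration.

```python
# Sketch of the coverage condition in 42 U.S.C. 4012a(b)(1): a *regulated*
# lending institution may not make, increase, extend, or renew a loan
# secured by improved property in a special flood hazard area unless the
# property carries flood insurance. Nondepository lenders fall outside
# the statutory definition, so the condition never triggers for them.

def flood_insurance_required(regulated_lender: bool,
                             in_special_flood_hazard_area: bool,
                             insurance_available_under_nfia: bool) -> bool:
    """True when the statute obliges the lender to require flood insurance."""
    return (regulated_lender
            and in_special_flood_hazard_area
            and insurance_available_under_nfia)


# Identical loans: one from a depository institution, one from a
# nondepository lender such as the Money Store.
print(flood_insurance_required(True, True, True))   # True
print(flood_insurance_required(False, True, True))  # False
```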
We believe that FRB’s and FDIC’s regulatory requirements regarding flood insurance are within the authority granted by the Flood Disaster Protection Act of 1973 because those requirements are, in essence, the same as the statutory requirements. According to 12 C.F.R. 208.23(c), a state member bank may not make, increase, extend, or renew any designated loan unless the building securing the loan is covered by flood insurance for the term of the loan. We do not believe that FRB and FDIC could have developed a less burdensome regulatory approach while still meeting the underlying statutory requirements of the Flood Disaster Protection Act of 1973. The statute gave the agencies no discretion in drafting the regulatory requirements at issue in this concern, and those requirements were consistent with the statutory requirements. According to 42 U.S.C. 4002(b), the purpose of the Flood Disaster Protection Act of 1973 is to (1) substantially increase the limits of coverage authorized under the national flood insurance program; (2) provide for the expeditious identification of, and the dissemination of information concerning, flood-prone areas; (3) require states or local communities, as a condition of future federal financial assistance, to participate in the flood insurance program and to adopt adequate flood plain ordinances with effective enforcement provisions consistent with federal standards to reduce or avoid future flood losses; and (4) require the purchase of flood insurance by property owners who are being assisted by federal programs or by federally supervised, regulated, or insured agencies or institutions in the acquisition or improvement of land or facilities located, or to be located, in identified areas having special flood hazards.
One of the objectives of our review was to determine, for each of 27 company concerns, the amount of discretion the underlying statutes gave rulemaking agencies in drafting the regulatory requirements that the agencies said were attributable to the underlying statutes. The agencies that issued those requirements indicated in two of our 1996 reports that the concerns could, at least in part, be traced to statutory requirements underlying their regulations. In this review we concluded that the statutory provisions underlying 12 of the 27 concerns gave the rulemaking agencies some discretion in how the related regulatory requirements could be drafted. We coded statutory provisions as allowing agencies “some discretion” if they delineated certain requirements regarding how the agencies’ regulations could be drafted but gave the agencies at least some flexibility regarding other requirements. This appendix provides our detailed analysis of each of these 12 company concerns. Specifically, for each such concern it provides the following information: (1) the portion of the concern in our 1996 reports that the agency or agencies indicated was statutorily based, (2) the portion of the agency response in our 1996 reports that indicated the concern was statutorily based, (3) our analysis of the amount of rulemaking discretion the relevant statutory provisions gave the agencies (the first objective of this review), (4) our analysis of whether the regulatory requirements at issue in the concern were within the authority granted by the underlying statutes (the second objective of our review), (5) our analysis of whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while meeting the underlying statutory requirements (the third objective of our review), and (6) the main purpose of the underlying statutes (where such purpose statements were available). 
Appendix I of this report contains a detailed discussion of our scope and methodology. Officials from the paper company said DOT’s required hazardous materials (hazmat) training is expensive. They said that under the regulations that took effect in January 1994, employees who deal with hazardous materials must be trained and tested, and that this training costs the company $475,000 per year. DOT officials said the Hazardous Materials Transportation Uniform Safety Act, enacted in 1990, specifically required the issuance of regulations requiring that hazmat employers provide training to their hazmat employees. They said DOT’s Hazardous Materials Regulations were revised May 15, 1992, to reflect those statutory requirements. The issue that we focused on in this concern is DOT’s assertion that the Hazardous Materials Transportation Uniform Safety Act requires employers to provide certain employees with hazmat training. We believe that the Hazardous Materials Transportation Uniform Safety Act (now codified at 49 U.S.C. 5101 et seq.) gave DOT some discretion regarding how its regulations on hazmat training could be drafted. The statute said that the Secretary of Transportation “shall prescribe by regulation requirements for training that a hazmat employer must give hazmat employees of the employer on the safe loading, unloading, handling, storing, and transporting of hazardous material. . .” The statute also said the regulation must establish the date by which the training shall be completed and must require employers to certify that their hazardous materials employees have received training and been tested on at least one of nine specific areas of responsibility that are delineated in the statute. Therefore, DOT had no discretion regarding whether to issue regulations requiring hazmat training, or in how it drafted those regulations with regard to the provisions specified in the statute. 
However, we believe that DOT had some discretion in how it could draft other regulatory requirements. For example, the statute said that DOT’s regulations “may provide for different training for different classes or categories of hazardous material and hazmat employees.” It also said that the Secretary of Transportation “may require by regulation” documentation to support employers’ training certifications. We believe that DOT’s regulatory provisions requiring hazardous material training (49 C.F.R. 172.700-172.704) are within the authority granted by the Hazardous Materials Transportation Uniform Safety Act. DOT’s regulations contain the requirements that were specifically delineated in the statute. For example, the statute required the regulation to establish the date by which the hazardous materials training shall be completed, and the regulation (49 C.F.R. 172.704 (c)(ii)) says that employees must complete the training within 90 days after beginning employment or a change in job function. In other areas, the regulatory provisions appear to fall within the discretion afforded DOT by the statute. For example, the statute said that the Secretary of Transportation “may require by regulation” documentation to support employers’ training certifications, and the regulation (49 C.F.R. 172.704 (d)) requires such documentation. We could not determine whether DOT could have developed an alternative approach to hazmat training that would have been less burdensome to regulated entities while still accomplishing the requirements of the Hazardous Materials Transportation Uniform Safety Act. To make that determination we would have had to conduct a detailed examination of DOT’s training requirements that were not statutorily mandated and determine whether the Department could have eliminated them or used an alternative approach that regulated entities would have perceived as less burdensome. 
For example, we would have had to examine DOT’s regulatory requirement that employers provide documentation to support training certifications and determine whether DOT could have eliminated or amended that requirement and still met the requirements of the underlying statute. Such an examination of each nonstatutory requirement would have demanded extensive time and resource commitments that were beyond the scope of this assignment. According to 49 U.S.C. 5101, “[T]he purpose of this chapter is to provide adequate protection against the risks to life and property inherent in the transportation of hazardous material in commerce by improving the regulatory and enforcement authority of the Secretary of Transportation.” According to hospital officials, it is very difficult to keep pace with frequently changing Medicare and Medicaid billing rules. Although the hospital’s computer programmers have spent many hours trying to keep their automated patient billing system up to date, the hospital officials said it is like “chasing a moving target.” According to HCFA officials, in a number of situations, the changes to hospital billing procedures are due to enhancements or changes made by Congress. The issue that we focused on in this concern is HCFA’s assertion that changes to Medicare and Medicaid billing rules are, at times, congressionally driven. HCFA officials said that the general mechanisms the government uses to pay for medical services are spelled out in the Code of Federal Regulations but are operationalized through the billing instructions published in numerous HCFA manuals. Although these billing rules do not appear in the Code of Federal Regulations and therefore do not have the force and effect of law, we considered them to be “regulatory requirements” in this report. 
HCFA’s Medicare Hospital Manual contains billing procedures that hospitals must follow when submitting bills to “fiscal intermediaries” (insurance companies with which HCFA contracts for hospital bill processing and payment). Although other HCFA publications may indirectly affect the billing procedures at issue in the hospital’s concern (e.g., changes to HCFA’s Medicare Part A Intermediary Manual that describe the procedures that intermediaries must follow when processing bills from hospitals), we focused our review on the changes to the billing procedures in HCFA’s Medicare Hospital Manual. Section 1815(a) of the Social Security Act (codified at 42 U.S.C. 1395g) states: “[T]he Secretary shall periodically determine the amount which should be paid under this part to each provider of services . . . except that no such payments shall be made to any provider unless it has furnished such information as the Secretary may request in order to determine the amounts due such provider under this part for the period with respect to which the amounts are being paid or any prior period.” (Emphasis added.) Another provision of the act states: “[P]ayment under this subsection for surgical dressings . . . shall be made in a lump sum amount for the purchase of the item in an amount equal to 80 percent of the lesser of (A) the actual charge for the item; or (B) a payment amount determined in accordance with the methodology described in subparagraphs (B) and (C) of subsection (a)(2) (except that in applying such methodology, the national limited payment amount referred to in such subparagraphs shall be initially computed based on local payment amounts using average reasonable charges for the 12-month period ending December 31, 1992, increased by the covered item updates described in such subsection for 1993 and 1994).” Because the statute specified that payment must be made for surgical dressings, the amount of those payments, and how those charges should be paid, we concluded that HCFA did not have discretion with regard to making changes to its billing instructions in this area. 
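The surgical dressings provision quoted above reduces to a lesser-of computation. The following sketch illustrates that arithmetic only; the function name is ours, and both inputs are treated as precomputed dollar amounts rather than derived under the statute’s full methodology:

```python
def surgical_dressing_payment(actual_charge, methodology_amount):
    """Sketch of the statutory rule: pay a lump sum equal to 80 percent
    of the lesser of (A) the actual charge for the item or (B) the
    payment amount determined under the statute's methodology.
    Both inputs are assumed to be precomputed dollar amounts."""
    return 0.80 * min(actual_charge, methodology_amount)

# An actual charge of $50.00 against a $40.00 methodology amount
# yields a payment of 0.80 * 40.00:
print(surgical_dressing_payment(50.00, 40.00))  # 32.0
```

Because every term of the computation is fixed by the statute, HCFA had nothing to decide beyond transcribing the rule into its billing instructions.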
The other eight changes to the Medicare Hospital Manual appeared to be clarifications and technical corrections to HCFA’s billing procedures, and HCFA appeared to rely on its general authority to require “such information as the Secretary may request” in making these changes. In these cases, we believe that HCFA had considerable discretion in deciding whether to make the changes. For example, two of the changes affected billing requirements for inpatient hospital stays. One of the changes added a new requirement that bills be submitted in the sequence in which services were furnished. The other change clarified the previous change, noting that if the new policy disadvantaged (i.e., raised the liability of) the hospital, the beneficiary, or a secondary insurer, the hospital should notify its intermediary to arrange reprocessing of all affected claims. We believe that the changes that HCFA made to the Medicare Hospital Manual’s billing procedures were within the authority granted by the underlying statutes. In those cases in which HCFA had no discretion to make statutorily directed changes, the changes were consistent with (and, in some cases, identical to) the statutory requirements. Therefore, we concluded that those changes were within the authority granted by the statutes. For example, in the above illustration involving surgical dressings, HCFA changed the Medicare Hospital Manual to provide instructions for billing and payment that mirrored the requirements in the new subsection of the Social Security Act. In those cases in which HCFA appeared to make the changes at its own initiative, the agency relied on its authority to “periodically determine the amount which should be paid” and to collect “such information as the Secretary may request.” The statute also authorizes the Secretary to prescribe “such regulations as may be necessary to carry out the administration of the insurance programs . . . 
.” We believe that the changes that HCFA initiated to the billing procedures were within the authority provided to the agency in the statute. We could not determine whether HCFA could have made changes to its Medicare Hospital Manual that would have been perceived as less burdensome to the hospitals while still meeting the requirements of the underlying statutes. To do so we would have had to initiate a separate review of each change for which HCFA had at least some rulemaking discretion and determine whether the agency needed to make the change and, if so, whether another approach would have been less burdensome. Those reviews would have required extensive time and resource commitments that were beyond the scope of this review. According to 42 U.S.C. 1395c, “[T]he insurance program for which entitlement is established by sections 426 and 426-1 of this title provides basic protection against the costs of hospital, related post-hospital, home health services, and hospice care in accordance with this part for (1) individuals who are age 65 or over and are eligible for retirement benefits under subchapter II of this chapter, (2) individuals under age 65 who have been entitled for not less than 24 months to benefits under subchapter II of this chapter, and (3) certain individuals who did not meet the conditions specified in either clause (1) or (2) but who are medically determined to have end stage renal disease.” Officials from the hospital said the annual Medicare cost report is extremely difficult to prepare. They said the report’s information requirements place a considerable recordkeeping burden on the hospital’s health care providers. For example, they said each housekeeping supervisor must spend 2 to 3 hours each month preparing the necessary paperwork that will feed into this annual report, and some staff members must devote all of their time to compiling the required information. 
HCFA officials said section 1886(f)(1) of the Social Security Act requires the Secretary of Health and Human Services to maintain a system of cost reporting for prospective payment system hospitals. They also said that under sections 1815(a) and 1861(v)(1)(A) of the act, providers of service participating in the Medicare program must submit annual information to achieve settlement of costs for health care services rendered to Medicare beneficiaries. The issue that we focused on in this concern is HCFA’s assertion that the Social Security Act requires the information in the annual Medicare cost report. We believe that the Social Security Act gave HCFA some discretion in how its regulations in this area could be drafted. The act contains several provisions that require the Department of Health and Human Services or HCFA to collect information from hospitals in order to determine the amount of reimbursements that hospitals are due for patient care. Therefore, the agency had no discretion in whether to require a system for reporting cost information by hospitals. However, the Social Security Act gave HCFA discretion in determining the specific information required in the reports. For example, section 1815(a) of the act (codified at 42 U.S.C. 1395g) states that the Secretary “shall periodically determine the amount which should be paid under this part to each provider of services with respect to the services furnished by it . . . .” This section goes on to say that “no such payments shall be made to any provider unless it has furnished such information as the Secretary may request (emphasis added) in order to determine the amounts due such provider . . . .” Section 1861 of the act (codified at 42 U.S.C. 1395x) states that “[T]he reasonable cost of any services shall be the cost actually incurred . . . and shall be determined in accordance with regulations establishing the method or methods to be used, and the items to be included, in determining such costs . . . 
.” The section goes on to delineate certain factors the Secretary must take into account in prescribing the regulations, such as considering both direct and indirect costs and using principles generally applied by national organizations. We believe that HCFA’s regulatory provisions for cost reports (codified at 42 C.F.R. Parts 412 and 413) are within the authority granted by the Social Security Act. According to 42 C.F.R. 412.52, “[A]ll hospitals participating in the prospective payment systems must meet the recordkeeping and cost reporting requirements of 413.20 and 413.24 of this chapter.” According to 42 C.F.R. 413.24(a), “[P]roviders receiving payment on the basis of reimbursable cost must provide adequate cost data. This must be based on their financial and statistical records which must be capable of verification by qualified auditors.” Similarly, section 413.24(c) states that “[A]dequate cost information must be obtained from the provider’s records to support payments made for services furnished to beneficiaries.” Other portions of 42 C.F.R. 413 delineate the periods covered by the reports and the frequency with which they must be submitted. All of these regulatory requirements appear to fall within the rulemaking authority granted to HCFA by the Social Security Act. We could not determine whether HCFA could have developed cost reporting requirements that would have been perceived as less burdensome to hospitals while still meeting the requirements of the Social Security Act. To do so we would have had to initiate a separate review of the Medicare cost reports and how HCFA uses the information that it collects—a review that would have required extensive time and resource commitments that were beyond the scope of this review. The Social Security Act does not contain a statement of purpose regarding the cost reporting requirements. However, section 1811 of the act (codified at 42 U.S.C. 
1395c) states that the purpose of the hospital insurance program is to provide “basic protection against the costs of hospital, related post-hospital, home health services, and hospice care . . .” for covered individuals. An official from Bank A said the regulation on the Availability of Funds and Collection of Checks (Regulation CC) requires the development and maintenance of expensive and time-consuming information on the current availability of funds. Officials at FRB said that Regulation CC implements the Expedited Funds Availability Act (codified at 12 U.S.C. 4001-4010), which places limits on the length of time depository institutions may place holds on deposits to transaction accounts. To ensure compliance with the act and the regulation, they said a depository institution must have the capacity to assign and track the availability of each check it accepts for deposit. The costs of developing and maintaining such a system likely vary with the complexity of the depository institution’s availability policy. Because the availability provisions of Regulation CC are required by the act, FRB officials said that statutory amendments would be necessary in order to relieve any of the burdens on depository institutions associated with those provisions. The issue that we focused on in this concern is FRB’s assertion that the Expedited Funds Availability Act limits the amount of time that banks can hold deposits, thereby requiring banks to keep track of the availability of funds they accept for deposit. We believe that the Expedited Funds Availability Act gave FRB some discretion in how it could draft its regulations on funds availability. According to 12 U.S.C. 4002, depository institutions must make funds from different types of deposits available for withdrawal within specified periods of time ranging from 1 day to several days. For example, 12 U.S.C. 
4002(b)(2) says that funds must be available for withdrawal not more than 5 business days after the deposit of a check drawn on a nonlocal bank. Therefore, FRB had no rulemaking discretion in establishing the maximum lengths of time banks can hold funds before making them available to the depositor. However, the statute gives the agency discretion to require shorter holds on funds than the maximums established in the act as long as that period of time is within the time in which a bank can reasonably expect to learn of nonpayment on most items for each category of check. Although the act established varying maximum hold periods for different types of deposits, there is no statutory (or regulatory) provision requiring banks to develop and maintain a system for tracking the availability of deposits. However, in practice, banks need to develop tracking systems to enable themselves to comply with the act and the relevant regulation. FRB officials said that the bank could avoid the need for such a system by providing immediate availability for all deposits. We believe that Regulation CC’s requirements for expedited funds availability are within the authority granted the agency by the Expedited Funds Availability Act. Subsections 10, 12, and 13 of 12 C.F.R. 229 require that funds be available for withdrawal not later than specified periods of time ranging from 1 to several days. The time periods established in the regulations are consistent with the time periods in the statute for the different types of deposits. As discussed earlier, FRB had some discretion to write regulations requiring banks to hold deposits for less than the maximum time allowed in the statute. Therefore, FRB could have standardized the hold periods at less than the maximum period. 
Standardizing the hold periods could have been perceived as less burdensome to banks because it would have eliminated the need for banks to have tracking systems for different categories of deposits (e.g., deposits of local checks, government checks, or out-of-state checks). FRB’s discretion, however, is limited in that FRB can shorten a hold period only if banks would have a reasonable period of time to learn of the return of checks subject to the shorter hold. Standardization of hold periods across all categories of checks would require that all hold periods be set to the minimum period established by the act (1 day). In today’s check system, one day would not allow banks to learn of the return of most dishonored checks. Therefore, FRB does not appear to have the discretion to standardize the hold period for all categories of checks. Even if the hold periods were standardized for only some categories of checks, imposing shorter hold periods could also increase a bank’s risk of fraud on those checks. Banks may not have viewed such an approach as less burdensome. Ultimately, we could not determine whether FRB could have developed less burdensome regulatory approaches because to do so would have required extensive time and resource commitments (e.g., surveying the banks on whether standardized minimums would have been less burdensome) that were beyond the scope of this review. The Expedited Funds Availability Act does not contain a statement of purpose. An official from Bank A said the requirements of the Truth in Savings Act (implemented by FRB’s Regulation DD) have not provided substantive benefits to either the bank or its customers. The official also said that before the act was passed, the bank provided savings account information to customers in several different documents. However, under the act this information must be consolidated into one document. Officials at FRB said Congress enacted the Truth in Savings Act in 1991 to enhance consumer shopping among deposit accounts. 
Its purpose is to require all depository institutions to disclose information about the rates paid and fees charged in a uniform manner. They said FRB’s Regulation DD requires institutions to disclose terms in a uniform way but allows flexibility in the format of the disclosures. For example, disclosures may be provided in a single document or in several documents, and they may be combined with other contractual provisions or disclosures required by federal or state law. The issue we focused on in this concern is FRB’s assertion that the Truth in Savings Act requires banks to provide disclosure statements to customers in a uniform manner. We believe that the disclosure requirements in the Truth in Savings Act (codified at 12 U.S.C. 4301 et seq.) gave FRB some discretion regarding how its regulations in this area could be drafted. Although the statute gave the agency no discretion regarding much of the information that had to be disclosed about customers’ savings accounts, we believe the agency had some discretion with regard to certain types of information and the format of the disclosures. The Truth in Savings Act’s disclosure requirements were intended to allow consumers to make meaningful comparisons between the competing claims of depository institutions with regard to deposit accounts. The act describes in great detail the specific elements that such institutions must disclose to their customers. For example, 12 U.S.C. 4303(a) states that each banking institution must maintain a schedule of fees, charges, interest rates, and terms and conditions applicable to each class of accounts offered by the institution. According to 12 U.S.C. 
4303(b), the schedule for any account must contain (1) a description of all fees, periodic service charges, and penalties that may be charged or assessed against the account, the amount of any fees, charges, or penalties and the conditions under which any amount will be assessed; (2) all minimum balance requirements that affect fees, charges, and penalties, including a clear description of how the minimum balance is calculated; and (3) any minimum amount required with respect to the initial deposit in order to open the account. Section 4303(c) states that the information on interest rates in the schedules must include 10 specific items, including any annual percentage yield and the effective period of the annual yield, the annual rate of simple interest, the frequency with which interest will be compounded and credited, a clear description of the method used to determine the balance on which interest is paid, any minimum balance that must be maintained to earn the rates and obtain the yields disclosed and how such a minimum balance is calculated, and a description of any minimum time requirements to obtain the yield advertised. On the other hand, the Truth in Savings Act also gives FRB rulemaking discretion in some areas, particularly with regard to certain types of information and accounts and in the format of the required disclosures. For example, 12 U.S.C. 4303(d) states that the schedule required under subsection (a) “shall include such other disclosures as the Board may determine to be necessary (emphasis added) to allow consumers to understand and compare accounts . . . .” Section 4304 of title 12 permits FRB to make such modifications “as may be necessary” in the disclosure requirements relating to annual percentage yield for certain types of accounts. 
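As context for the interest-rate items listed above, the annual percentage yield that institutions must disclose is computed under a general compounding formula of the kind set out in Regulation DD’s appendix. A minimal sketch, assuming a single deposit held for the full term with no intervening activity (the function name is illustrative, not a term used in the regulation):

```python
def annual_percentage_yield(principal, interest, days_in_term):
    """APY annualizes the interest actually earned over the account term:
    APY = 100 * [(1 + interest/principal) ** (365/days_in_term) - 1]."""
    return 100 * ((1 + interest / principal) ** (365 / days_in_term) - 1)

# $1,000 earning $30 over a full 365-day year yields an APY of 3 percent:
print(round(annual_percentage_yield(1000, 30, 365), 2))  # 3.0

# Over a 182-day term, the same dollar interest annualizes to roughly 6.11
# because of the implied compounding:
print(round(annual_percentage_yield(1000, 30, 182), 2))
```

The point of a single prescribed formula is the one the act states: yields computed the same way can be compared across institutions.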
Section 4303(e) states that the schedules required in section 4303(a) must be “presented in a format designed to allow consumers to readily understand the terms of the accounts offered,” but it does not specify the particular format that must be used. Section 4308 of title 12 requires FRB to issue regulations on the disclosure requirements and requires the agency to publish model forms and clauses to facilitate compliance. However, the section goes on to say that depository institutions are not required to use any such model form or clause, and the institutions must be considered in compliance with the disclosure requirements if they use an alternative format that “does not affect the substance, clarity, or meaningful sequence of the disclosure.” We believe that the disclosure provisions in Regulation DD (12 C.F.R. Part 230) are within the authority granted by the Truth in Savings Act. The regulation’s requirements regarding the content of account disclosures mirror, in many respects, the requirements in the statute. For example, 12 C.F.R. 230.4(b) states that account disclosures must (as applicable) include certain elements, including rate information (e.g., the annual percentage yield and the interest rate); balance information (e.g., minimum balance requirements); and the amount of any fees. All of these elements were required in the statute. Although Regulation DD also requires other disclosures that are not expressly listed in the statute (e.g., any limitations on the number or dollar amount of withdrawals or deposits), these requirements appear to fall within FRB’s authority in 12 U.S.C. 4303(d) to require such disclosures in the regulations “as the Board may determine to be necessary.” Regulation DD also states (12 C.F.R. 
230.3) that depository institutions must make the required disclosures “clearly and conspicuously in writing and in a form the consumer may keep.” It also says that disclosures for each account “may be presented separately or combined with disclosures for the institution’s other accounts . . . .” Appendix B to Part 230 states that institutions may modify model disclosure clauses “as long as they do not delete required information or rearrange the format in a way that affects the substance or clarity of the disclosures.” Because the regulation gives discretion to depository institutions and mirrors the statute’s requirements in this area, we believe the regulation is within the authority granted by the statute. With regard to the elements that the Truth in Savings Act required institutions to disclose, we do not believe that FRB could have developed regulations that would have been less burdensome to financial institutions. For example, the act specifically required the disclosure of information on annual yields and interest rates, so Regulation DD had to contain those elements. However, we could not determine whether FRB could have refrained from requiring other information that is not expressly listed in the statute. To do so would have required us to determine if the information was, in fact, necessary to allow consumers to understand and compare accounts. Making that determination would have required an extensive analysis of consumer understanding and behavior that was beyond the scope of this review. With regard to the format of the disclosures, we believe that FRB could not have chosen a less burdensome approach than the one taken in Regulation DD. The agency did not require that banks disclose the information in a single document, and (as the statute required them to do) it allowed financial institutions to vary from the model clauses and sample forms as long as those variances did not affect the substance of the disclosures. According to 12 U.S.C. 
4301(b), the purpose of the Truth in Savings Act is “to require the clear and uniform disclosure of (1) the rates of interest that are payable on deposit accounts by depository institutions; and (2) the fees that are assessable against deposit accounts, so that consumers can make a meaningful comparison between the competing claims of depository institutions with regard to deposit accounts.” A Bank C official said that Regulation DD requires that every fee charged to a customer's account must be separately described on the customer's statement. Officials at FRB said that the Truth in Savings Act requires institutions that provide periodic statements to consumers to disclose the annual percentage yield earned, any fees imposed, and certain other information on the statements. In adopting the final version of Regulation DD, the officials said that the Board considered concerns raised by commenters on the proposed regulation and implemented several changes to help minimize costs, particularly those associated with periodic statements. For example, information sent in connection with time accounts and passbook savings accounts is exempt from the periodic statement rules. 
According to 12 U.S.C. 4308(a), the regulations prescribed by FRB “may contain such classifications, differentiations, or other provisions, and may provide for such adjustments and exceptions for any class of accounts as, in the judgment of the Board, are necessary or proper to carry out the purposes of this chapter, to prevent circumvention or evasion of the requirements of this chapter, or to facilitate compliance with the requirements of this chapter.” Therefore, the statute allowed FRB to write regulations excluding certain types of accounts from the fee disclosure requirements if the Board believed such exclusions were “necessary or proper.” We believe that this provision of Regulation DD is within the authority granted by the Truth in Savings Act because its requirements either mirror those in the act or are within the rulemaking discretion provided to the agency by the act. According to 12 C.F.R. 230.6, the periodic statement to consumers must include the annual percentage yield earned, the amount of interest, any fees imposed, and the length of the statement period. When FRB promulgated the final rule in September 1992, the agency said it had exercised its exception authority in the act to exclude specific types of accounts (i.e., time accounts and passbook savings accounts) from the periodic statement requirements of the final rule. FRB said it believed exempting these accounts from the disclosure requirements was appropriate because it would encourage institutions to continue providing certain information to customers. We could not determine whether FRB could have developed a less burdensome regulatory approach that would have satisfied the requirements of the Truth in Savings Act. To do so we would need to know whether it was “necessary or proper” for FRB to exclude other types of accounts from the periodic statement requirements. Making that determination would have required extensive time and resource commitments that were beyond the scope of this review. According to 12 U.S.C. 
4301(b), the purpose of the Truth in Savings Act is “to require the clear and uniform disclosure of (1) the rates of interest which are payable on deposit accounts by depository institutions; and (2) the fees that are assessable against deposit accounts, so that consumers can make a meaningful comparison between the competing claims of depository institutions with regard to deposit accounts.” Bank A officials said bank regulators should require only reports that the regulators will use. They said it is very frustrating to spend the time and resources needed to complete required reports and not know if the regulators actually use them. Officials from FDIC said that some bank reporting requirements are specifically mandated by statute. However, they also recognized that some of these requirements might be unduly burdensome for banks compared to the value of the information to FDIC as it seeks to discharge its responsibilities as an insurer and bank supervisor. In such situations, the officials said that FDIC makes recommendations for legislative changes to eliminate burdensome reporting requirements. They also said banks are urged to express their opinions on specific reports they consider unused. The issue that we focused on in this concern is FDIC’s assertion that some bank reporting requirements are specifically required by statute. We did not examine whether the reports are actually used by bank regulators. We believe that FDIC had some discretion in developing regulations governing bank reporting requirements. Some of the relevant statutes did not give FDIC rulemaking discretion, but other statutes gave the agency at least some discretion to impose reporting requirements. FDIC officials said that as of March 31, 1998, the agency had 58 active “information collections” that had been approved by the Office of Management and Budget (OMB) under the Paperwork Reduction Act. Of these, the officials said that 54 were statutorily mandated. 
We reviewed the statutory provisions underlying many of the 54 information collections and concluded that some of those provisions gave FDIC no discretion in drafting its regulations. For example, 12 U.S.C. 1972(2)(G)(i) requires certain bank executive officers and principal shareholders to submit a written report to the board of directors for any year during which they have an outstanding extension of credit. The statute describes in detail the information required in this report. Therefore, FDIC had no discretion in drafting its regulations regarding whether the report was required, what information the report must contain, or the timing of the report. However, other statutory provisions gave FDIC considerable discretion in establishing reporting requirements. For example, 12 U.S.C. 1817(i) requires insurance of trust funds, and it permits FDIC’s Board of Directors to “prescribe such regulations as may be necessary (emphasis added) to clarify the insurance coverage under this subsection and to prescribe the manner of reporting and depositing such trust funds.” We reviewed the relevant regulatory provisions for many of the information collections that FDIC said were statutorily mandated and concluded that of those we reviewed, the regulatory provisions were within the authority granted by the statutes. In those cases in which the underlying statutes gave the agency discretion to develop regulations, any regulations in this area that the agency developed would be within the authority granted by the statute. For example, in the above illustration in which the statute gave FDIC the authority to prescribe the manner of reporting on trust funds, any regulations specifying the reporting requirements would be within the authority of the statute. 
In those cases that we reviewed in which the underlying statute gave the agency no discretion, the regulatory language closely mirrored the language in the corresponding statute or specifically referenced the statutory requirements. For example, in the above illustration concerning reports from banks’ executive officers and principal shareholders, the language in FDIC’s regulations is essentially the same as the language in 12 U.S.C. 1972(2)(G)(i). We could not determine whether FDIC could have eliminated or developed less burdensome regulatory approaches without doing an in-depth analysis of each of the agency’s nonstatutory reporting requirements and how FDIC uses the information collected. Such an analysis would have required extensive time and resource commitments that were beyond the scope of this assignment. The statutes underlying the 54 information collections that FDIC said were statutorily mandated often did not contain statements of purpose. A Bank C official said that the Real Estate Settlement Procedures Act (RESPA or Regulation X), which is administered by HUD, requires extensive disclosure documents that are not easily understood by customers or relevant to their concerns. For example, the bank official said the loan package for a no-fee, no-point home equity loan contains about 10 pages of federally required paperwork, only 2 pages of which (dealing with the settlement statement of the loan) directly affect and are of interest to the customer. The official said the other eight pages consist of forms that are of little concern to the customer, such as the Servicing Disclosure Statement and the Controlled Business Arrangement Disclosure. HUD officials said that Congress established the RESPA disclosure requirements (12 U.S.C. 2601 et seq.) to which the bank official referred. 
They said those requirements consist of a Good Faith Estimate of Settlement Costs; an information booklet delivered to all home purchasers; and, in the case of first lien loans, a disclosure of the lender’s mortgage servicing practices. In the event the lender is referring the borrower to one or more of its affiliated companies to provide settlement services, they said RESPA requires disclosure of their relationship and that the borrower has the option to choose other providers (except for appraisers, credit reporting agencies, and lender’s counsel). At settlement, they said the statute requires the settlement agent to provide the borrower with a standardized accounting of the transaction, familiarly known as the HUD-1 or HUD-1A.

The issue that we focused on in this concern is HUD’s assertion that RESPA establishes the disclosure requirements that the bank official found burdensome. We believe that RESPA gave HUD some discretion in drafting regulations regarding the disclosure requirements at issue in this concern. The documents that lenders are required to disclose and many of the specific elements in those documents are explicitly required in the statute. For example, 12 U.S.C. 2605 says that the lender of a federally related mortgage loan must disclose to the applicant whether the servicing of the loan may be assigned, sold, or transferred to any other person at any time while the loan is outstanding. Also, 12 U.S.C. 2603(a) states that HUD must develop and prescribe a standard real estate settlement form (with regional variations, as necessary) for use in all transactions in the United States that involve federally related mortgage loans. The statute also says that the form must clearly itemize all charges imposed upon the borrower and the seller in connection with the settlement; and it must indicate whether any title insurance premium included in such charges covers or insures the lender’s, borrower’s, or both parties’ interest in the property.
Another provision of RESPA (12 U.S.C. 2604) requires HUD to prepare and distribute informational booklets and also requires lenders to provide a copy of the booklet to each person from whom they receive, or for whom they prepare, a written application to finance a mortgage for a residential property. The statute says that the booklet must be delivered or placed in the mail not later than 3 business days after the lender receives the application. Therefore, HUD had no discretion with regard to these requirements. However, other parts of RESPA gave HUD discretion in how it could write its regulations. For example, 12 U.S.C. 2604 says that the above-mentioned informational booklets “shall be in such form and detail as the Secretary may prescribe . . . .” Therefore, HUD had considerable discretion regarding the format in which the information had to be presented and had the authority to require additional information in the booklet beyond what was stipulated in the statute. We believe that the HUD regulatory provisions related to the above- mentioned RESPA disclosure requirements are within the authority granted to the agency by the statute. Several of these regulatory provisions paraphrase the wording in the associated statutory requirements. For example, the language in 24 C.F.R. 3500.6(a) and (1) mirrors the language in RESPA regarding the lender’s responsibility to provide the information booklet and when the booklet must be provided. Also, 24 C.F.R. 3500.7(a) mirrors the language in the statute regarding the lender’s responsibility to provide good faith estimates. Other regulatory provisions do not mirror the statutory language but appear to be substantively the same as the statutory provisions. For example, 24 C.F.R. 3500.21(b)(1) says that lenders must disclose to each person who applies for a loan whether the servicing of the loan may be assigned, sold, or transferred to any other person at any time while the loan is outstanding. 
Section 3500.8(a) of the regulation says that unless specifically exempted, the HUD-1 or HUD-1A settlement statement must be used in every settlement involving federally related mortgage loans in which there is a borrower. Section 3500.15(b) states that affiliated business arrangements are not violations of 12 U.S.C. 2607 as long as certain conditions are met. The conditions set out in the regulation are substantively the same as those in 12 U.S.C. 2607(c); affiliated business arrangements or agreements are not prohibited as long as (A) the person making the referral has provided a written disclosure on the existence of such an arrangement to the person referred, (B) such person is not required to use any particular provider of settlement services, and (C) the only thing of value that is received from the arrangement (other than payments, such as fees or salaries) is a return on the ownership interest or franchise relationship. We could not determine whether HUD could have developed less burdensome disclosure requirements while still meeting the underlying requirements of RESPA. To make that determination we would have had to conduct a detailed examination of HUD’s disclosure requirements that were not statutorily mandated and determine whether HUD could have eliminated them or used an alternative approach that regulated entities would have perceived as less burdensome. Such an examination of each nonstatutory requirement would have demanded extensive time and resource commitments that were beyond the scope of this assignment. According to 12 U.S.C. 2601, the purpose of RESPA is to (1) simplify and improve the disclosures applicable to the transactions under these acts including the timing of the disclosures; and (2) provide a single format for the disclosures that will satisfy the requirements of each of the acts with respect to the transactions. 
An official from Bank A said that FDIC requires banks to complete call reports, quarterly statistical summaries of bank operations that are very detailed (28 pages for the bank) and require a significant amount of time for bank employees to complete. She said one employee spends 1 week during each quarter preparing the report. FDIC officials said that Section 7(a) of the Federal Deposit Insurance Act requires each FDIC-insured depository institution to submit quarterly “reports of condition,” also known as “call reports,” to the appropriate federal banking agency. The officials said that bank call reports generally consist of a balance sheet; an income statement; a statement of changes in equity capital; and supporting schedules that provide additional information on specific categories of assets and liabilities, off-balance sheet items, past due and nonaccrual assets, loan charge-offs and recoveries, and risk-based capital. FDIC officials said that the call report also includes the information used by FDIC to calculate each institution’s premiums for deposit insurance. An individual bank files one of four versions of the call report, depending upon whether it has foreign offices and on its size in total assets. The officials said the call report for banks with foreign offices is the most detailed, and the report for small banks is the least detailed. Officials from OCC and FRB also commented on this concern. They said that 12 U.S.C. 161(a) and 12 U.S.C. 324 require all banks to file call reports. They said call reports provide financial information for public disclosure, and regulators use them to evaluate the safety and soundness of the banking system. The officials said the reports enable them to make a proper assessment of a bank’s condition and are a critical element of the supervisory process.

The issue that we focused on in this concern is the agencies’ assertion that the call reports at issue in the concern are statutorily required.
We believe that the various statutes that require or authorize the banking agencies to collect information through the call reports gave the agencies some discretion in drafting the relevant regulatory provisions. Some of the provisions in the relevant statutes, such as in the Federal Deposit Insurance Act and the Federal Deposit Insurance Corporation Improvement Act of 1991, are very specific and therefore gave the banking agencies no discretion in how the regulatory provisions could be drafted. For example, 12 U.S.C. 1817(a) requires each institution to submit reports of conditions four times each year, stipulates that the reports include the total amount of the institution’s liability for deposits, and specifies that time and savings deposits and demand deposits be listed separately. Other provisions in the statutes gave the banking agencies at least some rulemaking discretion. For example, 12 U.S.C. 1817(a) requires financial institutions to submit call reports four times each year and specifies that two of those reports must be submitted between January and June and two between July and December. However, the statute allows the banking agencies to determine exactly when these reports must be filed. Also, 12 U.S.C. 161(a) says that financial institutions must file call reports with OCC in accordance with the Federal Deposit Insurance Act (codified at 12 U.S.C. 1811 et seq.). That act (12 U.S.C. 1817(a)(1)), in turn, says that certain banks (e.g., insured state nonmember banks) must submit those reports in such form and containing such information as FDIC may require. Therefore, although the relevant statutes require call reports to include some specific types of information, the statutes also give agencies the flexibility to require other information that they deem necessary and to specify the format of the reports. We believe that the regulatory provisions implementing these statutory requirements (codified at 12 C.F.R. 
304.4(a)) are within the authority granted by the relevant statutes. The regulation (1) requires that the call reports be prepared in accordance with instructions from the Federal Financial Institutions Examination Council; (2) lists a number of items that must be reported in, or taken into account during the preparation of, the call reports; and (3) requires that the reports be submitted on March 31, June 30, September 30, and December 31 of each year. These provisions are consistent with specific requirements in the statute (e.g., that the reports be submitted four times each year) or fall within the general rulemaking authority granted by the statutes. We could not determine whether FDIC, OCC, and FRB could have developed less burdensome call report requirements while still meeting the requirements of the relevant statutes. In order to make such a determination, we would need to do a detailed review of each of the nonstatutory data elements of the call reports. This type of detailed analysis would require significant time and resource commitments that were beyond the scope of this review. The Federal Deposit Insurance Act and the Federal Deposit Insurance Corporation Improvement Act of 1991 do not contain statements of purpose. An official from Bank A said that the information requirements for the call report keep changing, which she said makes it difficult for the bank to plan ahead. Officials from OCC, FDIC, and FRB said the events that make call report changes necessary include changes in statutes, regulations, accounting rules, technology, and the nature of the business of banking. They said existing items are deleted when they are no longer considered sufficiently useful. The issue that we focused on in this concern is the agencies’ assertion that changes in the call report requirements are, at times, required by changes in statutes. 
We believe that the relevant statutes gave OCC, FDIC, and FRB some discretion in how frequently they have made changes to the regulations requiring information in the call reports. In general, the banking agencies are required to review and make changes to the information required in the call reports that they believe are necessary. According to 12 U.S.C. 4805(c), each federal banking agency must review the information required by the schedules supplementing the core information on the call reports and “eliminate requirements that are not warranted for reasons of safety and soundness or other public purposes.” However, to determine the extent to which specific changes to the reporting requirements were driven by statutory changes, we reviewed the annual Revisions to the Reports of Condition and Income for 1993, 1994, and 1995—the 3-year period immediately prior to our 1996 study. During that period, the agencies added, deleted, or modified a total of 280 items from the call report requirements. Of these 280 changes, OCC officials said that 44 (16 percent) were driven by statutory requirements in the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA). We reviewed the statutory provisions in FDICIA that OCC officials said mandated the 44 changes to the call report requirements to determine how much discretion the statute permitted the banking agencies to make those changes. We concluded that the agencies had some discretion in what they could require with regard to 37 of the 44 changes. The statute often required agencies to collect general types of information, but left to the agencies’ discretion the specific information that banks were required to submit and/or whether the information had to be in the call reports. For example, 1 of the changes that OCC officials said resulted in the addition of 30 items to the call reports in 1993 involved the collection of information on loans to small businesses and small farms. 
Section 122 of FDICIA states that the agency must annually collect information on small business and small farm lending in the call reports and provides suggestions of the type of information that may be collected. Therefore, the statute gave the banking agencies no discretion about collecting this information and specifically required that the call reports serve as the reporting mechanism. However, the statute gave the agencies discretion regarding the specific information that banks had to report. In the remaining 7 of the 44 changes, we believe the banking agencies had no discretion with regard to changing the call report requirements. For example, in one of the seven changes, the statute required the agencies to collect information on all assets and liabilities in all banking reports. Therefore, the information had to be collected and had to be in all banking reports, including the call reports. We believe the banking agencies’ actions to change the call reports’ requirements were within the authority granted by the statutes. All of the changes that the agencies made to the call report requirements appeared to be either specifically required by FDICIA or were within the discretion that the statutes gave the agencies to make such changes. We could not determine whether FDIC, OCC, or FRB could have avoided making changes to the call report requirements while still meeting the requirements of the underlying statutes. In order to make such a determination, we would need to do a detailed review of each change to the call reports that was not specifically required by statute. This type of detailed analysis would require significant time and resource commitments that were beyond the scope of this review. FDICIA does not contain a statement of purpose. The officials from Bank A and the glass company raised concerns about staying current with and understanding the changing and increasingly complex regulatory requirements related to ERISA. 
IRS officials said the companies’ concerns fail to distinguish between the complexity of and burden that results from the statutes governing retirement plans and the effect of regulations promulgated by IRS and the Department of the Treasury. The officials said the relevant statutes have been amended frequently since ERISA was enacted, and have become increasingly complex. They said the companies’ concerns about the complexity of the statutes governing retirement plans are properly addressed to Congress, not IRS or other administrative agencies. The issue that we focused on in this concern is IRS’ assertion that the complexity and frequent changes in ERISA regulations are traceable to changes in the statute. We believe that IRS had some discretion with regard to how ERISA regulations could be drafted and how frequently those regulatory requirements could be changed. To determine the extent to which specific changes to ERISA regulatory requirements were driven by statutory changes, we asked IRS to identify all of the changes it had made in the relevant regulations between January 1994 and December 1995—the 2-year period prior to the issuance of our December 1996 report. IRS officials identified 37 such regulatory changes, 24 of which were based on amendments made to ERISA by 4 different statutes. For example, the Tax Reform Act of 1986 amended ERISA provisions (codified at 26 U.S.C. 414(r)) relating to pension plans of employers operating separate lines of business. Subparagraph (2) of 26 U.S.C. 414(r) states that an employer should be treated as operating separate lines of business if, among other things, it meets “guidelines prescribed by the Secretary or the employer receives a determination from the Secretary that such line of business may be treated as separate.” Therefore, we believe that IRS had discretion in how it drafted regulations implementing these amendments. IRS made 15 changes to regulatory provisions in 26 C.F.R. 
1.410 and 1.414 to interpret the separate line of business provisions. Also, statutory provisions in the Omnibus Budget Reconciliation Act of 1993 changed the compensation limit of an employee in a qualified trust and amended the formula for increasing the cost-of-living adjustment. The provision (codified at 26 U.S.C. 401(a)(17)) imposed a $150,000 limit on the amount of annual compensation of each employee in a qualified trust. It also required the Secretary of the Treasury to annually adjust the $150,000 limit for increases in the cost-of-living at the same time and in the same manner as adjustments are made pursuant to another subsection. However, the provision stipulated that a different base period be used and that any increase that is not a multiple of $10,000 must be rounded to the next lowest multiple of $10,000. We concluded that IRS had no discretion in how it drafted its implementing regulations for these provisions. IRS drafted a regulatory provision that reflects the statutorily prescribed adjustment process and the statutory change in the $150,000 limit. We believe that IRS’ ERISA regulations are within the authority granted by statute. The regulations did not appear to exceed the amount of rulemaking authority provided by the statutes. In each instance in which the relevant statutory provision gave IRS latitude in how its regulation could be written, the regulations appeared substantively consistent with the statutory intent and within the agency’s authority. In each instance in which the relevant statutory provision gave IRS no discretion in how its regulations could be changed, the regulations mirrored the statutory provisions. For example, the IRS regulation on the ERISA annual compensation limit (26 C.F.R. 1.401(a)(17)-1) states that the annual compensation limit is $150,000 and that the limit is adjusted for changes in the cost of living at the same time and in the same manner as in another subsection. 
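The statutory adjustment rule just described, a $150,000 base limit with any cost-of-living increase rounded down to the next lowest multiple of $10,000, can be sketched as follows. This is an illustrative sketch only: the base amount and rounding rule come from the text, while the function name and the cost-of-living factors are hypothetical.

```python
# Sketch of the 26 U.S.C. 401(a)(17) adjustment rule described above.
# The $150,000 base limit and the downward rounding of increases come
# from the statute as summarized in the text; the cost-of-living
# factor values below are hypothetical illustrations.

BASE_LIMIT = 150_000

def adjusted_compensation_limit(cola_factor: float) -> int:
    """Apply a cost-of-living factor to the base limit, rounding any
    increase down to the next lowest multiple of $10,000."""
    raw = BASE_LIMIT * cola_factor
    increase = raw - BASE_LIMIT
    if increase <= 0:
        return BASE_LIMIT  # the rule adjusts only for increases
    rounded_increase = int(increase // 10_000) * 10_000
    return BASE_LIMIT + rounded_increase

# A 9% cumulative cost-of-living increase yields a raw limit of
# $163,500; the $13,500 increase rounds down to $10,000.
print(adjusted_compensation_limit(1.09))  # 160000
# A 5% increase ($7,500) rounds down to $0, leaving the limit unchanged.
print(adjusted_compensation_limit(1.05))  # 150000
```

The design point the sketch illustrates is that the statute left IRS no discretion here: both the adjustment timing and the rounding arithmetic are fully prescribed.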
The regulation also mirrors the statutory requirements regarding the base period to be used to calculate the annual adjustments and the rounding of adjustments to the next lowest multiple of $10,000. We could not determine whether IRS could have developed less burdensome regulations that would have met the requirements of the underlying statutes. To do so we would have had to initiate a separate analysis of each provision for which IRS had rulemaking discretion, analyses that would have required extensive time and resource commitments that were beyond the scope of this assignment. Many of the statutes amending ERISA did not contain a statement of purpose.

Officials from the fish farm said that pesticide manufacturers were either not renewing the aquatic use of certain pesticides or were not seeking EPA approval of the products for use in aquaculture because of the expense associated with EPA’s reregistration program. EPA officials said that the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) (codified at 7 U.S.C. 136 et seq.) requires EPA to determine that the use of pesticides does not cause unreasonable adverse effects on humans or the environment. In 1988, Congress required (FIFRA section 4, 7 U.S.C. 136a-1) EPA to certify that all pesticides meet current testing standards for safety, including products that were first approved many years ago. These older pesticides were originally approved when the data requirements were less stringent and the associated costs of testing for safety were substantially less than they are today. Since much of the data on older pesticides may not meet current standards, the cost of conducting studies to support approval for use today may be substantial. The issue that we focused on in this concern is EPA’s assertion that the cost associated with the requirement that pesticide manufacturers reregister pesticides is traceable to FIFRA.
Although the requirement in question has a direct effect only on manufacturers of covered pesticides, the fish farm is affected secondarily by the decision of manufacturers to not seek reregistration of the pesticides because of the cost of reregistration. We believe that FIFRA gave EPA some discretion regarding the requirements that manufacturers must satisfy in the pesticide reregistration process. In some areas, EPA appears to have had little discretion. For example, 7 U.S.C. 136a-1 states that the Administrator of EPA must reregister “each registered pesticide containing any active ingredient contained in any pesticide first registered before November 1, 1984.” This section describes in some detail the approach EPA is to use in the reregistration process. For example, the statute requires the reregistration to be carried out in five separate phases and requires those seeking reregistration of a covered pesticide to submit a summary of each study the registrant considers adequate to meet the requirements of the statute, as well as the data underlying each such study. However, the statute gives the Administrator discretion regarding the specific data that manufacturers must submit. The statute (7 U.S.C. 136a-1(d)(3)) requires each registrant to submit all data required by regulations “issued by the Administrator under section 136a of this title . . . .” Section 136a requires the Administrator to publish guidelines specifying the kinds of information that will be required to support the registration of a pesticide. Although the statute provides general standards that the Administrator must consider when establishing data requirements for “minor uses” (e.g., considering “the impact of the cost of meeting the requirements on the incentives for any potential registrant to undertake the development of the required data”), the Administrator appears to have considerable discretion in establishing registration (and therefore reregistration) data requirements. 
These requirements can have a direct impact on the expense incurred by manufacturers in the reregistration process. An EPA official said there are no EPA regulations requiring reregistration of pesticides first registered before November 1, 1984. He said that the statute was so specific in delineating this requirement that they did not believe it was necessary to draft regulations that would mirror the statutory language. However, 40 C.F.R. Part 152 delineates the regulatory requirements for registration of pesticides under section 3 of FIFRA, including the data requirements that are referenced in the reregistration requirements of section 4 of the statute. Because FIFRA gave the EPA Administrator considerable discretion in establishing those data requirements, we concluded that the data requirements in the regulation pertinent to the reregistration process are within the authority granted by the statute. We could not determine whether EPA could have developed less burdensome data requirements that would have accomplished the underlying requirements of FIFRA. To do so, we would have had to examine each data requirement and determine whether the information was necessary to prevent unreasonable adverse effects on the environment. Conducting such an analysis would have required extensive time and resource commitments that were beyond the scope of this assignment. FIFRA does not contain a statement of purpose. One of the objectives of our review was to determine, for each of 27 company concerns, the amount of discretion the underlying statutes gave rulemaking agencies in drafting the regulatory requirements that the agencies said were attributable to the underlying statutes. The agencies that issued those requirements indicated in two of our 1996 reports that the concerns could, at least in part, be traced to statutory requirements underlying their regulations. 
In this review we concluded that the statutory provisions underlying 2 of the 27 concerns gave the rulemaking agencies broad discretion in how the related regulatory requirements could be drafted. We coded statutory provisions as allowing agencies “broad discretion” if the provisions contained few specific requirements or imposed few to no constraints on whether, and if so how, an agency’s regulations could be drafted. This appendix provides our detailed analysis of these two company concerns. Specifically, for each such concern it provides the following information: (1) the portion of the concern in our 1996 reports that the agency or agencies indicated was statutorily based, (2) the portion of the agency response in our 1996 reports that indicated the concern was statutorily based, (3) our analysis of the amount of rulemaking discretion the relevant statutory provisions gave the agencies (the first objective of our review), (4) our analysis of whether the regulatory requirements at issue in the concern were within the authority granted by the underlying statutes (the second objective of our review), (5) our analysis of whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while accomplishing the underlying statutory objectives (the third objective of our review), and (6) the main purpose of the underlying statutes (where such purpose statements were available). Appendix I of this report contains a detailed discussion of our scope and methodology. An official from Bank A said EEOC’s record retention standard is inconsistent with how EEOC pursues cases. He said EEOC requires the retention of personnel files for former employees for only 1 year after employees leave a company. 
The bank official said that if the bank had followed the EEOC guidelines and kept employees' files for only 1 year, it would have had a "major problem" on the several occasions when EEOC staff questioned bank officials about employees who had left several years ago. According to EEOC, the specific record retention standards in its regulations are tied to the periods in each statute during which discrimination complaints can be filed. For example, Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act (ADA) recordkeeping regulations (29 C.F.R. 1602.14) require that personnel records be kept for 1 year because charges can be filed up to 300 days after the alleged discrimination. Similarly, the Age Discrimination in Employment Act (ADEA) requires that employers retain employment records for a period of 1 year from the effective date of the personnel actions to which they relate because ADEA charges can be filed up to 300 days after the alleged age discrimination. However, ADEA recordkeeping regulations (29 C.F.R. 1627.3) also require employers to keep basic payroll information for 3 years because the Commission can investigate suspected age discrimination based on an untimely charge or even absent a charge. Finally, Equal Pay Act lawsuits must be filed within either 2 or 3 years of the alleged discrimination, so the related regulations (29 C.F.R. 1620.32) contain 2 and 3 year record retention periods. EEOC officials said that under all of the statutes, when a claim of discrimination is pending, the employer is required to preserve all relevant personnel records until final disposition of the charge or action. If the bank has complied with these requirements, destruction of records in the normal course of business when there is no pending charge of discrimination would not violate the law or give rise to an adverse inference. 
The issue that we focused on in this concern is EEOC’s assertion that its record retention requirements are tied to the filing periods in various civil rights statutes. “every employer subject to any provision of this chapter or of any order issued under this chapter shall make, keep, and preserve such records of the persons employed by him and of the wages, hours, and other conditions and practices of employment maintained by him, and shall preserve such records for such periods of time, and shall make such reports therefrom to the Administrator as he shall prescribe by regulation or order as necessary or appropriate for the enforcement of the provisions of this chapter or the regulations or orders thereunder.” (Emphasis added.) “[A]ny personnel or employment record made or kept by an employer...shall be preserved by the employer for a period of one year from the date of the making of the record or the personnel action involved, whichever occurs later. . . . [W]here a charge of discrimination has been filed, or an action brought by the Commission or the Attorney General, against an employer under title VII or the ADA, the respondent employer shall preserve all personnel records relevant to the charge or action until final disposition of the charge or the action.” Because the civil rights statutes do not specify how long employers must retain records, and because those statutes permit EEOC to require that records be kept for such periods as the Commission may prescribe, we believe that EEOC’s recordkeeping requirements fall within the broad discretion permitted in the statutes. We could not determine whether EEOC could have developed recordkeeping requirements that would have been less burdensome to regulated entities than those that it developed while still accomplishing the underlying statutory objectives. 
In a sense, EEOC’s recordkeeping requirements appear to be the least burdensome approach in that they closely relate to the filing periods in the antidiscrimination laws that EEOC cited. For example, under Title VII, ADA, and ADEA employees have up to 300 days to file a discrimination charge. The relevant record retention regulation states that records must be retained for 365 days. Filing periods under the Equal Pay Act range from 2 to 3 years, and the record retention requirements in EEOC’s regulations mirror those periods. EEOC could have used its rulemaking discretion to establish uniform record retention requirements (e.g., 5 or 10 years) for all of the statutes instead of the variable periods for the different statutes. This approach could have helped eliminate what the company viewed as an inconsistency between the requirements and the way EEOC pursues cases. However, it is not clear whether regulated entities would view a record retention requirement that is longer than the current requirement as being less burdensome. To determine how regulated entities would have viewed such a requirement (and therefore whether EEOC could have developed a less burdensome regulatory approach), we would have had to conduct an in-depth review of those entities’ views regarding record retention. Such a review would have required time and resource commitments that were beyond the scope of this assignment. 
“(1) to provide a clear and comprehensive national mandate for the elimination of discrimination against individuals with disabilities; (2) to provide clear, strong, consistent, enforceable standards addressing discrimination against individuals with disabilities; (3) to ensure that the [F]ederal [G]overnment plays a central role in enforcing the standards established in this chapter on behalf of individuals with disabilities; and (4) to invoke the sweep of congressional authority, including the power to enforce the fourteenth amendment and to regulate commerce, in order to address the major areas of discrimination faced day-to-day by people with disabilities.” According to 29 U.S.C. 621, the purpose of ADEA is “to promote employment of older persons based on their ability rather than age; to prohibit arbitrary age discrimination in employment; to help employers and workers find ways of meeting problems arising from the impact of age on employment.” “[T]he Congress hereby finds that the existence in industries engaged in commerce or in the production of goods for commerce of wage differentials based on sex—(1) depresses wages and living standards for employees necessary for their health and efficiency; (2) prevents the maximum utilization of the available labor resources; (3) tends to cause labor disputes, thereby burdening, affecting, and obstructing commerce; (4) burdens commerce and the free flow of goods in commerce; and (5) constitutes an unfair method of competition.” Title VII of the Civil Rights Act of 1964 contained no statement of purpose. Bank B officials said some bank regulations give nonbanks (e.g., investment brokerage firms) an unfair competitive edge in the marketplace. For example, they said one regulation requires banks to disclose the risks faced by consumers with certain investment products, although investment firms are not required to make similar disclosures. 
Bank B officials said that in a recent 60-second media advertisement for the bank, about a quarter of the airtime the bank bought had to be spent publicizing regulatory issues (e.g., rates and term disclosures). They said a nonbank could have spent the same advertising time simply selling its products and services. In our December 1996 report, OCC officials said that the examples of competitive inequality cited by Bank B are due to the fact that banks and nonbanks operate under different statutory schemes. During this review, OCC officials said banks operate as federally insured financial institutions and nonbanks do not. Therefore, they said it is appropriate for banking agencies to adopt additional disclosure requirements that address the unique features of the banking industry. OCC officials noted that the different disclosures provided by banks stem not from a regulation but from a policy statement, issued jointly by OCC and the other banking agencies in 1994, that provides guidance to the industry concerning practices that are consistent with safe and sound banking practices. Issuing the policy statement was an exercise of the authority of OCC and the other banking agencies to determine what constitutes safe and sound banking practices pursuant to 12 U.S.C. 1818. OCC officials also said that because banks offer both insured and uninsured investment products, it is important that banks inform consumers whether a given product is insured. Failure to do so could constitute an unsafe and unsound banking practice, resulting in liability to the bank or, at a minimum, damage to the bank’s reputation. OCC officials noted that OCC and the other banking agencies issued the policy statement in question to alert banks about the potential problems in this area and to suggest practices—including providing the disclosures noted by Bank B officials—that can help banks avoid these problems. 
OCC officials concluded that it is appropriate to continue tailoring the disclosures provided to purchasers of investment products according to whether there is a significant risk of confusion over whether a product is insured. The issue that we focused on in this concern is OCC’s assertion that differences in “statutory schemes” between banks and nonbanks require differences in their disclosure requirements. We believe that the statutes underlying the interagency policy statement gave the banking agencies broad discretion in developing the disclosure requirements. OCC officials indicated that the banking agencies issued the policy statement under their authority in 12 U.S.C. 1818 to determine whether a given practice is consistent with safe and sound banking. Given the scope of this authority and because the disclosure requirements in the policy statement appear related to the agency’s authority, we concluded that the statutes gave OCC and the other banking agencies broad discretion to issue the policy statement requiring the disclosures at issue in this concern. Also, because OCC’s and the other banking agencies’ statutory authority does not extend to nonbanks, the policy statement does not apply to those institutions. We believe that the interagency policy statement requiring certain types of disclosures for nondeposit investment products is within the broad rulemaking authority granted to OCC and the other banking agencies by the underlying statutes. For example, the policy statement requires, among other things, that insured depository institutions disclose that certain products (1) are not insured by FDIC; (2) are not a “deposit or other obligation of, or guaranteed by, the depository institution;” and (3) are subject to investment risk, including possible loss of the principal amount invested. 
The policy statement also says that the disclosures should be provided to customers during any sales presentation and in advertisements and other promotional materials. Because the underlying statutes give OCC and the other banking agencies the authority to take the actions that they believe are necessary to remedy or prevent unsafe and unsound banking practices, we believe the policy statement is within OCC’s statutory authority. We could not determine whether OCC and the other banking agencies could have developed disclosure requirements that would have been less burdensome to regulated entities while still accomplishing the underlying purpose of the statutes. To do so we would have had to conduct a detailed review of each disclosure requirement and determine how important it was to consumers in understanding which products are insured and which are not. Such a review would require significant time and resource commitments that were beyond the scope of this review. The above-cited statutory provisions do not contain a statement of purpose. Alan N. Belkin, Assistant General Counsel; James M. Rebbe, Senior Attorney. 
| Pursuant to a congressional request, GAO reviewed federal agencies' assertions that certain private-sector regulatory concerns were, at least in part, attributable to underlying statutes, focusing on: (1) the amount of discretion the underlying statutes gave the rulemaking agencies in developing the regulatory requirements that the agencies had said were attributable to the underlying statutes; (2) whether the regulatory requirements at issue were within the authority granted by the underlying statutes; and (3) whether the rulemaking agencies could have developed regulatory approaches that would have been less burdensome to the regulated entities while still meeting the underlying statutory requirements. GAO noted that: (1) the statutes underlying 13 of the 27 regulatory concerns that it examined gave the rulemaking agencies no discretion in establishing the regulatory requirements at issue; (2) in these cases, the underlying statutes specifically stated what the regulated entities must do, and, by inference, what the related regulations must require; (3) the underlying statutes for 12 of the 27 concerns gave the agencies some discretion in developing the regulatory requirements at issue; (4) in these cases, the agencies often had no rulemaking discretion with regard to certain issues but had some or broad discretion regarding other issues; (5) the agencies had broad discretion in developing the regulatory requirements at issue in the two remaining concerns; (6) the regulatory provisions the agencies developed in relation to all of the 27 company concerns were within the authorities granted by the underlying statutes; (7) for those concerns in which the underlying statutes gave the agencies no discretion as to how the associated regulations could be developed, those regulations were consistent with, and often mirrored, the specific requirements in the statutes; (8) for those concerns in 
which the statutes gave the agencies some or broad rulemaking discretion, the regulations did not appear to exceed the discretion allowed in those statutes; (9) for the 13 concerns for which GAO concluded agencies had no discretion, it also concluded that there were no less burdensome regulatory approaches available to the agencies that would have met the requirements of the statutes; (10) GAO could not determine whether less burdensome regulatory approaches were available for the remaining 14 of the 27 concerns, for which the statutes gave the agencies some or broad rulemaking discretion; (11) to make those determinations, GAO would have had to conduct a detailed examination of the implementation of each of the regulatory provisions that the agencies had said were attributable to the underlying statutes and the implications of alternative approaches--analyses that would have required time and resource commitments that were beyond the scope of this review; and (12) although this review focused on only 27 regulatory concerns, GAO believes that it can offer insights into some broader issues. |
Long-term fiscal simulations by GAO, the Congressional Budget Office (CBO), and others all show that despite a 3-year decline in the federal government’s unified budget deficit, we still face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. In fact, our long-range challenge has grown in the past three years and the projected tsunami of entitlement spending is closer to hitting our shores. The long-term fiscal challenge is largely a health care challenge. Although Social Security is important because of its size, the real driver is health care spending. It is both large and projected to grow more rapidly in the future. GAO’s current long-term simulations show ever-larger deficits resulting in a federal debt burden that ultimately spirals out of control. Figure 1 shows two alternative fiscal paths. The first is “Baseline extended,” which extends the CBO’s August baseline estimates beyond the 10-year projection period, and the second is an alternative based on recent trends and policy preferences. Our “Alternative simulation” assumes action to return to, and remain at, historical levels of revenue, and it reflects both somewhat higher discretionary spending and more realistic Medicare estimates for physician payments than does the Baseline extended scenario. Although the timing of deficits and the resulting debt buildup varies depending on the assumptions used, both simulations show that we are on an imprudent and unsustainable fiscal path. The bottom line is that the nation’s longer-term fiscal outlook is daunting under any realistic policy scenario or set of assumptions. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. 
Our current path also will increasingly constrain our ability to address emerging and unexpected budgetary needs and will increase the burdens faced by future generations. Although Social Security, Medicare, and Medicaid dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of responsibilities, programs, and activities that may either obligate the government to future spending or create an expectation for such spending. In fact, last year the U.S. government’s major reported liabilities, social insurance commitments, and other fiscal exposures continued to grow. They now total approximately $50 trillion—about four times the nation’s total output (GDP) in fiscal year 2006—up from about $20 trillion, or two times GDP, in fiscal year 2000. (See fig. 2.) Absent meaningful reforms, these amounts will continue to grow every second of every minute of every day due to continuing deficits, known demographic trends, and compounding interest costs. (See GAO, Fiscal Exposures: Improving the Budgetary Focus on Long-Term Costs and Uncertainties, GAO-03-213 (Washington, D.C.: Jan. 24, 2003).) In addition to the proposal that both of you are offering, I’m pleased to say that several other members on both sides of the political aisle and on both ends of Capitol Hill are also taking steps to answer the call for fiscal prudence by proposing bills to accomplish similar objectives. I was pleased to join you when you announced this proposal. As I said at the time, I believe it offers one potential means to achieve an objective we all should share: taking steps to make the tough choices necessary to keep America great and to help make sure that our country’s, children’s, and grandchildren’s future is better than our past. Senators Conrad and Gregg, thank you for your leadership. 
I was especially pleased to see that the task force that would be created by your legislation was informed by GAO’s work on the key elements necessary for any task force or commission to be successful. Last year we looked at several policy-oriented commissions. (See app. I for a summary table on that work.) Our analysis suggests that there are a number of factors that can increase the likelihood a commission will be successful. Examples of those factors—and elements your proposal encompasses—are: a broad charter that does not artificially limit what can be discussed and does not set policy preconditions (like “must support individual accounts”) for membership; involvement of leaders from both the executive and legislative branches, including elected officials; a report with specific proposals and a requirement for a supermajority vote to make recommendations to the President and the Congress; and a process to require consideration of the proposals. A few of these points deserve elaboration. Having a broad charter and no preconditions is very important. This means that “everything is on the table”—and that is critical in order for the effort to be credible and have any real chance of success. But let me be clear what we mean by “everything is on the table”—it means that everything is open for discussion and debate. It does not mean advance agreement to a specific level of revenues or benefit changes. The only precondition should be the end goal: to put the nation’s fiscal outlook back on a prudent and sustainable path for the future. I believe that having true bipartisanship and active involvement by both the executive and the legislative branches is important. If any proposal is seen as partisan or the product of only one branch, it is unlikely to fly with the American people. 
Candidly, based on my interactions with thousands of Americans from across the nation during the past two years, there is little confidence in the ability of elected officials to rise above partisan battles and ideological divides. As a result, I believe that any related commission or task force should also involve knowledgeable professionals from selected nonpartisan institutions who have significant expertise and experience. Finally, the task force or commission will need to move beyond diagnosis to prescription. We know the path must be changed. What we need now are credible and specific legislative proposals that will accomplish that. Furthermore, these should come from a supermajority of the task force or commission members with a mechanism to assure a vote on a majority basis by the Congress. At your request, we are looking at how other countries have reformed their entitlement programs—not the substance of their reforms but rather the process that led up to the reform. As countries have sought to reform entitlements such as pensions and disability, they have often used commissions as a means to develop reform proposals that became the basis for legislation. For example, the 2003 Rurup Commission in Germany, composed of experts, public officials, and others, made recommendations for reform of public pensions that were enacted in 2004 and 2007. In the Netherlands, the 2000 Donner Commission composed of respected public figures representing the major political parties developed recommendations that became the basis for major disability reform legislation enacted in 2005. In the early 1990s, a working group of parliamentary members in Sweden developed the concept of a major structural reform of their public pension system that was worked out in detail in succeeding years and enacted in 1998. 
In addition to these types of commissions, several countries also have permanent advisory bodies tasked with periodically informing the government on pension policy challenges and reform options. Our related work is not yet complete, but some of what we have found to date would not surprise you. These special groups—whether commissions or task forces—can and do fill multiple roles including public education, coalition building, “setting the table” for action, and providing a means for and cover to act. Leadership is key, and public education is also important. You asked that we comment on some particulars—and on areas where we think further refinements would increase the chances of success. Let me now turn to three areas: (1) timing and how to ensure involvement of the newly elected President; (2) congressional action—whether, and if so how, to permit amendments to or substitutes for the commission’s proposals, and the supermajority vote requirement; and (3) the chairmanship of the commission. A great strength of your proposal is that it calls for the task force or commission to deliberate throughout 2008. As you know, members of the Fiscal Wake-Up Tour believe that fiscal responsibility and intergenerational equity must be a top priority for the new President. We all agree that finding solutions will require leadership, bipartisan cooperation, a willingness to discuss all options, and courage to make tough choices. For example, those who argue that spending must come down from projected levels should explain which programs they would target and how the savings would be achieved. Those who argue for higher taxes should explain what level of taxation they are willing to support, the manner in which the new revenue would be raised, and the mechanisms that will help to ensure that any additional revenues will be used in a manner that will help rather than hinder our effort to be fiscally responsible. 
Those who are unwilling to do either should explain how much debt they are willing to impose on future generations of Americans. Indeed, we have suggested a number of key questions we believe it is reasonable to ask the candidates. These include the following: What specific spending cuts, if any, do you propose and how much of the problem would they solve? What specific tax increases, if any, do you propose and how much of the problem would they solve? What is your vision for the future of Social Security and what strategies would you pursue to bring it about? What is your vision for the nation’s health care system, including the future of Medicare, and what strategies would you pursue to bring it about? These questions and others should be addressed by all the presidential candidates so the public can assess whether each candidate appreciates the magnitude of the problem, the consequences of doing nothing (or making the problem worse), and the realistic trade-offs needed to find real and sustainable solutions. Although I believe the candidates should recognize the seriousness of this challenge, I also believe it is unrealistic to expect candidates to offer coherent, fully comprehensive proposals at this point in the campaign. In that sense the task force or a similar commission performs a great service: candidates could promise to take seriously any information or proposals and to engage in a constructive manner with the group after the election. They could agree that for the task force or commission to have a chance of succeeding, “everything must be on the table,” at least for discussion. That said, it is important to find a way to involve whoever is elected as our new President. After all, it will be the person elected approximately 53 weeks from now who must use the “bully pulpit” and put his or her energy and prestige behind the effort to help ensure success. 
Although I think having a deadline is important, I believe that a December 9, 2008, deadline for the commission’s report does not offer enough time for the kind of input and involvement that will be necessary. Some way must be found to gain the active involvement and buy-in of the incoming President. In any event, it seems likely that the December 2008 deadline would need to be replaced—perhaps with a January or February 2009 date. You also asked us to think about the current requirement for a “fast track” up-or-down vote in the House and Senate and the requirement for a supermajority in both houses. As former Congressman and former Office of Management and Budget (OMB) Director Leon Panetta has said, in any effort to change our fiscal path “nothing will be agreed to until everything is agreed to.” This statement also offers a warning about the dangers of picking apart any package. Whatever process is developed for considering the task force’s recommendations should protect the proposal from being picked apart amendment by amendment. The task force is charged with developing— and agreeing to—a coherent proposal which, taken as a whole, will put us on a prudent and sustainable long-term fiscal path. Presumably, to reach agreement, the members will have made compromises—any proposal is going to have elements that represent concessions by the various members. In all likelihood those concessions will have been made in return for concessions by others. If individual elements can be eliminated by amendment, the likelihood that the package will achieve its goal will be reduced. The very process of coming up with a coherent proposal means that the package is likely to stand or fall as a whole. In that sense the prohibition on amendments makes some sense. At the same time, I believe it would make sense to permit alternatives. 
I say alternatives not amendments because I believe it is important that any alternatives achieve the same change in fiscal path as the task force’s proposal. The SAFE bill proposed by Senator Voinovich and by Representatives Cooper and Wolf does permit alternatives—but it holds them to the same standards and criteria as the proposal from the commission. Permitting alternative packages to be offered and voted upon may increase the credibility and acceptance of the end result. The Task Force bill requires both a supermajority to report out a proposal and a supermajority in both houses to adopt the proposal. The supermajority requirement within the task force (or commission) offers assurance that any proposal has bipartisan support. It offers stronger backing for a proposal that must reflect difficult choices. If a proposal comes to the Congress with a two-thirds or three-fourths vote of the task force, the necessity for a supermajority vote to enact the proposal in the Congress is less clear. It is even possible that this requirement could offer the opportunity for a minority to derail the process. Any package that makes meaningful changes to our fiscal path is going to contain elements that generate significant opposition. Therefore, although I think requiring a supermajority within the task force makes sense, requiring a supermajority vote for enactment of the task force or commission’s proposal by the Congress is inappropriate. In my view, such a requirement puts too many hurdles in the way of making tough choices and achieving necessary reforms. Finally, Chairman Conrad, Senator Gregg, let me raise a question about the role envisioned for the outgoing Administration. I believe you are correct to include executive branch officials. In this regard, I have the utmost respect for the current Secretary of the Treasury. 
I have met with him on several occasions and am well aware that he has made several statements about the need for action on our long-term fiscal challenge. At the same time, I believe that designating a cabinet official in an outgoing administration as the task force chairman presents some serious challenges and potential drawbacks. Both the strength and the weakness of having the Secretary of the Treasury participate is that he will be seen as representing the outgoing President. While participation by the executive branch at the highest level will be important, having an outgoing Administration official serve as chairman may serve to hinder rather than help achieve acceptance and enactment of any findings and recommendations. Given the fiscal history of the first 7 years of this century and the experience with the Commission to Strengthen Social Security, I would question whether having the Treasury Secretary or any other current Administration official serve as chairman is the right way to go. Before concluding, I would like to say a few words about what I hope is a renewed push to find a vehicle for addressing this very important challenge. Senator Voinovich has proposed the SAFE Commission. Its membership differs from that of your Task Force proposal, but it seeks the same goal—improving our fiscal path. As I noted, Congressmen Cooper and Wolf have joined to introduce companion bills in the House: one to the SAFE Commission bill and one to the Conrad-Gregg Bipartisan Task Force bill. As a result, both the Senate and the House have before them bills that seek to create vehicles for executive-legislative bipartisan development of credible, specific, legislative proposals to put us back on a prudent and sustainable fiscal path in order to ensure that our future is better than our past. We owe it to our country, children, and grandchildren to do no less. These are encouraging signs. I hope there is movement in this Congress. 
At the same time I think we must recognize that achieving and maintaining fiscal sustainability is not a one-time event. Even if a task force or commission is created and succeeds in developing a proposal and that proposal is enacted, it will be necessary to monitor our path. In that context I note that the proposal by Senators Feinstein and Domenici for a permanent commission would require periodic review and reporting of recommendations every 5 years to maintain the adequacy and long-term solvency of Social Security and Medicare. In our work looking at other countries we note that reform is an ongoing process and that no matter how comprehensive the initial reforms, some adjustments are likely to be necessary. Something like the ongoing commission suggested by Senators Feinstein and Domenici may be a good companion and follow-on to the task forces or commissions envisioned by either the Bipartisan Task Force or the SAFE Commission bills. We will need to be flexible in our response to early challenges and successes as we move forward. Changing our fiscal path to a prudent and sustainable one is hard work, and achieving reform requires a process with both integrity and credibility. In our work on other countries’ entitlement reform efforts, we see that reforms are sometimes the culmination of earlier efforts that may have seemed “unsuccessful” at the time. For example, a 1984 Swedish commission on pension reform did not reach consensus on a proposal, but its work helped set the stage for a process that resulted in a major reform. Similarly, the recent reforms of public pensions in Germany and disability in the Netherlands built upon a long series of incremental reform changes. Each reform effort can move the process forward, and each country must find its own way. Today we can build on previous efforts in the United States. In this country we have been discussing Social Security reforms and developing reform options since the mid-1990s. 
We have had two major commissions on entitlement reform in the last decade—a Presidential commission on Social Security in 2001 and a Congressional commission on Medicare in 1998. There have also been discussions, studies, and commissions on tax reform. As we said in our report on the December 2004 Comptroller General forum on our nation's long-term fiscal challenge, leadership and the efforts of many people will be needed to change our fiscal path. The issues raised by the long-term fiscal challenge are significant and affect every American. By making its proposal, this Committee has shown the kind of leadership that is essential if we are to successfully address the long-term fiscal challenge that lies before us. The United States is a great nation, possibly the greatest in history. We have faced many challenges in the past, and we have met them. It is a mistake to underestimate the commitment of the American people to their country, children, and grandchildren, or to underestimate their willingness and ability to hear the truth and support the decisions necessary to deal with this challenge. We owe it to our country, children, and grandchildren to address our fiscal and other key sustainability challenges. The time for action is now. Mr. Chairman, Senator Gregg, members of the Committee, let me repeat my appreciation for your commitment and concern in this matter. We at GAO stand ready to assist you in this important endeavor.
[Table: comparison of prior commissions, covering mandate restrictions (e.g., revenue neutrality; preserving incentives for homeownership, charitable giving, and savings; consideration of equity and simplicity), membership size and party composition, inclusion of former Members of Congress, chairmanship, report dates (Jan. 1995, Dec. 2001, July 2004, Nov. 2005), and whether each reached consensus on recommendations.]

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO has for many years warned that our nation is on an imprudent and unsustainable fiscal path. During the past 2 years, the Comptroller General has traveled to 24 states as part of the Fiscal Wake-Up Tour. Members of this diverse group of policy experts agree that finding solutions to the nation's long-term fiscal challenge will require bipartisan cooperation, a willingness to discuss all options, and the courage to make tough choices. Indeed, the members of the Fiscal Wake-Up Tour believe that fiscal responsibility and intergenerational equity must be a top priority for the new President. Several bills have been introduced that would establish a bipartisan group to develop proposals/policy options for addressing the long-term fiscal challenge.
At the request of Chairman Conrad and Senator Gregg, the Comptroller General discussed GAO's views on their proposal to create a Bipartisan Task Force for Responsible Fiscal Action (S. 2063). Long-term fiscal simulations by GAO, the Congressional Budget Office (CBO), and others all show that despite some modest improvement in near-term deficits, we face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. Under any realistic policy scenario or assumptions, the nation's longer-term fiscal outlook is daunting. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path will also increasingly constrain our ability to address emerging and unexpected budgetary needs and will increase the burdens faced by future generations. As the Comptroller General stated when the bill was introduced, the Bipartisan Task Force for Responsible Fiscal Action offers one potential means of taking the steps and making the tough choices necessary to keep America great and to help ensure that our country's, children's, and grandchildren's future is better than our past. GAO noted that the bill incorporates key elements needed for any task force or commission to be successful: (1) a statutory basis; (2) a broad charter that does not artificially limit what can be discussed and does not set policy preconditions for membership; (3) bipartisan membership; (4) involvement of leaders from both the executive and legislative branches, including elected officials; (5) a report with specific proposals and a requirement for a supermajority vote to make recommendations to the President and the Congress; and (6) a process to require consideration of the proposals. GAO also made some suggestions it believes could enhance the likelihood that the bill will achieve its overarching goals.
GAO suggested the sponsors consider (1) including a way for the next President to be involved in the process of proposal development, (2) permitting alternative packages to be voted on that would achieve the same fiscal result, and (3) eliminating the requirement for a supermajority in Congress. With the same aim, GAO also expressed some reservations about the current approach to specifying the Task Force Chairman.
The use of wireless phone service has grown rapidly in recent years. By the end of 2008, about 82 percent of adults lived in households with wireless phone service, up from 54 percent at the end of 2005. Furthermore, by the end of 2008, about 35 percent of households used wireless phones as their primary or only means of telephone service, of which about 20 percent had only wireless phones and the other 15 percent had landlines but received all or most calls on wireless phones. Consumers' use of wireless phones for other purposes, such as text messaging, photography, and accessing the Internet, has also increased dramatically. For example, FCC reports that, while a subscriber's average minutes of use per month grew from 584 to 769 from 2004 to 2007, the number of text messages grew more than tenfold during the same period. Within the wireless phone industry, four nationwide wireless phone service carriers—AT&T, Sprint, T-Mobile, and Verizon—operate alongside regional carriers of various sizes. The four major carriers serve more than 85 percent of wireless subscribers, but no single competitor has a dominant share of the market. As recently as 2007, more than 175 companies identified themselves as wireless phone service carriers. To subscribe to wireless phone service, a customer must select a wireless phone service carrier and either sign a contract and choose a service plan or purchase prepaid minutes and buy a phone that works with the prepaid service. Most customers sign contracts that specify the service plan and the number of minutes and text messages the customer is buying for a monthly fee. Also, new customers who sign contracts for wireless phone service sometimes pay up-front fees for "network activation" of their phones and usually agree to pay an "early termination fee" if they quit the carrier's network before the end of the contract period.
In return for signing a contract, customers often receive wireless phones at a discount or no additional cost. In 1993, the Omnibus Budget Reconciliation Act (1993 Act) was enacted, creating a regulatory framework to treat wireless phone service carriers consistently and encourage the growth of a competitive marketplace. Specifically, the law required FCC to treat wireless carriers as common carriers but gave FCC authority to exempt wireless service carriers from specific regulations that apply to common carriers if FCC could demonstrate that doing so would promote competition, that the regulations were unnecessary to protect consumers, and that the exemption was consistent with the public interest. FCC has specific authority to regulate wireless phone service rates and market entry, while states are preempted from doing so; however, states may regulate the other “terms and conditions” of wireless phone service. The 1993 Act also directed FCC to require wireless carriers, like other common carriers, to provide service upon reasonable request and terms without unjust or unreasonable discrimination, as well as to adhere to procedures for responding to complaints submitted to FCC. Subsequently, the Telecommunications Act of 1996 authorized FCC to exempt wireless service carriers from these sections; however, in a 1998 proceeding to consider whether to exempt certain wireless phone service carriers from these requirements, FCC specifically stated that it would not do so, noting that these respective sections represented the “bedrock consumer protection obligations” of common carriers. FCC’s rules specify that the agency has both informal and formal complaint processes. FCC’s informal complaint process allows consumers to file complaints with FCC that the agency reviews and forwards to carriers for a response. The formal complaint process, which is similar to a court proceeding, requires a filing fee and is rarely used by consumers. 
State agencies also play a role in wireless phone service oversight. State utility commissions (sometimes called public utility commissions or public service commissions) regulate utilities, including telecommunications services such as wireless phone service and landline phone service. State commissions may also designate wireless phone service carriers as eligible telecommunications carriers (ETC)—a designation that allows carriers to receive universal service funds for serving consumers in high-cost areas. Through this process, state utility commissions may place conditions on how wireless carriers provide services in those high-cost areas in order for them to be eligible for such funds. State attorneys general broadly serve as the chief legal officers of states while also representing the public interest, and their work has included addressing wireless consumer protection issues. For example, in 2004, the attorneys general of 32 states entered into voluntary compliance agreements with Cingular Wireless (now AT&T), Sprint, and Verizon, under which the carriers agreed to disclose certain service terms at the point-of-sale and in their marketing and advertising, provide a service trial period, appropriately disclose certain taxes and surcharges on customers’ bills, and respond to consumers’ complaints and inquiries. According to our consumer survey, overall, wireless phone service consumers are satisfied with the service they receive. Specifically, we estimate that 84 percent of adult wireless users are very or somewhat satisfied with their wireless phone service and that approximately 10 percent are very or somewhat dissatisfied with their service (see fig. 2). Stakeholders we interviewed identified a number of aspects of wireless phone service that consumers have reported problems with in recent years. 
We identified five key areas of concern on the basis of these interviews and our review of related documents, and we subsequently focused our nationwide consumer survey on these areas (see table 1). Based on our survey results, we estimate that most wireless phone users are satisfied with these five specific aspects of service (see table 2). For example, we estimate that 85 percent of wireless phone users are very or somewhat satisfied with call quality, while the percentages of those very or somewhat satisfied with billing, contract terms, carrier’s explanation of key aspects of service at the point of sale, and customer service range from about 70 to 76 percent. Additionally, we estimate that most wireless phone users are satisfied with their wireless phone service coverage. For example, we estimate that 86 to 89 percent of wireless phone users are satisfied with their coverage when using their wireless phones at home, at work, or in their vehicle. While we estimate that about three-fourths or more of wireless phone service users are satisfied with specific aspects of their service, the percentages of those very or somewhat dissatisfied range from about 9 to 14 percent, depending on the specific aspect of service. For example, we estimate that 14 percent of wireless phone users are dissatisfied with the terms of their service contract or agreement. While the percentages of dissatisfied users appear to be small, they represent millions of people since, according to available estimates, the number of adult wireless phone service users is over 189 million. Other results of our survey suggest that some wireless phone consumers have experienced problems with billing, certain service contract terms, and customer service recently—that is, during 2008 and early 2009. Specifically, our survey results indicate the following: Billing. 
We estimate that during this time about 34 percent of wireless phone users responsible for paying for their service received unexpected charges and about 31 percent had difficulty understanding their bill at least some of the time. Also during this time, almost one-third of wireless users who contacted customer service about a problem did so because of problems related to billing. Service contract terms. Among wireless users who wanted to switch carriers during this time but did not do so, we estimate that 42 percent did not switch because they did not want to pay an early termination fee. Customer service. Among those users who contacted customer service, we estimate that 21 percent were very or somewhat dissatisfied with how the carrier handled the problem. Our analysis of FCC consumer complaint data also indicates that billing, terms of the service contract, and customer service are areas where wireless consumers have experienced problems in recent years. Furthermore, FCC complaint data indicate that call quality is an area of consumer concern. Specifically, our analysis of FCC data indicates that the top four categories of complaints from 2004 through 2008 regarding service provided by wireless carriers were billing and rates, call quality, early termination of contracts, and customer service, as shown in figure 3 (see app. II for additional discussion of FCC wireless consumer complaint data). Our survey of state utility commissions also found that billing, contract terms, and quality of service were the top categories of consumer complaints related to wireless phone service that commissions received in 2008. Specifically, among the 21 commissions that track wireless consumer complaints, 14 noted billing, 10 noted contract terms, and 10 noted quality of service as among the top three types of complaints commissions received in 2008. 
Additionally, 3 commissions specifically cited early termination fees as one of the top three categories of complaints they received in 2008. In response to the areas of consumer concern noted above, wireless carriers have taken a number of actions in recent years. For example, officials from the four major carriers—AT&T, Sprint, T-Mobile, and Verizon—reported taking actions such as prorating their early termination fees over the period of the contract, offering service options without contracts, and providing Web-based tools consumers can use to research a carrier's coverage area, among other efforts. In addition, in 2003, the industry adopted a voluntary code with requirements for dealing with customers and, according to CTIA–The Wireless Association, the wireless industry spent an average of $24 billion annually between 2001 and 2007 on infrastructure and equipment to improve call quality and coverage. Also, carriers told us they use information from third-party tests and customer feedback to determine their network and service performance and identify needed improvements. (See app. III for additional information about industry actions to address consumer concerns.) Representatives of state agencies and various consumer and industry associations we interviewed expressed concern to us that many of the actions the industry has taken to address consumers' concerns are voluntary and have not effectively addressed some major consumer concerns. For example, officials from some state public utility commissions indicated that there are no data to support the effectiveness of the wireless industry's voluntary code and that this code lacks the level of oversight that state agencies can offer. Moreover, officials from state utility commissions and consumer associations we spoke with indicated that the industry's actions to prorate early termination fees may be inadequate because the fees are not reduced to $0 over the course of the contract period.
Furthermore, some representatives of state agencies and consumer groups suggested that the industry has taken voluntary actions such as adopting the voluntary code and prorating early termination fees to avoid further regulation by FCC. Industry representatives, however, told us that the voluntary approach is more effective than regulation, since it gives the industry flexibility to address these concerns. FCC processes tens of thousands of wireless consumer complaints each year but has conducted little additional oversight of services provided by wireless phone service carriers because the agency has focused on promoting competition. The agency receives informal consumer complaints and forwards them to carriers for response; however, our consumer survey results suggest that most wireless consumers with problems would not complain to FCC and many do not know where they could complain. FCC has also not articulated goals and measures that clearly identify the intended outcomes of its complaint-processing effort. Consequently, if wireless consumers do not know where they can complain or what outcome to expect if they do, they may be confused about where to go for help or what assistance they can expect from FCC. Additionally, FCC cannot demonstrate how well it is achieving the intended outcomes of its efforts. While FCC monitors wireless consumer complaints by reviewing the top categories of complaints received, it has conducted few in-depth analyses to identify trends or emerging issues, impeding its ability to determine whether its rules have been violated or if new rules may be needed. FCC receives about 20,000 to 35,000 complaints each year related to services provided by wireless carriers, which the agency forwards to carriers for response. 
Given that our survey indicates that an estimated 21 percent of consumers who contact their carrier’s customer service about a problem are dissatisfied with the result, FCC’s efforts to process complaints are an important means for consumers to get assistance in resolving their problems. After reviewing a complaint received, FCC responds by sending the consumer a letter about the complaint’s status. If FCC determines that the complaint is valid, the agency sends the complaint to the carrier and asks the carrier to respond to FCC and the consumer within 30 days. Once FCC receives a response from the carrier, the agency reviews the response, and if it determines the response has addressed the consumer’s complaint, it marks the complaint as closed. According to FCC officials, if the response is not sufficient, FCC contacts the carrier again. FCC officials told us they consider a carrier’s response to be sufficient if it responds to the issue raised in the consumer’s complaint; however, such a response may not address the problem to the consumer’s satisfaction. When FCC considers a complaint to be closed, it sends another letter to the consumer, which states that the consumer can call FCC with further questions or, if not satisfied with the carrier’s response, can file a formal complaint. FCC officials also told us that if a consumer is not satisfied, the consumer can request that FCC mediate with the carrier on his or her behalf; however, the letter that FCC sends to a consumer whose complaint has been closed does not identify mediation as an option. FCC closes most wireless phone service complaints within 90 days of receiving them. Specifically, according to FCC’s complaint data, the agency closed 61 percent of complaints received in 2008 within 90 days (see fig. 4). FCC uses several methods to inform consumers that they may complain to the agency about their wireless phone service and has taken steps to improve its outreach. 
According to FCC officials, the agency provides information on how to complain to FCC on its Web site and in fact sheets that are distributed through various methods, including its Web site. Also, in response to a 2003 recommendation from its Consumer Advisory Committee to improve outreach to consumers about the agency's process for handling complaints, FCC switched from using one complaint form to having multiple forms for different types of complaints to make filing complaints easier for consumers. FCC also made its complaint forms and fact sheets available in Spanish and has distributed consumer fact sheets at outreach events and conferences. Furthermore, the agency created an e-mail distribution list for disseminating consumer information materials, which it used to inform consumers about the revised complaint forms. We have previously noted that it is important for an agency's consumer protection efforts to inform the public effectively and efficiently about its role and how to seek redress. Additionally, we have reported on various ways an agency can communicate with the public about its efforts, noting that exploring multiple methods of communicating with the public may improve outreach. Such outreach methods can include making effective use of Web sites, e-mail listservs, or other Web-based technologies like Web forums, as well as requiring relevant companies to provide information to their customers. For example, many state utility commissions require landline carriers to include information on customers' bills about how to contact the commission with a complaint. Despite FCC's efforts to improve its outreach, these efforts may not be adequately informing the public about the agency's role in handling consumer complaints.
Specifically, based on the results of our consumer survey, we estimate that 13 percent of adult wireless phone users would complain to FCC if they had a problem that their carrier did not resolve and that 34 percent do not know where they could complain. Therefore, many consumers who experience problems with their wireless phone service may not know to contact FCC for assistance or may not know at all whom they could contact for help. We reported these survey results in June 2009. In August 2009, noting our survey results, FCC sought public comment on whether there are measures the agency could take to ensure that consumers are aware of FCC's complaint process, including whether FCC should require carriers to include information for consumers on their bills about how to contact FCC with a complaint. FCC's goals and measures related to its efforts to process wireless consumer complaints do not clearly identify the intended outcomes of these efforts. The Government Performance and Results Act of 1993 (GPRA) requires an agency to establish outcome-related performance goals for its major functions. GPRA also requires an agency to develop performance indicators for measuring the relevant outcomes of each program activity so that the agency can demonstrate how well it is achieving its goals. The key goal related to FCC's consumer complaint efforts is to "work to inform American consumers about their rights and responsibilities in the competitive marketplace." This key goal also has a subgoal to "facilitate informed choice in the competitive telecommunications marketplace." According to FCC officials, "informed choice" means consumers are informed about how a particular telecommunications market works, what general services are offered, and what to expect when they buy a service.
FCC’s measure related to its efforts to process wireless consumer complaints under this subgoal is to respond to consumers’ general complaints within 30 days, which reflects the time it takes FCC to initially respond to the consumer about the status of a complaint. The measure does not clearly or fully demonstrate FCC’s achievement of its goal to facilitate informed consumer choice. Instead, it is a measure of a program output, or activity, not of the outcome the agency is trying to achieve. Another subgoal is to “improve customer experience with FCC’s call centers and Web site.” While this subgoal does identify an intended outcome, FCC does not have a measure related to this outcome that pertains to consumers who complain about services provided by their wireless carrier. FCC officials told us that they do not measure customer experience with the agency’s call centers and Web sites but sometimes receive anecdotal information from customers about their experiences. We have previously reported that to better articulate results, agencies should create a set of performance goals and related measures that address important dimensions of program performance. FCC’s goals may not represent all of the important dimensions of FCC’s performance in addressing consumer complaints. A logical outcome of handling complaints is resolving problems or, if a problem cannot be resolved, helping the consumer understand why that is the case. However, it is not clear whether resolving problems is an intended outcome of FCC’s consumer complaint efforts. While FCC’s goals in this area indicate that informing consumers is a goal of the agency, some information from FCC implies that another intended outcome of these efforts is to resolve consumers’ problems. 
For example, FCC's fact sheets state that consumers can file a complaint with FCC if they are unable to resolve a problem directly with their carrier, which may lead consumers to believe that FCC will assist them in obtaining a resolution. However, FCC officials told us that the agency's role in addressing complaints, as outlined in the law, is to facilitate communication between the consumer and the carrier and that FCC lacks the authority to compel a carrier to take action to satisfy many consumer concerns. Thus, it is not clear if the intended outcome of FCC's complaint-handling efforts is resolving consumer problems, fostering communication between consumers and carriers, or both. Furthermore, FCC has not established measures of its performance in either resolving consumer problems or fostering communication between consumers and carriers. For example, FCC does not measure consumer satisfaction with its complaint-handling efforts. Without clear outcome-related goals and measures linked to those goals, the purpose and effectiveness of these efforts are unclear, and the agency's accountability for its performance is limited. As noted above, consumers may not know to contact FCC if they have a complaint about their wireless phone service. Additionally, because FCC has not clearly articulated the intended outcomes of its complaint-processing efforts, consumers may not know the extent to which FCC can aid them in obtaining a satisfactory resolution to their concerns, and since FCC's letters to consumers do not indicate that mediation is available, consumers may not know that they can request this service from FCC. Consequently, consumers with wireless service problems may be confused about where to seek assistance and what kind of assistance to expect if they do know they can complain to FCC.
FCC has few rules that specifically address services consumers receive from wireless phone service carriers, and in general, the agency has refrained from regulating wireless phone service in order to promote competition in the market. FCC's rules include general requirements for wireless carriers to provide services upon reasonable request and terms and in a nondiscriminatory manner, and to respond to both informal and formal complaints submitted to FCC by consumers. FCC also has specific rules, known as truth-in-billing rules, requiring wireless carriers and other common carriers to present charges on customers' bills that are clear and nonmisleading. Additionally, FCC's rules establish other consumer protections, such as requirements for wireless carriers to provide enhanced 911 and other emergency services, and number portability rules that allow customers to keep their phone numbers when switching between wireless carriers or between landline and wireless services. While FCC has rules that cover billing, the agency has not created specific rules governing other key areas of recent consumer concern that we identified (see table 3). According to FCC, the agency does not regulate issues such as carriers' contract terms or call quality, since the competitive marketplace addresses these issues, leading carriers to compete on service quality and proactively respond to any related concerns from consumers. Additionally, having determined that exempting carriers from certain regulations will promote competition, FCC has used its authority under the 1993 Act to exempt wireless carriers from some rules that apply to other communications common carriers. For example, in 1994, FCC exempted wireless carriers from rate regulations that apply to other common carriers. FCC has stated that promoting competition was a principal goal of the 1993 Act under which Congress established the regulatory framework for wireless phone service oversight.
Under the 1993 Act, when exempting wireless phone service carriers from regulations in order to promote competition, as it has done, FCC must determine that the exemption is in the public interest and that the regulations are not necessary for the protection of consumers. FCC officials told us that the agency has taken a "light touch" in regulating the industry because it is competitive and noted that carriers compete with one another to provide better service. FCC proposed rules in 2005 for wireless carriers to address further regulation of billing practices and, in 2008, to address carriers' reporting of service quality information such as customer satisfaction and complaint data. FCC has received comments on both proposals but has taken no further action to date. In August 2009, as part of its effort to seek comment on a number of telecommunications consumer issues, FCC sought comment on the effectiveness of its truth-in-billing rules and whether changes in these rules are needed. FCC monitors informal complaints submitted by consumers to determine whether further regulation is needed and whether the wireless industry is complying with the agency's rules, but such monitoring is limited. According to FCC officials, trends in consumer complaint data may alert them to the need for changes in regulation. Furthermore, FCC has acknowledged that when exempting telecommunications service providers, such as wireless carriers, from its regulations, the agency has a duty to ensure that consumer protection needs are still met. FCC's Consumer and Governmental Affairs Bureau reviews the top categories of complaints reported in the agency's quarterly reports of consumer complaints and looks for trends. FCC officials said that the agency does not routinely conduct more in-depth reviews of the nature of wireless consumer complaints unless they are needed to support an FCC decision-making effort, such as a rulemaking proceeding.
FCC does not document its monitoring of consumer complaints and does not have written policies and procedures for routinely monitoring complaints. FCC has taken a number of actions to enforce its rules that apply to wireless phone service carriers, but the agency has conducted no enforcement of its truth-in-billing rules as they apply to wireless service. One of the agency's performance goals is to enforce FCC's rules for the benefit of consumers. According to representatives of FCC's Enforcement Bureau, trends in consumer complaints that identify potential violations of FCC rules may signal the need for FCC to conduct an investigation, which could lead to an enforcement action. For example, in reviewing complaint data, the bureau identified five wireless carriers that had not responded to consumer complaints, which, in 2008, led the agency to initiate enforcement actions against these carriers. However, Enforcement Bureau officials told us that they have not reviewed complaints to look for potential wireless truth-in-billing rules violations. Under the method it currently uses to categorize informal complaints, FCC cannot easily determine whether complaints may indicate a potential violation of FCC's truth-in-billing rules. For example, FCC officials told us that while the agency uses category codes to identify types of complaints related to billing, such as codes for rates, line items, and fees, FCC officials would have to review complaints individually to determine whether they revealed a potential violation of its truth-in-billing rules—an analysis FCC has not conducted. Furthermore, according to FCC officials, since the application of the agency's truth-in-billing rules to wireless carriers was expanded in 2005, the agency has conducted no formal investigations of wireless carriers' compliance with these rules because investigating other issues has been a priority and FCC has received no formal complaints in this area.
Since our consumer survey indicates that about a third of consumers responsible for paying their wireless bills have had problems understanding their bill or received unexpected charges, the enforcement of truth-in-billing rules is important for the protection of consumers. Lacking in-depth analysis of its consumer complaints, FCC may not be aware of trends or emerging issues related to consumer problems, if specific rules—such as the truth-in-billing rules—are being violated, or if additional rules are needed to protect consumers. Our standards for internal control in the federal government state that agencies should have policies and procedures as an integral part of their efforts to achieve effective results. Without adequate policies and procedures for conducting such analyses of its consumer complaints, FCC may not be able to ensure that its decisions to exempt carriers from regulation promote competition and protect consumers. Results of our survey of state utility commissions show that while most commissions process wireless consumer complaints, most do not regulate wireless phone service. Representatives of state utility commissions and other stakeholders we interviewed told us that states’ authority under federal law to regulate wireless phone service is unclear, and this lack of clarity has, in some cases, led to costly legal proceedings and some states’ reluctance to provide oversight. Additionally, based on the results of our survey, communication between these commissions and FCC regarding oversight of wireless phone service is infrequent. In response to our survey of 51 state utility commissions, 33 commissions reported receiving complaints about wireless phone service, which they process in different ways. 
Specifically, 20 of these commissions work with the consumer and/or wireless carrier to resolve wireless complaints, while the other 13 commissions that accept complaints forward the complaint or refer the consumer to the relevant wireless carrier or another government entity. States that forwarded complaints or referred consumers to other government entities most frequently did so to FCC or a state attorney general, with some complaints also going to the Federal Trade Commission, a state consumer advocate, or another state agency. State utility commission officials we spoke with in California, Nebraska, and West Virginia, which all accept complaints and work with carriers and consumers to resolve them, told us that they have access to higher-ranking carrier representatives than do consumers who call the carriers directly. This access, they said, helps them resolve wireless consumer complaints in an effective and timely manner. Twenty-one of the 33 commissions that accept complaints reported recording and tracking the number and types of wireless phone service complaints they receive. According to their survey responses, these commissions received a total of 8,314 wireless service complaints in 2008. Most commissions do not regulate wireless phone service. As noted previously, under federal law, states may regulate “terms and conditions” of wireless phone service, although they are preempted from regulating rates and entry. In response to our survey, 19 commissions reported having rules (or regulations) for wireless phone service, either for telecommunications services generally, including wireless service, or wireless services specifically (see fig. 5). Few commissions have rules within the following five main areas related to the terms and conditions of wireless service that we asked about in our survey: service quality, billing practices, contract or agreement terms and conditions, advertising disclosures, and disclosure of service terms and conditions.
Specifically, the number of commissions that have rules in these areas ranges from 3 that have rules about disclosure of service terms and conditions to 15 that have rules about service quality (see fig. 6). While fewer than half of the commissions have wireless rules, most designate wireless carriers as eligible telecommunication carriers (ETC) to receive universal service funds for serving high-cost areas. Although ETC status is not required for a wireless carrier to operate in a high-cost area, it is required if the carrier wants to receive universal service funding. We previously reported that wireless carriers often lack the economic incentive to install wireless towers in rural areas where they are unlikely to recover the installation and maintenance costs, but high-cost program support allows them to make these investments. Most commissions place conditions on receiving these funds related to various aspects of service. Specifically, 41 commissions in our survey reported having processes to designate wireless carriers as ETCs, and 31 reported placing such conditions on carriers to receive these funds. For example, the Nebraska state commission requires designated wireless ETCs to submit reports about coverage, service outages, complaints, and their use of universal service funding. For each of the five main areas related to the terms and conditions of service we asked about, more commissions reported having conditions for wireless ETCs than rules for wireless carriers (see fig. 6). Such conditions would not apply to wireless carriers generally—only to those carriers designated as ETCs to provide services in high-cost areas. Few state utility commissions—five—reported taking enforcement action against wireless phone service carriers since the beginning of 2004. According to national organizations representing state agencies, states’ concerns about the cost of pursuing these issues in court have created a reluctance to do so. 
State utility commissions generally cannot regulate wireless phone service unless they are granted authority to do so by state law. According to our survey of state utility commissions, many state commissions do not have authority to regulate wireless phone service, and most that do have authority indicated that it is limited. Specifically, 21 commissions reported having authority to regulate wireless phone service, with 5 commissions indicating they have authority to regulate in all areas related to the terms and conditions of service (excluding those aspects of service preempted by federal law) and 16 indicating they have authority to regulate in some areas. Twenty-one commissions reported that they do not have wireless regulatory authority and another 9 commissions would not assert whether they did or did not have wireless regulatory authority for various reasons (see fig. 7). As discussed in the next section, according to some state officials, the lack of authority or limited authority in many states to regulate wireless phone service may be due to concerns about the lack of clarity in federal law regarding states’ authority to regulate wireless phone service. State authority under federal law to regulate wireless phone service is not clear, based on the views of stakeholders we interviewed, court cases, FCC proceedings, a 2005 FCC task force report, and comments in our survey of state utility commissions. As discussed earlier, in 1993, Congress developed a wireless regulatory framework that expressly prohibited states from regulating the market entry or rates charged by wireless phone service carriers, while retaining states’ authority to regulate other “terms and conditions” of wireless service. In an accompanying report, Congress stated that “terms and conditions” was intended to include billing practices and disputes, as well as other consumer protection matters. 
The report further stated that the examples it provided of matters that could fall within a state’s lawful authority under “terms and conditions” were illustrative and not meant to preclude other matters generally understood to fall under “terms and conditions.” Despite this guidance, whether specific aspects of service are considered “rates” or “terms and conditions” has been the subject of disputes at FCC, in state regulatory bodies, and in the courts. For example, courts have recently been grappling with cases about whether billing line items and early termination fees are defined as “rates,” and are therefore not subject to state regulation, or as other “terms and conditions,” which may be regulated by states. Such cases have not resolved the issue, as courts have reached different conclusions about the meaning of these terms or await action by FCC. (See app. IV for examples of legal proceedings that address states’ authority to regulate terms and conditions of wireless phone service.) FCC has provided limited guidance about the meaning of “terms and conditions.” The agency did offer preliminary observations in response to petitions states filed with FCC seeking to continue regulating wireless rates and in a few other proceedings. For example, in 1995, FCC noted that while states could not set or fix wireless rates in the future, they could process consumer complaints under state law because “terms and conditions” was flexible enough to allow states to continue in this role. FCC has also said that states may designate wireless carriers as ETCs and that states may impose consumer protection requirements on wireless carriers as a condition for ETC designation. In 1999, FCC concluded that billing information, practices, and disputes fall within these other terms and conditions.
Subsequently, in 2005, as part of its truth-in-billing proceeding, FCC concluded that regulation of line items by states constituted rate regulation, thereby preempting state authority; however, this conclusion was rejected by the Eleventh Circuit Court of Appeals. In this proceeding, FCC also asked commenters to address the proper boundaries of “other terms and conditions” and to describe what they believe should be the roles of FCC and the states in defining carriers’ billing practices. However, this proceeding is still open, and FCC has taken no further action to define the proper role of states in regulating billing practices. The lack of clarity regarding states’ authority to regulate wireless service has led to delays in deciding some legal matters and some states’ reluctance to provide oversight. In some instances, when hearing cases involving early termination fees, courts have halted proceedings pending FCC’s resolution of its own proceedings examining whether such fees should be defined as “rates” or “terms and conditions.” For example, in 2008, rather than issue a ruling, a U.S. District Court in the state of Washington deferred to FCC a case against a wireless carrier involving early termination fees, citing FCC’s primary jurisdiction over the issue. According to FCC officials, when courts defer cases to FCC, the agency does not automatically address the issue, but requires that a party file a petition asking FCC to do so. Officials of national organizations representing state agencies and officials from state agencies we interviewed told us that some states are reluctant to regulate wireless phone service until their authority is clarified. This is due, in part, to the potential legal costs that could be incurred if their authority is challenged in court by the industry. Such reluctance may lead to less consumer protection in certain states that otherwise might issue regulations. 
As we have previously reported, to develop an efficient and effective regulatory framework, the appropriate roles of participants, including states, should be identified. Because of the lack of clarity noted above, various stakeholders have expressed a desire for clearer roles for FCC and the states in providing wireless phone service oversight. For example, officials of national organizations representing state agencies, as well as officials from state agencies we interviewed, told us that clarity from Congress or FCC about the scope of state authority in regulating wireless phone service is needed. Some industry representatives also told us that there should be better guidance on the respective roles of state and federal agencies. A report by the FCC Wireless Broadband Access Task Force in 2005 recommended that FCC further clarify states’ authority to regulate “terms and conditions,” saying ambiguity about this authority has resulted in several disputes at FCC, in state regulatory bodies, and in the courts, and has caused significant regulatory uncertainty that will adversely affect investment in and deployment of wireless networks and other services. In 2005, CTIA–The Wireless Association petitioned FCC to declare that early termination fees are rates, and FCC sought comment on the petition. Recently, when CTIA–The Wireless Association withdrew its petition, four consumer groups opposed its withdrawal, hoping that FCC would offer some clarity on whether early termination fees are subject to state laws and regulations in order to help resolve some pending state lawsuits. State, consumer, and industry stakeholders hold varying views about how the meaning of “terms and conditions” should be clarified, which would affect states’ authority to regulate wireless phone service. 
Industry representatives argue that “terms and conditions” should be defined narrowly, which would preempt states’ ability to regulate aspects of wireless phone service that fall outside the definition. For example, industry representatives have stated that early termination fees and billing line items should be considered “rates,” rather than “terms and conditions,” which would preclude state utility commissions from regulating these aspects of service. In general, industry representatives have supported regulation at only the federal level, which they claim would avoid inconsistent state regulatory requirements they say would add to their costs. In contrast, state agency representatives and some consumer organizations have supported clarifying the meaning of “terms and conditions” to broadly encompass various aspects of wireless phone service, since they oppose efforts to preempt states’ regulatory authority. For example, state consumer advocates and consumer organizations have argued that aspects of service such as early termination fees and billing line items should fall within the definition of “terms and conditions” of service that states have authority to regulate. These representatives argue that states should have authority to create and enforce wireless phone service regulations, since they claim states are better positioned to effectively address consumers’ problems. Based on the results of our survey of state utility commissions, communication between FCC and state commissions about wireless phone service oversight is infrequent. Eleven state commissions indicated they had communicated with FCC about wireless phone service oversight issues during the last 6 months of 2008, and 33 commissions reported they had no contact with FCC about wireless phone service oversight during that time. 
Four of the 11 state commissions reported having communication with FCC during that 6-month period about wireless phone service complaints the state commissions had received from consumers. State utility commission officials we interviewed in California, Nebraska, and West Virginia said there was a need for better communication between FCC and the states regarding wireless phone service oversight, and the National Association of Regulatory Utility Commissioners has called for more focused and routine dialogue between FCC and the states, including a formal process to discuss jurisdictional issues. While FCC officials told us they routinely coordinate with state utility commissions about the handling of wireless complaints, they have no written policies or procedures on how they communicate with the states about wireless phone service oversight issues. FCC officials do participate in monthly conference calls with state utility commissions and state attorneys general during which wireless phone service oversight issues can be discussed. However, the state utility commission organizer of this conference call told us that wireless issues are rarely discussed, in part because few states actively regulate wireless phone service. Communication between federal and state agencies that share oversight of a particular industry—such as between FCC and state utility commissions—can be useful for sharing expertise and information, such as data on consumer complaints that could be used to identify problems that may warrant regulatory oversight. As noted earlier, federal law provides that oversight of wireless phone service is a responsibility shared by FCC and the states. 
Also, FCC, in issuing its rules for implementing the wireless regulatory framework created by the 1993 Act, agreed with a suggestion by the National Association of Regulatory Utility Commissioners that state and federal regulators should cooperate in monitoring the provision of wireless services and share monitoring information. We previously reported that collaboration between agencies tasked with shared responsibilities produces more public value than independent actions by such agencies, and we identified practices that can help agencies sustain such collaboration. These practices include identifying and addressing needs by leveraging resources to support a common outcome and agreeing on roles and responsibilities in agency collaboration. Additionally, we have recently developed a framework with characteristics of an effective system for providing regulatory oversight. One characteristic of this framework is a systemwide focus—among both federal and state regulators—with mechanisms for identifying consumer concerns that may warrant regulatory intervention, while another characteristic is an efficient and effective system within which the appropriate role of the states has been considered, as well as how the federal and state roles can be better harmonized. Without effective communication between FCC and state regulators, FCC may not be able to ensure such focus and clear delineation of the federal and state roles. Without written policies and procedures for how FCC communicates with states about wireless phone service oversight, FCC may be missing opportunities to work with its state partners in conducting oversight, such as sharing complaint data that could be used for monitoring trends. This lack of communication may also limit FCC’s awareness of issues the states are encountering in their oversight of wireless carriers. Additionally, without clear awareness of state-level efforts, FCC may not be aware of inconsistencies among state oversight efforts that could indicate a need for changes in its regulations.
Although the percentages of consumers dissatisfied with various aspects of their wireless phone service are small, these small percentages represent millions of people. By emphasizing its responsibility under the law to foster a competitive marketplace for wireless service, FCC has contributed to the industry’s growth and to innovative products and services that have benefited consumers. Nevertheless, FCC’s responsibility to protect consumers from harm remains critical, particularly given the growing numbers of wireless service consumers and the limited number of requirements governing key aspects of service that are currently of concern to consumers. FCC’s processing of consumers’ informal complaints may be an important means for dissatisfied consumers to get help, but as long as FCC lacks clear outcome-related goals and measures for this process, consumers do not know what they can expect from it, and FCC cannot demonstrate its effectiveness in assisting consumers who need help. While most states accept wireless consumer complaints, many do not work with the carrier and the consumer to resolve those complaints, making FCC’s efforts an important resource for consumers in those states that do not accept or work to resolve complaints. However, if, as our survey of wireless users suggests, most consumers are not aware they can complain to FCC, those with problems may not know how to seek a fair resolution. Furthermore, without policies and procedures to monitor consumers’ concerns and thereby identify problems that may warrant regulatory or enforcement action, FCC cannot ensure that consumers are adequately protected under the competitive deregulatory framework the agency has fostered.
Finally, without clear guidance for states on the extent of their regulatory authority under federal law, or policies and procedures for how to communicate with states about wireless phone service oversight, FCC could be missing opportunities to partner with state agencies in developing an effective regulatory system. The lack of clarity about states’ authority may discourage some states from taking action to protect consumers. While FCC does have efforts to assist consumers, leveraging state resources by clarifying state authority would better ensure that identified problems can be addressed effectively at either the state or the federal level. Additionally, policies and procedures to guide how FCC and the states communicate would help ensure that FCC and the states are sharing information to guide their oversight. Improved communication between FCC and state regulators could help both parties ensure they are providing effective oversight with a systemwide focus and clearer roles enabling them to better identify trends in complaints and emerging consumer concerns that may warrant changes in regulation. We are making the following five recommendations to the Chairman of the Federal Communications Commission.

To improve the effectiveness and accountability of FCC’s efforts to oversee wireless phone service, direct the commission to

1. clearly inform consumers that they may complain to FCC about problems with wireless phone service and what they can expect as potential outcomes from this process, and expand FCC’s outreach to consumers about these efforts;

2. develop goals and related measures for FCC’s informal complaint-handling efforts that clearly articulate intended outcomes and address important dimensions of performance; and

3. develop and implement policies and procedures for conducting documented monitoring and analysis of consumer complaints in order to help the agency identify trends and emerging issues and determine whether carriers are complying with existing rules or whether new rules may be needed to protect consumers.

To better ensure a systemwide focus in providing oversight of wireless phone service and improve FCC’s partnership with state agencies that also oversee this service, direct the commission to

4. develop and issue guidance delineating federal and state authority to regulate wireless phone service, including pulling together prior rulings on this issue; addressing the related open proceedings on truth-in-billing and early termination fees; and, if needed, seeking appropriate statutory authority from Congress; and

5. develop and implement policies and procedures for communicating with states about wireless phone service oversight.

We provided a draft of this report to FCC for its review and comment. FCC provided written comments, which appear in appendix V. FCC agreed with our recommendation on monitoring and had no position on the others, but noted it has started to take steps to address the issues we raise in our report. In particular, FCC noted that its August 2009 notice of inquiry sought comment on a number of issues related to the findings and recommendations in this report. The agency views this action as the first step in implementing several of the report’s recommendations. Regarding clearly informing consumers about its complaint process and expanding outreach to consumers, FCC noted that its notice of inquiry sought comment on whether the agency should take measures to ensure that consumers are aware of its complaint process. Additionally, FCC noted that it intends to do more to inform consumers of the services it offers to assist them, including making it clear that consumers can request that FCC mediate with their carrier on their behalf.
Regarding developing goals and measures that clearly articulate the intended outcomes of its complaint-handling efforts, FCC noted that it already has some performance measures for these efforts and that, since the outcome of each complaint varies depending on its particular circumstances, the appropriate performance measures for this effort should measure its procedural aspects rather than its substantive outcomes. We note, however, that as we indicated in this report, it is not clear to consumers what they can expect from FCC’s complaint process. Articulating the intended outcome of this process—whether it be to help consumers resolve their problems, facilitate communication between carriers and consumers, or both—would provide consumers with a better understanding of the purpose of this effort, as well as help the agency better demonstrate results. Regarding our recommendation to develop and implement documented monitoring of its consumer complaints, FCC noted that it has been working to make improvements to its complaint database, including its analytical tools, which will facilitate such monitoring. Regarding the development of guidance delineating federal and state authority to regulate wireless phone service, FCC noted that, in response to its August 2009 notice of inquiry, the agency is currently updating the public record regarding its truth-in-billing rules and carriers’ early termination fees, and expects to use this as the basis for potential federal regulatory action, which could include delineating areas within the states’ authority that the record indicates should be addressed. Regarding policies and procedures for communicating with states about wireless phone service oversight, FCC noted that it is always looking for new and better ways to communicate with its state partners and that its recent notice of inquiry also asks whether FCC can take further action to reach out to state, as well as federal, local, and tribal government entities.
We also provided FCC a draft of this report’s related e-supplement, GAO-10-35SP, containing additional results of our surveys of consumers and state utility commissions. FCC indicated it did not have any comments in response to the e-supplement. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix VI. This report examines (1) consumers’ satisfaction with wireless phone service and problems they have experienced with this service, as well as the industry’s response to these problems; (2) the Federal Communications Commission’s (FCC) efforts to oversee services provided by wireless phone service carriers; and (3) state utility commissions’ efforts to oversee services provided by wireless phone service carriers. To respond to the overall objectives of this report, we interviewed FCC officials and reviewed documents obtained from the agency. We also reviewed relevant laws and FCC regulations. Additionally, we interviewed individuals representing consumer organizations, state agencies, and the industry to obtain their views on wireless phone service consumer concerns and oversight efforts. Table 4 lists the organizations with whom we spoke. To obtain information about consumers’ satisfaction with wireless phone service and problems they have experienced with this service, we conducted a telephone survey of the U.S. adult population of wireless phone service users.
Our aim was to produce nationally representative estimates of adult wireless phone service users’ (1) satisfaction with wireless service overall and with specific aspects of service, including billing, terms of service, carriers’ explanation of key aspects of service, call quality and coverage, and customer service; (2) frequency of problems with billing and call quality; (3) desire to switch carriers and barriers to switching; and (4) knowledge of where to complain about problems. Percentage estimates have a margin of error of less than 5 percentage points, unless otherwise noted. We conducted this survey of the American public from February 23, 2009, through April 5, 2009. A total of 1,143 completed interviews were collected, and calls were made to all 50 states. Our sampling approach included randomly contacting potential respondents using both landline and cell phone telephone numbers. Using these two sampling frames provided us with a more comprehensive coverage of adult cell phone users than if we had sampled from only one frame. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Each sampled adult was subsequently weighted in the analysis to account statistically for all of the adult cell phone users of the population. 
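The sampling-error bound described above can be illustrated with the standard normal-approximation formula for a proportion. This is a simplified sketch that ignores the survey's dual-frame design weights (a design-adjusted variance would be somewhat larger), so the numbers are illustrative only:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95 percent normal-approximation confidence
    interval for a proportion p estimated from n interviews (simple
    random sampling assumed; design effects are ignored here)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for the survey's 1,143 completed interviews:
moe = margin_of_error(0.5, 1143)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 2.9
```

Under these simplifying assumptions, even the worst-case estimate stays under the 5-percentage-point bound cited above; subgroup estimates based on fewer respondents carry wider intervals.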
The final weight applied to each responding adult cell phone user included an adjustment for the overlap in the two sampling frames, a raking adjustment to align the weighted sample to the known population distributions from the 2009 supplement of the U.S. Census Bureau’s Current Population Survey and the Centers for Disease Control and Prevention’s 2008 National Health Interview Survey, and an expansion weight to ensure the total number of weighted adults represent an estimated adult population eligible for this study. We conducted an analysis of the final weighted estimates from our survey designed to identify whether our results contain a significant level of bias because our results inherently do not reflect the experiences of those who did not respond to our survey—i.e., a nonresponse bias analysis. We compared unadjusted weighted estimates and final, nonresponse-adjusted weighted estimates of the proportion of U.S. adults’ cell phone usage to similar population estimates from the 2008 National Health Interview Survey, which also includes questions about household telephones and whether anyone in the household has a wireless phone. While we identified evidence of potential bias in the unadjusted weighted estimate, the final weighting adjustments appear to address this potential bias, and we did not observe the same level of bias when examining the final weighted estimates. Based on these findings, we chose to include final weighted estimates at the national level from our survey in the report. In addition, we identified all estimates in the report with margins of error that exceeded plus or minus 5 percentage points and we did not publish estimates with a margin of error greater than plus or minus 9 percentage points. Telephone surveys require assumptions about the disposition of noncontacted sample households that meet certain standards. These assumptions affect the response rate calculation. 
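The raking adjustment described above can be sketched as iterative proportional fitting: unit weights are alternately rescaled until the weighted sample margins match known population margins. The two demographic dimensions and the target totals below are hypothetical, not the Census CPS and NHIS distributions actually used:

```python
import numpy as np

def rake(weights, rows, cols, row_targets, col_targets, iters=50):
    """Rescale unit weights until the weighted row and column margins
    converge to the supplied population targets."""
    w = weights.astype(float).copy()
    for _ in range(iters):
        for r, target in enumerate(row_targets):   # e.g., age groups
            sel = rows == r
            w[sel] *= target / w[sel].sum()
        for c, target in enumerate(col_targets):   # e.g., sex
            sel = cols == c
            w[sel] *= target / w[sel].sum()
    return w

# Six hypothetical respondents in 2 age groups x 2 sexes, starting from
# equal weights; the population margins sum to 100 on each dimension.
rows = np.array([0, 0, 0, 1, 1, 1])
cols = np.array([0, 1, 1, 0, 0, 1])
w = rake(np.ones(6), rows, cols, row_targets=[60, 40], col_targets=[55, 45])
```

After convergence, the weighted sample reproduces both sets of margins at once, which is what lets a sample skewed toward one demographic still yield nationally representative estimates.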
For this survey the response rate was calculated using the American Association of Public Opinion Research (AAPOR) Response Rate 3, which includes a set of assumptions. Based on these assumptions, the response rate for the survey was 32 percent; however, the response rate could have been lower if different assumptions had been made and might also be different if calculated using a different method. We used random digit dial (RDD) sampling frames that include both listed and unlisted landline numbers from working blocks of numbers in the United States. The RDD sampling frame approach cannot provide any coverage of the increasing number of cell-phone-only households and limited coverage of cell-phone-mostly households (i.e., households that receive most of their calls on cell phones in spite of having a landline). Because of the importance of reaching such households for this survey about wireless phone service, we also used an RDD cell phone sampling frame. The RDD cell phone sampling frame was randomly generated from blocks of phone numbers that are dedicated to cellular service. About 43 percent of the completed interviews were from the RDD cell phone sample. Because many households contain more than one potential respondent, obtaining an unbiased sample from an RDD frame of landline numbers requires interviewing a randomly selected respondent from among all potential respondents within the sampled household (as opposed to always interviewing the individual who initially answers the phone). We obtained an unbiased sample by using the most recent birthday method, in which the interviewer asks to speak to the household member aged 18 or older with a wireless phone who had the most recent birthday. If the respondent who was identified as the member of the household with the most recent birthday was unavailable to talk and asked to schedule a callback, the call representative recorded the person’s name and preferred telephone number for the callback. 
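The AAPOR Response Rate 3 calculation referred to above divides completed interviews by all known-eligible cases plus an estimated-eligible share (e) of cases whose eligibility is unknown. The case dispositions below are hypothetical values chosen only so the arithmetic lands near the 32 percent figure; they are not the survey's actual counts:

```python
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3.

    I: complete interviews      P: partial interviews
    R: refusals/break-offs      NC: non-contacts     O: other eligible
    UH: unknown if household    UO: unknown, other
    e: estimated proportion of unknown-eligibility cases that are eligible
    """
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# Hypothetical dispositions (illustrative only):
rate = aapor_rr3(I=1143, P=50, R=700, NC=800, O=79, UH=1800, UO=200, e=0.4)
print(f"{rate:.0%}")  # 32%

# A less optimistic eligibility assumption lowers the rate, which is
# the sensitivity to assumptions the text notes:
lower = aapor_rr3(I=1143, P=50, R=700, NC=800, O=79, UH=1800, UO=200, e=0.7)
```

Because e is an estimate, different but defensible assumptions shift the denominator and hence the reported rate, as the text cautions.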
There were also cases when a respondent from the cell phone sample asked to be called back on his or her landline. These respondents, if they completed the survey, were counted as completed interviews from the cell phone sample. There were no respondent selection criteria for the cell phone sample; each number dialed from the cell phone sample was assumed to be a cell phone number, and each cell phone was assumed to have only one possible respondent to contact. The results of this survey reflect wireless phone users’ experience with their current or most recent wireless phone service from the beginning of 2008 through the time they were surveyed. Not all questions were asked of all respondents. For example, questions about the prevalence of billing problems were asked only of respondents who indicated they were solely or jointly responsible for paying for their service. Additionally, satisfaction with wireless coverage for particular locations (i.e., at home, at work, and in a vehicle) was calculated only among respondents who indicated they used their wireless phone service in those locations. The survey and a more complete tabulation of the results can be viewed by accessing GAO-10-35SP. To identify the type and nature of problems consumers have experienced in recent years with their wireless phone service, we interviewed officials from FCC, consumer organizations, national organizations that represent state agency officials, and state agency officials from three selected states—California, Nebraska, and West Virginia—representing utility commissions, offices of consumer advocates, and offices of attorneys general (see table 4). We selected these states based on their varying geography, populations, regions, and approaches to overseeing wireless phone service, as indicated in part by information obtained from national organizations representing state agency officials.
We also interviewed officials from the four major wireless carriers, two selected smaller carriers that serve mostly rural areas, and wireless industry associations. In addition, we reviewed documents obtained from some of these sources. We also analyzed FCC’s wireless complaint data on complaints received from 2004 through 2008. We reviewed FCC’s processes for generating these data and checked the data for errors and inconsistencies. We determined that the data were sufficiently reliable for the purposes of this review. We also obtained the total number of wireless complaints received in 2008 by the 21 state utility commissions that record and track wireless phone service consumer complaints. While we did not assess the reliability of the state complaint data, we are providing the numbers of complaints states reported receiving for illustrative purposes. To identify major actions the industry has taken in recent years to address consumers’ concerns, we interviewed the industry organizations named above and reviewed related documentation (see table 4). We also requested service quality information from the four major carriers, including measures of network performance and the number and types of customer complaints. Carriers told us that this information is proprietary and sensitive, and as we did not obtain comparable information from all four carriers, we were not able to present any aggregate information based on these data. Additionally, we interviewed consumer, state, and federal stakeholders about the effectiveness of industry efforts to address consumers’ concerns (see table 4). To evaluate how FCC oversees wireless phone service, including the agency’s efforts to process complaints, monitor sources of information to inform policy decisions, and create and enforce rules, we interviewed FCC officials about these activities and reviewed related documentation obtained from these officials. 
We also reviewed relevant laws, regulations, and procedures, as well as FCC’s quarterly complaint reports, strategic plan, and budget with performance goals and measures. In addition, we reviewed the requirements of the Government Performance and Results Act of 1993 and our prior recommendations on performance goals and measures and determined whether FCC’s efforts to measure the performance of its complaint-processing activities are consistent with these requirements and recommendations. We also interviewed consumer, state, and industry stakeholders about their views on FCC’s efforts to provide oversight (see table 4). We focused our review on FCC’s oversight of wireless phone service issues that have been major areas of concern for consumers in recent years, specifically targeting consumer protection efforts and actions related to how wireless carriers interact with and serve their customers. We did not assess how FCC oversees a number of other facets of the wireless industry, including competition, spectrum allocation, licensing, construction, technical issues such as interference, public safety, and the agency’s obligations under the Telephone Consumer Protection Act and the Controlling the Assault of Non-Solicited Pornography and Marketing Act. To describe state utility commissions’ efforts to oversee wireless phone service, we surveyed commissions in all 50 states and the District of Columbia. We conducted this survey from March 3, 2009, through April 1, 2009. We received responses from all 51 commissions, which we obtained through a Web-based survey we administered and subsequent follow-up with some states. The survey and a more complete tabulation of the results can be viewed by accessing GAO-10-35SP. To obtain illustrative information about these issues, we interviewed state officials in public utility commissions, consumer advocate offices, and offices of attorneys general in three selected states (California, Nebraska, and West Virginia).
Although we met with the offices of the state attorneys general in the three selected states and a national organization representing state attorneys general, we did not attempt to assess the full breadth of involvement of state attorneys general in addressing wireless phone service consumer concerns. Overall, the number of informal consumer complaints FCC has received about the service provided by wireless phone carriers has decreased since 2004 (see table 5). FCC received 20,753 complaints about the service provided by wireless phone carriers in 2008, the second-lowest total since 2004. From our analysis of FCC data on complaints about the service provided by wireless phone carriers from 2004 through 2008, we identified specific problem areas that complaints cited within the major complaint categories.

Billing and rates: Within this category, specific issues consumers complained about included problems obtaining credits, refunds, or adjustments to their bills; charges for minutes talking on a wireless phone; recurring charges on their bills; rates; and unauthorized or misleading charges. Of the nearly 55,000 billing complaints FCC received during this period, 28,000 concerned obtaining credits, refunds, or billing adjustments. FCC also received almost 9,000 billing complaints about charges for minutes talking on a wireless phone. Additionally, there were more than 5,500 complaints about recurring charges on consumers’ bills and more than 5,500 complaints about the rates consumers received from their wireless service providers. Finally, our analysis of FCC’s data identified more than 2,100 wireless complaints concerning unauthorized, misleading, or deceptive charges (known as “cramming”).
Call quality: Within this category, the majority of consumers complained about three issue areas: the quality of wireless phone service in their local service area, the premature termination of calls (i.e., “dropped calls”), and the inability to use their wireless phone because of service interruption by wireless phone service providers. Specifically, of the more than 14,000 call quality complaints FCC received during this period, more than 7,300 were about the quality of wireless phone service in the local service area. FCC also received more than 3,200 complaints about dropped calls and more than 2,000 complaints about interruption of service by wireless service providers.

Contract early termination: This category includes termination of wireless phone service by the consumer or by the carrier. Nearly 12,000, or just under 90 percent, of all terms-of-service contract complaints FCC received were about termination by consumers prior to the end of a specified contract term, which would result in an early termination fee.

Customer service: Customer service complaints were the fourth-largest category of complaints; however, FCC did not report customer service complaints as a top category of complaints in its quarterly reports from 2004 through 2008. In comparison, FCC identified carrier marketing and advertising as a top category of complaints in each year from 2004 through 2008, even though there were more customer service complaints in 2005, 2006, and 2007. An FCC official told us the agency did not include customer service complaints in the quarterly reports because they fell within the “other” category, which FCC does not report.
FCC also indicated that the large decrease in the number of customer service complaints from more than 3,500 in 2007 to fewer than 500 in 2008 was due in part to the agency’s redesign of its complaint forms, which allows for more accurate coding of complaints under specific topics rather than placing them in the “service treatment” category FCC uses to track customer service issues. The wireless phone service industry has taken some actions to address the types of consumer concerns we identified. Specifically, in 2003, the industry adopted a voluntary code, and since then, carriers have taken other measures. Table 6 outlines how elements of the industry code and examples of subsequent major actions we identified among the four largest carriers correspond to the key areas of consumer concern we identified. Federal law provides that while a state may not regulate a wireless carrier’s rates or entry, it may regulate the other terms and conditions of wireless phone service. Section 332(c)(3)(A) of title 47 of the U.S. Code does not define what constitutes rate and entry regulation or what comprises other terms and conditions of wireless phone service. This has left it up to FCC and courts to further define which specific aspects of service fall within the scope of these respective terms. Recently, two areas have garnered much attention at FCC and in the courts—the ability of states to regulate billing line items and the imposition of early termination fees. However, clarity has not yet been achieved. One area of disagreement is whether billing line items, such as surcharges and taxes that appear on consumers’ wireless bills, should be considered a rate or a term and condition of service. In 2005, under its truth-in-billing proceeding, FCC held that state regulations requiring or prohibiting the use of line items for wireless carriers constituted rate regulation and therefore were preempted. 
In the same proceeding, FCC solicited comments on the proper boundaries of “other terms and conditions” within the statute and asked commenters to delineate what they believe should be the relative roles of FCC and the states in defining carriers’ proper billing practices. The National Association of State Utility Consumer Advocates challenged FCC’s preemption finding in court, and the United States Court of Appeals for the Eleventh Circuit (Eleventh Circuit) found that FCC had exceeded its authority. Specifically, the court found that the presentation of a line item on a bill is not a “charge or payment” for service, but rather falls within the definition of “other terms and conditions” that states may regulate. Subsequent to the Eleventh Circuit’s ruling, the Western District Court of Washington rejected the Eleventh Circuit’s analysis and concluded that FCC did not exceed its statutory authority when it preempted line-item regulation and that line items are charges. However, the United States Court of Appeals for the Ninth Circuit (Ninth Circuit) reversed the district court, finding that the Eleventh Circuit decision is binding outside of the Eleventh Circuit. Furthermore, the Ninth Circuit stated that it agreed with the Eleventh Circuit’s determination that how line items are displayed or presented on wireless consumers’ bills does not fall within the definition of “rates.” FCC has not responded to these court decisions, nor has FCC concluded its truth-in-billing proceeding. While FCC has received comments on its 2005 truth-in-billing proposal, it has taken no further action in this proceeding. Accordingly, the issue of how states may regulate billing line items remains unclear. In August 2009, as part of its effort to seek comment on a number of telecommunications consumer issues, FCC sought comment on the effectiveness of its truth-in-billing rules and whether changes in these rules are needed. 
Early termination fees are another area where the distinction between “rates” and “terms and conditions” is not clear. Wireless carriers routinely offer customers discounts on cell phones in exchange for the customer’s commitment to a 1- or 2-year contract. If the contract is canceled before the end of the contract term, the customer is generally charged a fee, commonly referred to as an early termination fee. The Western District Court of Washington, in recently considering an early termination fees case, noted that it is not clear whether a wireless service carrier’s early termination fees are within the preemptive scope of “rates charged” under the statute. The court noted that federal courts that have considered the matter appear to be split on the issue, citing the examples of a district court that found early termination fees to fall under “terms and conditions” and another district court that found them to be “rates charged.” Because of the ongoing FCC efforts in this area, the Western District Court of Washington halted its proceeding pending a determination from FCC about this issue. In 2005, FCC was drawn into this debate at the request of a South Carolina court. In February 2005, SunCom, a wireless carrier, at the request of the court, filed a petition with FCC on whether early termination fees are rates charged. In May 2005, FCC released a public notice seeking comments on this matter. Subsequently, the parties to the litigation entered into a settlement agreement and jointly requested that FCC dismiss the matter without further review. FCC issued an order terminating the proceeding; however, the agency noted that it had a similar petition under review that it intended to address “in the near future.” The similar petition was filed by CTIA–The Wireless Association in March 2005, asking for an “expedited” ruling on whether early termination fees are rates. 
FCC sought comments on the matter from interested parties, who have submitted over 37,000 filings in this proceeding. In view of the growing concern over early termination fees and the number of complaints that FCC receives from consumers on this issue, FCC held a hearing in June 2008. At this hearing, expert panelists testified on the use of early termination fees by communications service providers. A year after the hearing, CTIA–The Wireless Association notified FCC that it was withdrawing its petition, citing the evolution of the competitive wireless marketplace as a reason for its withdrawal. However, the National Association of State Utility Consumer Advocates, the National Consumer Law Center, U.S. Public Interest Research Group, and Consumers Union filed a joint response in opposition to the petition’s withdrawal, arguing that a ruling from FCC would help clarify this issue and help resolve some pending lawsuits about it. FCC has not responded to CTIA–The Wireless Association’s notice or the consumer advocates’ joint response. Thus, this is another area that remains unresolved. In addition to the individual named above, Judy Guilliams-Tapia, Assistant Director; Eli Albagli; James Ashley; Scott Behen; Nancy Boardman; Bess Eisenstadt; Andrew Huddleston; Eric Hudson; Mitchell Karpman; Josh Ormond; George Quinn; Ophelia Robinson; Kelly Rubin; Andrew Stavisky; and Mindi Weisenbloom made key contributions to this report.

Americans increasingly rely on wireless phones, with 35 percent of households now primarily or solely using them. Under federal law, the Federal Communications Commission (FCC) is responsible for fostering a competitive wireless marketplace while ensuring that consumers are protected from harm. States also have authority to oversee some aspects of service. As requested, this report discusses consumers' satisfaction and problems with wireless phone service and FCC's and state utility commissions' efforts to oversee this service.
To conduct this work, Government Accountability Office (GAO) surveyed 1,143 adult wireless phone users from a nationally representative, randomly selected sample; surveyed all state utility commissions; and interviewed and analyzed documents obtained from FCC and stakeholders representing consumers, state agencies and officials, and the industry. Based on a GAO survey of adult wireless phone users, an estimated 84 percent of users are very or somewhat satisfied with their wireless phone service. Stakeholders GAO interviewed cited billing, terms of the service contract, carriers' explanation of their service at the point of sale, call quality, and customer service as key aspects of wireless phone service with which consumers have experienced problems in recent years. The survey results indicate that most users are very or somewhat satisfied with each of these key aspects of service, but that the percentages of those very or somewhat dissatisfied with these aspects range from about 9 to 14 percent. GAO's survey results and analysis of FCC complaint data also indicate that some wireless phone service consumers have experienced problems with billing, certain contract terms, and customer service. While the percentages of dissatisfied users appear small, given the widespread use of wireless phones, these percentages represent millions of consumers. FCC receives tens of thousands of wireless consumer complaints each year and forwards them to carriers for response, but has conducted little other oversight of services provided by wireless phone service carriers because the agency has focused on promoting competition. However, GAO's survey results suggest that most wireless consumers with problems would not complain to FCC and many do not know where they could complain. FCC also lacks goals and measures that clearly identify the intended outcomes of its complaint processing efforts. Consequently, FCC cannot demonstrate the effectiveness of its efforts to process complaints. 
Additionally, without knowing that they can complain to FCC or what outcome to expect if they do, consumers with problems may be confused about where to get help and about what kind of help is available. FCC monitors wireless consumer complaints, but such efforts are limited. Lacking in-depth analysis of its consumer complaints, FCC may not be aware of emerging trends in consumer problems, whether specific rules are being violated, or whether additional rules are needed to protect consumers. FCC has rules regarding billing, but has conducted no enforcement of these rules as they apply to wireless carriers. This August, FCC sought public comment about ways to better protect and inform wireless consumers. In response to GAO's survey, most state commissions reported receiving and processing wireless phone service consumer complaints; however, fewer than half reported having rules that apply to wireless phone service. Stakeholders said that states' authority to regulate wireless service under federal law is unclear, leading, in some cases, to costly legal proceedings and reluctance in some states to provide oversight. FCC has provided some guidance on this issue but has not fully resolved disagreement over states' authority to regulate billing line items and fees charged for terminating service early. State commissions surveyed indicated that communication with FCC about wireless phone service oversight is infrequent. As a result, FCC is missing opportunities to partner with state agencies in providing effective oversight and to share information on wireless phone service consumer concerns.
The Kennedy Center opened in 1971 and is located on 17 acres along the Potomac River in Washington, D.C. The center houses numerous theater, exhibition, and rehearsal spaces; public halls; educational facilities; offices; and meeting rooms in about 1.1 million square feet of space. The plaza level, which includes the three main theaters, the Grand Foyer, the Hall of States, and the Hall of Nations, is the primary focus for patrons and tourists. Access to other areas, such as the roof terrace level, is provided through the Grand Foyer, Hall of States, and Hall of Nations. Figure 1 provides a diagram of the Kennedy Center's plaza level. The National Cultural Center Act of 1958 established the National Cultural Center as a bureau within the Smithsonian Institution and created a board responsible for constructing and administering the nation’s performing arts center. The John F. Kennedy Center Act of 1964 renamed the National Cultural Center as the John F. Kennedy Center for the Performing Arts. The Kennedy Center is also a nonprofit organization with the authority to solicit and accept gifts. In 1972, Congress authorized the National Park Service to provide maintenance, security, and other services necessary to maintain the building, while making the Kennedy Center Board of Trustees responsible for performing arts activities at the Kennedy Center. Under this arrangement, the Kennedy Center facility incurred a backlog of capital repairs, in part because responsibility for identifying and completing capital repairs and improvements at the center was unclear. Legislation was enacted in 1990 that directed the National Park Service and the Board of Trustees to enter into a cooperative agreement clarifying their responsibilities for the maintenance, repair, and alteration of the center, but the parties were unable to reach an agreement. In 1994, legislation was enacted that gave the Board of Trustees sole responsibility for carrying out capital improvements at the Kennedy Center.
One purpose of the 1994 legislation was to provide autonomy for the overall management of the Kennedy Center, including better control over its capital projects, and to renovate the center. Under the Kennedy Center Act, the Kennedy Center Board of Trustees currently consists of 59 trustees: 23 are ex officio trustees, appointed by virtue of the office or position they hold, including congressional members, and 36 are general trustees appointed by the President of the United States. Each presidentially appointed trustee serves a term of 6 years. As the center’s chief decision-making body, the Kennedy Center Board of Trustees is responsible for maintaining the Kennedy Center as a living memorial to President John F. Kennedy and executing other functions required of the board under the act. The Kennedy Center Act requires the Board of Trustees to develop and annually update a CBP; plan, design and construct each capital project at the center; and prepare a budget. The board’s policies and procedures manual, which includes the board’s bylaws, states that board responsibilities include approving the annual CBP updates, reviewing management’s performance in implementing capital projects, and reviewing and approving the center’s annual capital project budget. The Kennedy Center Board of Trustees has a number of standing committees, including Executive, Audit, Finance, and Operations committees to assist with the board’s work. The Operations Committee is responsible for overseeing the general operations of the center, as well as all capital projects. Figure 2 shows the organization of the Kennedy Center, which includes the Board of Trustees and the center’s management structure as it applies to capital projects. 
As part of its responsibility under the 1994 legislation, the center published its first CBP in 1995, describing the goals of a long-term renovation effort, including addressing fire safety and disabled access code deficiencies, replacing inefficient building systems, and improving visitor services. This original building plan anticipated that the proposed capital projects would be completed in two stages. Projects in the first stage—fiscal years 1995 through 1999—would address critical security and life safety measures and improve accessibility. Projects undertaken in the second stage—fiscal years 2000 through 2009—would eliminate the backlog of deferred capital repair projects. In 1995, the Kennedy Center anticipated undertaking critical fire safety projects by the end of fiscal year 1999. However, to minimize disruption to performances, the Kennedy Center changed its approach to making capital improvements. Rather than undertaking broad-scale projects that could disrupt the entire center, the Kennedy Center chose to renovate the center incrementally while keeping the rest of the center open and operating. The center receives annual federal appropriations for capital projects based on the CBP and also for the operation, maintenance, and security of the facility. The funds appropriated for capital projects remain available to the Kennedy Center until they are expended. To implement its CBP, the Kennedy Center has received about $216 million since fiscal year 1995. This includes $35.3 million transferred from the National Park Service and the Smithsonian Institution and about $180.5 million in appropriated funds. In fiscal year 2006, the Kennedy Center received $13 million in federal funds for capital improvement projects and $17.8 million for the operation, maintenance, and security of the facility. According to a Kennedy Center official, this amount represents about 18 percent of the Kennedy Center’s anticipated fiscal year 2006 total operating expenses.
The Kennedy Center generates the majority of its revenues from performances at the center, contributions, and investments. The center’s federal appropriations are not used for performance-related expenses. The law governing facility construction or alteration at the Kennedy Center requires that the center be in compliance with nationally recognized model building codes and other applicable nationally recognized fire safety codes to the maximum extent feasible. As is the case for federal agencies, the Kennedy Center is the authority that makes the final determination on whether the center is complying with the fire safety code. The Kennedy Center policy on building codes states that, where feasible, the center will comply with the International Building Code (2003), International Fire Code (2003), and selected provisions of the National Fire Protection Association Life Safety Code (NFPA 101) (2003). The John F. Kennedy Center Act Amendments of 1994 amended the Kennedy Center Act to designate the center as a federal entity for purposes of the Inspector General Act of 1978 (IG Act), as amended. The Kennedy Center Act states that only federally appropriated funds are subject to the requirements of a federal entity under the IG Act. The Kennedy Center Act authorizes the Smithsonian Institution OIG to audit and investigate activities of the Kennedy Center involving federal appropriated funds, on a reimbursable basis, if requested by the Board of Trustees. In July 2006, the Kennedy Center finalized an MOU with the Smithsonian OIG for audits of federal funds used for capital projects. Recently, we recommended that the Kennedy Center increase oversight of its management of federal funds, ensure the fire safety of the center, and better align its management of capital projects with best practices.
While the Kennedy Center has fully or partially implemented all 12 of our recommendations, more work is needed to fully implement some recommendations in the areas of fire safety and management of capital projects. In particular, the Kennedy Center has taken steps to (1) address fire code deficiencies at the Millennium Stages, such as providing marked exit routes for occupants; (2) ensure that doors in key areas provide adequate separation from fire; (3) develop as-built drawings of the center; and (4) provide timely and accurate information about capital projects to stakeholders (see fig. 3). In April 2005, we reported that the Kennedy Center had limited external reviews of how it maintains assurance regarding appropriate management of federal funds. Specifically, we found that the costs of four federally funded Kennedy Center capital projects exceeded the original budgeted costs and that a lack of comprehensive policies and procedures limited the Kennedy Center’s ability to adequately manage federal funds. In addition, in April 2005, we reported that the Kennedy Center had not reported annually to Congress and the Office of Management and Budget (OMB) on its audit and investigative activities as required by the IG Act. To increase oversight of the Kennedy Center’s management of federal funds, we recommended in April 2005 that the Kennedy Center work with an independent federal government oversight organization, such as the Smithsonian Institution OIG, for audits of the center’s use of federal funds. In response to our recommendation, the Kennedy Center hired a nongovernmental organization in August 2005 to develop a risk assessment and audit plan to assist the Kennedy Center Board of Trustees in its oversight of the center’s management of federal funds. 
Specifically, the risk assessment and audit plan were created for the Facilities Management Office (FMO) and Project Management Office (PMO) because the Kennedy Center receives federal funds to support the activities of both offices. FMO manages operations, maintenance, and contracting, and PMO is one of three offices that conduct capital projects for the center. The risk assessment was designed to provide a summary of the center’s potential risks for facilities and project management, including the specific risks the center faces in its use of federal funds. One such risk identified for project management is that capital projects could incur cost and time overruns if the project budget or schedule does not sufficiently allow for contingencies. The nongovernmental organization also prepared an audit plan, which was based on the center’s potential facilities and project management risks. This audit plan was designed to address the key risk issues identified in the risk assessment and provide a strategy for another organization to review each risk. For example, to address the potential for cost and schedule overruns, an audit of the center’s project management process was proposed. On May 16, 2006, the Kennedy Center awarded a contract to a nongovernmental organization to implement the audit plans for FMO activities. A Kennedy Center official told us that the nongovernmental organization began the FMO audit work several days after the contract was awarded. On July 25, 2006, the Kennedy Center finalized an MOU for the Smithsonian Institution OIG to conduct audits, on a reimbursable basis, of PMO activities. Specifically, the Smithsonian Institution OIG will conduct two audits on aspects of the center’s capital project management process. The Smithsonian OIG expects that these two audits will take about 1 year. In addition, the Smithsonian Institution OIG will submit proposals to the Kennedy Center Board of Trustees for subsequent audit coverage. 
As we reported in April 2005, ongoing oversight of the center’s use of federal funds is necessary to maintain assurance that they are managed appropriately. Therefore, to ensure ongoing oversight of the center’s use of federal funds, it is important that the board examine and pursue future audit proposals from the Smithsonian OIG. In April 2005, we reported that the Kennedy Center did not appear to meet some fire safety code requirements. Specifically, we identified problems with the performance-based approach the center used to overcome a deficiency in the number of emergency exits at the center, and we identified other code deficiencies in the center that were not covered by the performance-based approach. First, we found that the Kennedy Center had not fully implemented the conditions associated with its performance-based approach, which included installing sprinklers at the Millennium Stages and developing and implementing a program to manage the storage of scenery, props, and other combustible materials. In addition, the Board of Trustees had not accepted and adopted the terms of the performance-based approach as described in fire code. Since these steps had not been taken, we concluded that the performance-based approach was not yet valid for satisfying fire code. We also found that the fire-modeling study, on which the Kennedy Center’s performance-based approach was based, had not undergone a peer review. Peer review of modeling studies is a common industry practice outlined in fire code. In addition, we concluded that a peer review was particularly important for the Kennedy Center because the center lacked sufficient on-staff expertise to adequately interpret and evaluate the modeling study, and the Kennedy Center’s fire safety decisions were not subject to external review. Other fire code deficiencies remained to be addressed.
For example, we found that there were no fire-rated doors in some areas that contain key emergency systems, and the Millennium Stages did not have two different, marked exit routes for occupants or an integrated smoke control, sprinkler, and smoke detector system over the stage area, as required by fire code. As a result, we recommended that the Kennedy Center improve its compliance with applicable fire codes in a number of ways (see fig. 4).

The Kennedy Center has implemented our recommendation to obtain a peer review of its fire-modeling study. In April 2005, we recommended that the Kennedy Center seek a peer review of its fire-modeling study of the Grand Foyer, Hall of States, and Hall of Nations to determine if the study could be substituted for certain prescriptive fire code solutions. In July 2005, the Kennedy Center initiated two separate peer reviews, one with GSA and another with a nongovernmental fire protection consultant firm. Both peer reviews provided comments and expressed concerns about the assumptions used in the center's study. In response to the peer reviewers' comments and concerns, the Kennedy Center improved and updated its fire-modeling study, which was finalized in March 2006. The revised, peer-reviewed modeling study concludes that patrons can exit the Kennedy Center before it becomes untenable if (1) fire protection from the Lower Gift Shop is provided and (2) exit signs are installed in the Grand Foyer, Hall of States, and Hall of Nations. The Kennedy Center Board of Trustees is the authority responsible for determining if a performance-based design meets its objectives, as described in fire code. The Chairman of the Kennedy Center Board of Trustees stated that after the updated study is finalized, the board will determine if it meets design objectives and then will formally accept the study and adopt its terms.
Although the updated study was finalized in March 2006, the Board of Trustees has yet to formally accept the study and adopt its terms. The board's approval of the assumptions and conditions of the updated study is the final step in fully implementing our recommendation.

The Kennedy Center has implemented our recommendation to manage the storage of combustible materials. In April 2005, we recommended that the Kennedy Center meet the objectives of its performance-based study by developing and implementing a program to manage the storage of scenery, props, and other combustible materials. In April 2005, the Kennedy Center developed and implemented a policy to manage fuel load by limiting the storage of scenery, stage props, and other combustible materials. To implement this policy, a Kennedy Center official conducts compliance inspections using a fire and life-safety checklist prior to each new show.

The Kennedy Center has taken steps to implement our recommendation to address the code deficiencies at the Millennium Stages. In April 2005, we recommended that the Kennedy Center, in accordance with fire code, install an integrated smoke control, sprinkler, and smoke detector system over each Millennium Stage area and provide two different, marked exit routes for occupants at each Millennium Stage. The Kennedy Center believes the Millennium Stages have sufficient fire protection systems in place based on the results of its performance model. The revised, peer-reviewed modeling study concludes that smoke exhaust and sprinkler protection are not needed for the Millennium Stages provided the conditions of the revised modeling study are met. The Kennedy Center plans to adequately separate the Lower Gift Shop and the plaza-level public spaces as part of its life-safety improvements by spring 2007.
Second, a Kennedy Center official said that exit signage is temporarily installed to mark the interior exit path during Millennium Stage performances and that the center plans to begin installing exit signs on the external doors in the Grand Foyer by the end of September 2006 (see fig. 5). Once the two conditions of the revised modeling study have been met, the Kennedy Center will have fully implemented our recommendation.

The Kennedy Center has taken some steps to implement our recommendation to ensure that doors in key areas provide adequate separation from fire. In April 2005, we recommended that the Kennedy Center comply with fire safety code by ensuring that fire-rated doors are installed in key areas to provide adequate separation from fire. In March and May 2006, the Kennedy Center had a fire protection inspector assess the fire rating of the doors in the fire pump room, Fire Command Center, and Concert Hall exits. The fire protection inspector found that these doors needed some repairs in order to obtain the fire-rating label. In response, the Kennedy Center repaired the doors in the fire pump room and Fire Command Center, and the fire protection inspector was therefore able to certify that these doors provide adequate separation from fire. In addition, the Kennedy Center is making the necessary repairs to the doors at the Concert Hall exits to ensure that they provide adequate separation from fire. A Kennedy Center official stated that he plans to have the Concert Hall exit doors repaired, inspected, and labeled as fire-rated by the end of December 2006. Once the Concert Hall exit doors have been repaired, inspected, and labeled as fire-rated, the Kennedy Center will have fully implemented our recommendation.

In April 2005, we reported that although the Kennedy Center achieved its goal of renovating four key federally funded capital projects, costs exceeded budget estimates for each project.
Project cost growth resulted from modifications made during the renovation process, in part because the Kennedy Center lacked knowledge of the building's site conditions. Modifications led to overtime charges paid to meet tight construction schedules. In addition, the center may have paid more than necessary by negotiating contract modification values after work was completed. A lack of comprehensive policies and procedures limited the Kennedy Center's ability to adequately safeguard federal funds. In addition, the Kennedy Center did not always communicate timely or accurate information on project cost growth and schedule delays to its board or Congress. In April 2005, we made several recommendations to better align the Kennedy Center's capital project management with best practices (see fig. 6).

The Kennedy Center has implemented our recommendation to design and implement contract and project management policies and procedures in accordance with prescribed federal guidance. In January 2006, the Kennedy Center designed and implemented contract and project management policies and procedures to guide various activities related to the acquisition of goods and services for its capital improvements program. The contract and project policies and procedures were drawn from the FAR, which generally applies to federal contracting activities. We did not assess the effectiveness of these policies and procedures because they were recently implemented.

The Kennedy Center has implemented our recommendation to control cost growth and schedule changes in the Family Theater by setting more flexible construction schedules and improving its management of contract modifications. In February 2006, the Kennedy Center implemented a contract and project management policy that requires contract modification values to be negotiated before work is completed.
For this report, we performed a limited assessment of the center's implementation of this policy based on a review of some Family Theater contract modifications. We found that the Family Theater was completed on schedule and with limited cost growth. In particular, contractors did not proceed with additional work until it was approved by the contracting officer, and overtime was not paid to accelerate the schedule. The Kennedy Center's progress in setting more flexible schedules and improving its management of contract modifications on larger federally funded projects, such as the Eisenhower Theater, will better indicate whether the center can effectively control cost growth and schedule changes. The Kennedy Center estimates that the construction period for this project will be from spring 2007 through summer 2008.

The Kennedy Center has implemented our recommendation to design and implement financial policies and procedures to strengthen financial management controls in several specific areas (see fig. 7). In January 2006, the Kennedy Center designed and implemented financial policies and procedures for activities funded by federal appropriations. The financial policies and procedures were drawn from various laws and regulations, including the FAR. Our analysis found that the Kennedy Center has implemented our recommendations to ensure that complete, up-to-date costs are recognized and used to prepare financial reports and that payments to other federal agencies are consistent with the Economy Act agreement. In addition, the Kennedy Center implemented a procedure to ensure that receipt information is recorded and compared with field inspection reports to verify the validity of invoices prior to payment.
The Kennedy Center also developed and effectively implemented new policies and procedures to ensure that (1) invoices contain sufficient detail to support their accuracy and validity and (2) invoices match with inspection reports and previously paid invoices to prevent duplicate payments. A more detailed discussion of our analysis of the Kennedy Center's implementation of its financial policies and procedures can be found in appendix II.

The Kennedy Center has implemented our recommendation to establish and enforce a documents retention policy that allows for accountability of the center's federal funds. In June 2006, the Kennedy Center established and enforced a documents retention policy and issued a procedures manual for federal and nonfederal documents based on guidance from several sources, including the Internal Revenue Service, NARA, and the Smithsonian Institution. In conjunction with this manual, the center developed and implemented a computerized system to assist in the storage, retrieval, and destruction of all records.

The Kennedy Center has taken some steps to implement our recommendation to better develop as-built drawings and better track future changes to the center. In January 2006, the Kennedy Center created a project management policy that requires as-built drawings of any new construction improvements to the building. As-built drawings of the new construction will allow the center to better track future changes to these areas. However, this policy does not require the center to integrate the individual new construction as-built drawings into one master set of centerwide drawings, nor does it require updating as-built drawings as additional changes to the center are made. A Kennedy Center official told us that the center agrees that as-built drawings of the entire center are needed to prevent costly unforeseen site conditions; however, assembling and updating a master set of as-built drawings is expensive and not a Kennedy Center priority.
Nevertheless, since incomplete knowledge of site conditions has contributed to cost overruns in the past, it remains important for the Kennedy Center to start assembling and consistently updating a comprehensive set of as-built drawings of the entire center.

The Kennedy Center has taken some steps to implement our recommendation to provide timely and accurate information about capital projects by detailing their budget, scope, and cost and providing to stakeholders an annual reconciliation of the status of all planned, delayed, eliminated, and actual projects. We found that the 2005 CBP is better than previous versions because it includes the details of, and explanations for, project budget changes since the 2004 CBP. The 2005 CBP also includes the actual and projected obligations for each capital project by fiscal year through 2008, the last year of the CBP. Figure 8 illustrates how the Kennedy Center's 2005 CBP conveys actual and projected obligations for the Eisenhower Theater for this period. These actions are responsive to our recommendation that the Kennedy Center provide more timely and accurate information to Congress and the Board of Trustees on the status of all planned and actual projects.

However, in some instances, we found that the 2005 CBP did not provide timely or accurate information about federally funded capital projects. First, the 2005 CBP does not include original budgets for several federally funded projects, which would be needed to compare actual costs with originally budgeted costs to identify project cost overruns. Without this information, the Kennedy Center Board of Trustees and Congress lack accurate information to monitor and evaluate whether federally funded capital projects have been implemented effectively and efficiently. Second, the 2005 CBP remains unnecessarily difficult to understand. Specifically, it describes its capital renovation efforts in two different sections of the report.
The first section provides an assessment of the different parts of the center and makes recommendations for improvement, and the second section lays out specific CBP capital projects and budgets. However, there is no crosswalk between the recommendations and the federally funded capital projects, making it difficult to identify how, or if, each project addresses specific facility issues. For example, although the 2005 CBP establishes a project numbering system, it does not use the numbering system in the other sections of the report to link specific projects to the issues discussed, making it difficult to understand how each project addresses these issues.

In addition, the center has not provided accurate or timely information to the Kennedy Center Board of Trustees or OMB about the cost of federally funded capital projects. The Kennedy Center sends monthly reports to OMB that provide detailed information, project by project, on budgets and schedules. However, we identified eight capital project budgets in the December 2005 OMB reports that do not match the capital project budgets in the most recent CBP, which was finalized in December 2005. For example, the most recent CBP shows that the total projected obligations for the Eisenhower Theater are about $15.8 million, whereas the OMB report lists the project budget at about $16.8 million. While one of these two budget figures may be accurate, it is impossible for stakeholders to know which is accurate because the publication date for both is the same, December 2005.

The Kennedy Center's 2005 CBP indicates that the center will need additional budget resources to complete the federally funded projects remaining in its CBP and that the terrace-level renovations will be deferred until after the CBP ends in 2008. However, the 2005 CBP fails to calculate the sum of individual project cost changes, making it difficult to determine the overall impact of these changes.
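The roll-up that the CBP omits is simple arithmetic. As a hypothetical illustration (not part of the CBP itself), the five project increases reported in this section can be summed and compared with the net change in the remaining CBP budget; the dollar figures below are taken from this report:

```python
# Hypothetical roll-up of project budget increases reported in this section
# (figures in millions of dollars, drawn from the 2004-to-2005 CBP comparison).
increases = {
    "Site Improvements": 4.2,
    "Toilet Room Renovation": 2.3,
    "Level A Back-of-House Renovations": 2.0,
    "Curtain Wall/Door Replacement": 1.5,
    "Hazardous Materials Abatement": 0.9,
}

# Gross increase across the five projects with large growth.
gross_increase = sum(increases.values())  # 10.9

# The report states the remaining CBP grew from about $48 million to $58 million;
# the gap between the gross and net figures implies offsetting decreases elsewhere.
net_increase = 58 - 48
pct_growth = net_increase / 48 * 100  # about 21 percent

print(round(gross_increase, 1), net_increase, round(pct_growth))
```

This kind of one-line reconciliation is what would let readers of the CBP see at a glance that the roughly $10 million net increase reflects about $10.9 million of gross increases partly offset by reductions in other projects.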
We found that the total budget for the 22 planned, ongoing, or recently completed projects in the 2005 CBP has increased by $10 million, or 21 percent, since 2004, bringing the total cost of the remaining CBP projects to $58 million. The 2005 CBP indicates that the Kennedy Center completed the Family Theater renovation on schedule in 2005 with limited cost growth and plans to begin renovating the Eisenhower Theater in 2007. However, the 2005 CBP also indicates that the Kennedy Center deferred most terrace-level renovations that had originally been planned for the CBP, including renovations to the Terrace Theater, Theater Lab, States and Nations Galleries, and Atrium. In addition, the Kennedy Center has acknowledged, since the 2005 CBP was issued, that more budget increases and project deferrals may be necessary before the CBP is scheduled to end in 2008.

In accordance with our April 2005 recommendation, the 2005 CBP now reconciles the budget changes from the 2004 CBP to the 2005 CBP for the 22 planned, ongoing, or recently completed projects in the CBP, allowing readers to more easily track budget changes for individual projects. However, these new reconciliations fail to calculate the sum of individual project changes, making it difficult to determine their overall impact on the CBP budget through 2008. Our analysis shows that the cost of the remaining CBP has increased about $10 million, or 21 percent, since 2004, bringing the total cost of the remainder of the CBP from about $48 million to $58 million for fiscal years 2006 through 2008. Although the budgets for a number of projects have changed since the 2004 CBP, our analysis shows that the net increase of about $10 million was generally attributable to large increases in the following five projects:

The Site Improvements project budget increased by $4.2 million, or 71 percent, from the 2004 CBP.
The 2005 CBP indicated that the Kennedy Center needed additional funds to address unforeseen site conditions, construction problems, and outstanding contractor claims. Although not detailed in the 2005 CBP, outstanding contractor claims on the Site Improvements project may cost millions of dollars in federal funds to settle, according to a Kennedy Center official. The project includes improvements to the service tunnel, plaza, west fascia and safety railing, garage, and the streets surrounding the Kennedy Center.

The Toilet Room Renovation project budget increased by $2.3 million, or 94 percent, from the 2004 CBP. The 2005 CBP indicated that the Kennedy Center had expected the project to proceed several years earlier than currently scheduled, that planned project costs have since escalated, and that further increases are anticipated. This project will upgrade the toilet rooms throughout the center, including their finishes, equipment, and flooring.

The Level A Back-of-House Renovations project budget increased by $2 million, or 97 percent, from the 2004 CBP. According to the 2005 CBP, the budget increased because work was rescheduled to coincide with related renovation efforts, such as theater renovations, and because of changed market conditions. This project's goal is to renovate offices, training rooms, locker rooms, backstage areas, dressing rooms, wardrobe areas, and other miscellaneous nonpublic spaces.

The Curtain Wall/Door Replacement project budget increased by $1.5 million, or 28 percent, from the 2004 CBP. The 2005 CBP indicated that the budget grew because of project deferrals, market conditions, and extremely high inflation in recent construction costs. This project will replace the acoustical glazing on the curtain (non-weight-bearing) walls for the west plaza level, hall entrances, and roof terrace level.

The Hazardous Materials Abatement project budget increased by about $900,000, or 72 percent, from the 2004 CBP.
The 2005 CBP indicated that the budget increase was due to the asbestos abatement associated with the Eisenhower Theater project. Originally, the Kennedy Center planned to leave the asbestos undisturbed and unabated, but later decided to remove it. The 2005 CBP indicates that the scope of the project increased when planning for the Eisenhower Theater revealed a greater need for abatement than was previously anticipated.

According to the 2005 CBP, the Kennedy Center still plans to complete renovations to the major performance and public spaces located on the plaza level within the CBP time frame. The plaza level includes interpretive displays about John F. Kennedy and the Kennedy Center and consists primarily of four theaters—the Opera House, Concert Hall, Eisenhower Theater, and Family Theater (formerly the AFI Theater)—and of three main public spaces—the Grand Foyer, Hall of States, and Hall of Nations. We reported in 2005 that the Kennedy Center had completed renovations to the Concert Hall, Opera House, and plaza-level public spaces but that the projects all experienced cost growth because of management and construction problems.

The 2005 CBP indicates that the conversion of the AFI Theater into the Family Theater was completed in 2005, within established time frames and with limited cost growth (see fig. 9). The cost of the project was about $9.1 million—an amount that included cost growth from change orders within the amount allocated for contingency. With seating for 320 people, the Family Theater renovation project was smaller in scale than other theaters on the plaza level. Kennedy Center management officials said that the lower-grade finishes of the AFI Theater reduced the need to retain acoustic integrity and allowed for more detailed investigations during the project's design stage.
These investigations likely limited the number and severity of unexpected site conditions, which contributed to cost overruns for the Concert Hall and Opera House renovations. We also found that the Kennedy Center was more careful in the way it handled contract modifications during the construction of the Family Theater, which may have contributed further to limiting cost overruns on the project. In addition, Kennedy Center management officials said that the use of a different contracting approach, called construction manager at risk (CMAR), helped the Kennedy Center complete the project within budget and on schedule. Under a CMAR arrangement, a construction manager is hired to provide services during project design and then takes over construction as the general contractor. We do not believe that the use of CMAR had a significant impact on the final cost or timeliness of the Family Theater's construction. This is because CMAR contractors increase the price of their bid to compensate for the additional risk they take on as part of the contract. In addition, we found that the Kennedy Center's use of CMAR did not comply with the FAR. Specifically, the Kennedy Center did not obtain a required deviation from the FAR, and it authorized contractor work to begin on the Family Theater before establishing the guaranteed maximum price of the project. These actions undermined the Kennedy Center's claim of compliance with the FAR. In addition, the center's negotiation of prices after work had begun placed the government at increased risk of cost overruns.

The last major project on the plaza level is the renovation of the Eisenhower Theater, which will address life safety concerns, upgrade finishes, and make the theater accessible to the disabled. With seating for a total of 1,100 people in three tiers, the Eisenhower Theater is larger than the former AFI Theater.
The 2004 CBP indicated that the Eisenhower Theater renovation work would begin in fiscal year 2007, assuming adequate funding, a schedule that was reiterated in the 2005 CBP. The project's budget was stable through the 2004 and 2005 CBPs at about $15.8 million. However, since December 2005, the project's cost has increased $900,000, or 6 percent, because of what the Kennedy Center describes as escalating construction costs, among other things. The 2005 CBP also indicated that the Kennedy Center would need to calculate another cost estimate once the schematic design is completed. Continued cost growth may hamper the Kennedy Center's ability to complete the Eisenhower Theater renovation within the CBP's budget and time frame. The President of the Kennedy Center told the Board of Trustees' Operations Committee in September 2005 that the Eisenhower Theater renovation was then in jeopardy because of funding concerns.

Although the Kennedy Center's original goal for the CBP was to completely renovate the Kennedy Center and meet all life safety and accessibility requirements by the end of 2008, we concluded in 2004 that it was unlikely that the Kennedy Center would be able to meet that goal because of increasing project costs and lengthening time lines. Nevertheless, the Kennedy Center indicated in 2004 that it still intended to complete the vast majority of the projects in the CBP. However, the 2005 CBP shows that the center now plans to defer most terrace-level renovations beyond the end of the CBP. The terrace level of the Kennedy Center sits above the plaza level and comprises the Terrace Theater, the Theater Lab, States and Nations Galleries, Atrium, and two restaurants (see fig. 10). According to the 2005 CBP, sprinklers were extended into the Terrace Theater in 2005, and the remaining life safety deficiencies will be addressed as part of the Roof Terrace Life Safety Project.
The 2005 CBP does not describe the Roof Terrace Life Safety Project but allocates $4.5 million for the project and schedules the bulk of the work for fiscal year 2007. This budget and schedule, however, are likely to change, possibly slipping until after 2008. The budget for the Roof Terrace Life Safety Project was estimated in 2002, before the scope was set or any detailed planning or design work had been conducted. The 2005 CBP further notes that the Roof Terrace Life Safety Project may be deferred to ensure that the Eisenhower Theater renovation can continue on schedule. Given the recent cost growth in that project, deferral of the Roof Terrace Life Safety Project seems increasingly likely. The scope of the Roof Terrace Life Safety Project has still not been set, making it difficult to evaluate the feasibility or adequacy of the project. The 2005 CBP indicates that the project will extend sprinklers to the States and Nations Galleries, protect the Terrace Theater exit stairway from fire, and consider other projects unrelated to life safety during planning.

Apart from the Roof Terrace Life Safety Project, the 2005 CBP indicated that the Kennedy Center is deferring or placing a low priority on other terrace-level projects. Specifically, the Kennedy Center is deferring the following projects:

Terrace Theater renovation. Originally scheduled for fiscal year 2005, the major portions of the Terrace Theater renovation have been deferred beyond the time frame of the CBP, including associated renovations to the auditorium, lobby, hallways, backstage areas, and some technical systems. With seating for 513 people, the Terrace Theater is the largest performance space on the terrace level. It was opened in 1978 and is in need of renovation because it does not currently offer accessibility for the disabled throughout the auditorium and suffers from other deficiencies in acoustics and finishes.

Theater Lab renovation.
Originally scheduled for fiscal year 2001, renovations of the Theater Lab—a 398-seat theater—have mostly been deferred, including previous plans to address deficiencies related to disabled access, acoustics, and support spaces.

Terrace-level public spaces. Originally scheduled for fiscal year 2000, repairs of known architectural and finish deficiencies throughout the terrace-level public spaces will not be done as part of the CBP. Affected spaces include the States and Nations Galleries and the Atrium. According to the 2005 CBP, the terrace-level public spaces suffer from a number of problems—including deteriorated floor tiles, poor accessibility to restrooms, inadequate circulation patterns, and muddled acoustics—that will only be considered as funding allows.

Since the 2005 CBP was issued, the Kennedy Center has indicated that additional deferrals will be necessary. The President's fiscal year 2007 budget provides $4.9 million less than was projected in the 2005 CBP. If this amount becomes the final budget, the Kennedy Center will further defer or reduce several projects whose costs have grown, including the Curtain Wall/Door Replacement, Toilet Room Renovation, and Level A Back-of-House renovation. In addition, the Eisenhower Theater renovation's cost growth may necessitate additional deferrals to other projects. Because the CBP will not be completed in 2008 as planned, the Kennedy Center has hired a consulting firm to survey the Kennedy Center and recommend center upgrades that have not yet been completed, with a goal of extending the CBP's implementation period to 2012. A Kennedy Center official told us that projects not completed by 2008 as planned will be included in this new building survey.

Under the Kennedy Center Act, the Board of Trustees is responsible for developing and annually updating the CBP; planning, designing, and constructing capital projects; and preparing a budget.
Consistent with the practices of other governing boards, the Kennedy Center Board of Trustees has delegated these responsibilities to management. Although a board can delegate responsibilities to management, it remains responsible for overseeing management's work. We found that the Kennedy Center Board of Trustees provides limited oversight of its federally funded capital projects. Specifically, we found no evidence that the board, as required by its policies and procedures manual, approves the annual updates to the CBP, reviews management's performance in implementing capital projects, or approves the annual capital project budget. Furthermore, the Kennedy Center Board of Trustees has provided limited oversight to ensure that its appropriated funds are used efficiently, effectively, and in compliance with applicable laws.

Three factors limit the board's oversight of federally funded capital projects. First, it lacks procedures on how to carry out its responsibilities for federal funds. Second, attendance at board and Operations Committee meetings has been low, and the Operations Committee has met infrequently and at irregular intervals. Third, the board does not receive information needed to evaluate whether federally funded capital projects have been implemented efficiently.

According to the Kennedy Center Act, it is the board's responsibility to develop and annually update the CBP, which serves as the center's long-term capital planning document. The board has delegated the annual update of the CBP to the center's PMO and a consulting firm. Although such delegation is not uncommon for a governing board, we found that the Kennedy Center board provides limited oversight of its federally funded capital projects in a number of ways.
Our analysis of board meeting minutes and information packets from January 2000 through January 2006, and interviews with Kennedy Center officials, revealed no evidence that the board reviews or approves CBP projects and budgets, as its policies and procedures manual makes it responsible for doing. A center official told us that not every trustee receives the CBP; only trustees on the Operations Committee receive the document. The Chairman of the Board told us that because of its large size, the board rarely discusses policy issues at board meetings. Instead, the chairman stated, full board meetings are used as a forum for announcements about upcoming programs and events. Without an opportunity to review the CBP, the board cannot ensure that federally funded capital projects planned for construction at the center are in accordance with the requirements of the Kennedy Center Act and that the expenditure of federal funds is reasonable.

Both the Kennedy Center Act and the board's policies and procedures manual assign budgeting responsibilities to the board for appropriations. Under the act, the board is responsible for preparing a budget in accordance with specified federal statutes, and under the manual, the board is responsible for approving the Kennedy Center's budget, which includes private and federal funds. The President of the Kennedy Center stated that management verbally presents to the board, for approval, the proposed federal appropriation request, which is the federal portion of the center's budget. In addition, the President of the Kennedy Center told us that the Operations Committee also approves the proposed federal appropriation request for the center's capital projects and operations and maintenance.
However, in our analysis of board and Operations Committee meeting minutes from January 1998 through September 2005, we found no evidence that the board or its Operations Committee had approved the center’s proposed federal appropriation request. For example, in the September 2005 Operations Committee meeting minutes, a Kennedy Center official reported to the trustees of the Operations Committee that the center’s fiscal year 2007 request for appropriations had already been submitted to OMB. In addition, the Chairman of the Board of Trustees and the Chairman of the Operations Committee told us that they were not certain whether the board approved the federal portion of the center’s budget. Although this report does not include a review of the center’s private funds, our analysis of board meeting minutes found that the Kennedy Center Board of Trustees does approve the center’s budget for trust funds. In contrast, we found that the Smithsonian Institution Board of Regents—which also oversees federal appropriations for capital projects and trust funds, oversees several arts organizations, and has its board membership defined by legislation—does review and approve the federal portion of its budget. Our analysis of Smithsonian Institution Board of Regents meeting minutes found that this board first reviews the Smithsonian Institution’s budget request for appropriated funds. Next, the Board of Regents votes to approve the budget request before it is presented to OMB. Finally, the resolution adopted by the Board of Regents states that any changes to this federal request for appropriated funds can be made only with the approval of the Board of Regents or the Executive Committee.
Since the Smithsonian Institution Board of Regents has been authorized to plan and construct numerous facilities with federal funds, the board’s review and approval of the Smithsonian Institution’s federal budget request is important to ensure that the request is consistent with its responsibilities under the law. According to the Kennedy Center board’s policies and procedures manual, it is the board’s responsibility to “formulate the organization’s policies and review management’s performance in achieving them,” as well as to “assist the Chairman in selecting, monitoring, appraising, advising, stimulating, and rewarding the President.” However, in our analysis of board and Operations Committee meeting minutes, as well as in interviews with members of management and board members, we found no evidence that the board formally evaluates management’s performance or monitors the president’s implementation of federally funded capital projects. In addition, the Operations Committee, which receives most of the information on capital projects, does not receive the information it needs to evaluate federally funded capital projects. For example, the Operations Committee does not receive key indicators, such as the original versus the actual budget and schedule, to determine whether all federally funded capital projects are implemented on time and within budget. The President of the Kennedy Center told us that he annually writes his self-assessment and presents it to the Chairman of the Board and to the Personnel Committee. However, the President of the Kennedy Center and the Chairman of the Board told us that there are no formal evaluation criteria. In particular, there is no standard or goal against which to measure the president’s performance in implementing federally funded capital projects.
In lieu of formal criteria, the Chairman of the Board told us that he uses his intuition to assess the president’s overall performance, including the president’s implementation of capital projects. In contrast, a Smithsonian Institution official said that the Smithsonian Institution Board of Regents evaluates the secretary’s performance annually. According to this official, the secretary, with the Board of Regents’ approval, sets goals each year that include the secretary’s ability to complete capital projects on time and within budget. At the end of the year, the Board of Regents rates the secretary’s performance, comparing outcomes with the secretary’s established goals. In addition, several experts on nonprofit boards noted that a formal, periodic, and comprehensive evaluation of a nonprofit organization’s chief executive is needed to ensure that the organization’s goals are reached. Furthermore, a routine evaluation of a chief executive’s work allows the board to see if its decisions are being properly executed by management. The board and chief executive need to agree on the purpose and process of a formal performance evaluation, including the primary criteria to be used for review, such as the chief executive’s annual goals and objectives for the organization. Like other organizations that receive federal funds, the Kennedy Center Board of Trustees must ensure that appropriated funds are used as productively as possible, achieve intended goals and objectives, and are spent in compliance with applicable laws. However, several factors limit the Kennedy Center Board of Trustees’ oversight for federally funded capital projects. The Kennedy Center Act generally describes the board’s responsibilities for capital projects, such as its duty to maintain the functionality of the center at current standards of life, safety, security, and accessibility. 
Although the act is not specific about how the board is to carry out its responsibility for federally funded capital projects, it authorizes the board to create bylaws, rules, or regulations, as it deems necessary, to administer its responsibilities under the Kennedy Center Act. When the Kennedy Center Act was amended in 1994 to give the board sole responsibility for capital projects, the board used this authority to create the Operations Committee—a committee of the board—to help it carry out this responsibility. The board’s policies and procedures manual provides information on the board’s responsibilities, the center’s organizational structure, and performance activities. In addition, the board has created bylaws that describe the general duties of board members, officers of the board, and a certain number of committees of the board. However, neither the manual nor the bylaws describe how the board or its Operations Committee is to administer its responsibility under the Kennedy Center Act for federally funded capital projects. This lack of procedures hinders the board and its Operations Committee in assessing whether federal funds for capital projects have been spent efficiently, effectively, and legally. In addition, a board expert stated that committees need clear direction, in the form of carefully written policies, to perform well and to avoid confusion and conflict over their responsibilities and the amount of authority delegated to them. The board’s lack of procedures for carrying out its Kennedy Center Act responsibilities has led Kennedy Center trustees and a management official to different interpretations of how the board accomplishes its responsibilities for federally funded capital projects and of the Operations Committee’s overall responsibility.
For example, the board has created and relies on the Operations Committee to assist in the oversight of federal funds spent for capital projects. Although a Kennedy Center management official, a previous trustee, and a current trustee stated that the Operations Committee had some jurisdiction over capital projects, they did not agree on the committee’s responsibilities and how it accomplishes its responsibilities. The previous trustee stated that the responsibility of the Operations Committee is to keep Members of Congress that are trustees abreast of how capital projects are progressing. The current trustee stated that the Operations Committee’s responsibility is strictly one of oversight for capital projects and that any policies relating to capital projects are made by the board’s Executive Committee. The Kennedy Center management official stated that the Operations Committee’s responsibility is to make policies relating to capital projects and oversee the implementation of these policies. Furthermore, we reported in 1998 that the Operations Committee provided policy guidance, resolved the most serious issues requiring board input, and functioned as the eyes and ears of center operations. Since most of a board’s responsibilities are carried out at board and committee meetings, it is important for a board and its committees to hold meetings regularly and for board members to attend these meetings. However, we found that attendance at board and Operations Committee meetings has been low and that the Operations Committee has met infrequently and at irregular intervals. Low attendance rates and infrequent committee meetings limit the board’s ability to monitor and review management’s implementation of federally funded capital projects. Despite congressional and board efforts to develop more active trustees and increase attendance rates, trustee attendance rates at regularly scheduled board meetings have been low. 
In 1994, when Congress amended the Kennedy Center Act and gave the board sole responsibility for capital projects, it also reduced the term length of trustees from 10 to 6 years. Congress believed that the shorter term would result in the selection of trustees who would be more active members of the board. Despite this change, the percentage of trustees attending each board meeting from 1995 through 2005 has ranged between 29 and 58 percent. The Executive Committee first tried to improve attendance in 1997 by reducing the annual number of board meetings from four to three. Then in 2000, a private consulting firm hired by the Kennedy Center found that the board’s governance could be strengthened by creating mechanisms to ensure more balanced involvement from all trustees. The consulting firm recommended that the center make meeting attendance a requirement for retaining board membership. In response, the board instituted an attendance policy asking trustees to attend a minimum of three full board or committee meetings annually to retain their trustee status. Despite these efforts, however, attendance rates at board meetings have never exceeded 58 percent. Trustees told us that the volunteer nature of board membership and the geographic location of members’ residences have led to poor attendance rates. One trustee said that many trustees see their appointments as “honorific” and that their main responsibilities are to make donations to and raise funds for the center. This trustee further stated that the majority of decisions are made by the president and not by the board. In analyzing attendance rates for meetings of the Smithsonian Institution Board of Regents, we found that from January 2000 through September 2005, the median attendance rate at board meetings was 69 percent and ranged from about 47 percent to 94 percent (i.e., one-half of these meetings had attendance rates above 69 percent).
In contrast, from April 2000 through September 2005, the median attendance rate at regularly scheduled Kennedy Center Board of Trustees meetings was 49 percent and ranged from about 37 percent to 58 percent. In addition, we found that some Kennedy Center trustees send designees to represent them at board and committee meetings. Board experts with whom we spoke expressed different opinions about this practice. For example, some experts stated that sending a designee to a board or committee meeting is not conducive to board governance and is contrary to volunteerism. However, another expert told us that there could be situations in which the use of a designee would be appropriate, provided the designee’s responsibility and authority are clarified in the board’s bylaws. We found that it is unclear what responsibility and authority designees have for carrying out the board’s responsibilities under the Kennedy Center Act; currently, neither the board’s policies and procedures manual nor its bylaws address designees’ responsibility or authority. We found that the Lincoln Center for the Performing Arts—which is also a performing arts organization with a governing board of comparable size—has defined the responsibility and authority of its Board of Directors and of designees in its bylaws. For example, the bylaws state that any Lincoln Center board member entitled to a vote at a meeting may appoint any other person to act as that member’s proxy. In addition, the bylaws provide that each designee’s authority is revocable at the pleasure of the member who appointed the designee and that a designee can serve no longer than 11 months from the date of appointment unless the member states otherwise. The Operations Committee has also met infrequently, and attendance at its meetings has been low.
In 1994, when the board was given responsibility for the center’s capital projects, the board created the Operations Committee to ensure the appropriate use of federal funds spent for capital projects at the Kennedy Center. However, the Operations Committee’s ability to do so has been hindered by infrequent meetings and low trustee attendance rates. From 1995 to 1998, the Operations Committee met three times a year. However, from 1998 through 2005, the Operations Committee met inconsistently and infrequently, even though several federally funded capital projects were in progress at the Kennedy Center (see fig. 11). For example, during the most recent period without a committee meeting, which lasted about 12 months, center management obligated about $21 million for 13 federally funded capital projects. The Operations Committee Chairman told us that the committee currently meets twice a year and that this is sufficient to oversee capital projects. In addition, trustee attendance rates at Operations Committee meetings have been low. From January 1995 through April 2006, there were 18 Operations Committee meetings, and attendance records were available for 13 of them. Of these 13 meetings, 10 had attendance rates of 50 percent or less (see fig. 12). A former Operations Committee Chairman stated that, because the Operations Committee was composed of ex officio congressional members, it was difficult to schedule a time when members could be present. In general, to measure whether a capital project has been successfully implemented, a board or committee needs information on (1) the actual cost of the project versus the budgeted cost, (2) the actual schedule of the project versus the original schedule, and (3) whether the project provided the benefits intended. Providing this information to the board pressures the project team to meet the established cost, schedule, and performance goals for the project.
Although the Operations Committee receives some of this information on federally funded capital projects, we found that it lacks key information needed to ensure that the project team is implementing capital projects within cost and schedule goals. The Operations Committee is to meet twice a year, and its trustees receive an information packet before each meeting. In reviewing meeting packets for January 2000 through September 2005, we found that these packets generally included information on ongoing federally funded capital projects, such as the amount of previous, actual, and projected obligations for each capital project, by fiscal year, and a description for each capital project. This information is useful to understand current and future obligations for each project. However, the meeting packets did not include the baseline cost and schedule estimates that would indicate if an ongoing project is within the budget or on schedule. Without baseline cost and schedule estimates, the Operations Committee and subsequently the board cannot identify project cost growth or schedule changes. For example, our analysis found that during a January 2003 Operations Committee meeting, a Kennedy Center official stated that the site improvement project would not require more than $40 million in federal funds. However, in April 2005, an Operations Committee information packet indicated that the center had obligated approximately $49.8 million for the site improvement project. The most recent CBP states that the total anticipated federal portion of the project’s cost will be about $54.7 million, or about $15 million more in federal funds than center management officials told the Operations Committee in 2003. 
Although the Operations Committee had received information anticipating that additional funds would be needed for the site improvement project, it did not simultaneously receive information on the project’s original budgeted cost, which would have indicated the degree of cost growth on the project. Without information on original budgeted costs, the committee cannot hold management accountable for the successful implementation of capital projects paid for with federal funds. The President of the Kennedy Center stated that the Operations Committee does receive information on capital projects that enables it to compare actual costs with budgeted costs and schedules. However, our analysis of Operations Committee meeting packets from January 1998 through September 2005 found no indication that Operations Committee members received information for comparing actual costs with original budgeted costs. Additionally, although the center’s most recent Operations Committee packet, dated April 2006, contains the original budgeted costs for 6 capital projects, this information is missing for the remaining 17 capital projects listed. Thus, although this packet provides improved budget information to stakeholders, it does not allow trustees to monitor the implementation of all federally funded capital projects. In addition, when we spoke with the Operations Committee Chairman about the board’s use of budgeted versus actual information, he stated that he leaves it up to management to ensure that costs stay within established budgets. As we have reported previously, a capital project’s successful implementation is determined primarily by whether the project was completed on schedule, was completed within budget, and provided the benefits intended. Without this information, the Operations Committee is unable to assist the board in its oversight of the federal funds spent for capital projects.
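The original-versus-actual comparison discussed above can be sketched in a few lines. This is an illustrative example only, not a Kennedy Center system: the helper function is hypothetical, and the dollar figures are taken from the site improvement project example in the text ($40 million stated in 2003 versus the roughly $54.7 million now anticipated).

```python
# Hypothetical sketch of the budget-versus-actual indicator the Operations
# Committee packets lacked. Figures come from the site improvement project
# example in the text; the function itself is an illustration, not a real tool.

def cost_growth(original_budget, current_estimate):
    """Return the dollar and percentage growth of a project's cost (in millions)."""
    growth = current_estimate - original_budget
    return growth, growth / original_budget * 100

# $40 million originally stated versus about $54.7 million anticipated.
dollars, percent = cost_growth(40.0, 54.7)
print(f"cost growth: ${dollars:.1f} million ({percent:.1f}%)")
```

With both numbers in hand, a reviewer can see not just that more money was obligated but by how much the project outgrew its original budget, which is the accountability signal the report says was missing.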
For example, in April 2005, we reported that since 2003, each of the three federally funded capital projects that we reviewed had experienced cost overruns, one as great as 50 percent (see fig. 13). However, we did not find any evidence that the board or its Operations Committee was informed of these cost overruns, such as the Opera House’s $4 million cost increase. For example, two trustees who served on the board during the implementation of these projects told us that they did not know of any capital projects that had cost overruns. One of these trustees said that the Opera House renovation was on budget. In addition to the information packets, the Operations Committee receives the center’s annual CBP. As previously mentioned, the center’s 2005 CBP is better than previous versions because it includes the details of, and explanations for, project budget changes since the 2004 CBP. The 2005 CBP also includes the actual and projected obligations for each capital project, by fiscal year, through the end of the CBP in 2008. However, the 2005 CBP does not provide original project budgets for all capital projects. Therefore, there is no way to quantify how well each project’s implementation matched the center’s original comprehensive plan. In addition, we found that trustees of the Operations Committee did not receive the most recent CBP in a timely manner. For example, the Operations Committee received the most recent CBP in January 2006, about 4 months after the center’s fiscal year 2007 budget was submitted to OMB. Without an opportunity to review the CBP before the budget is submitted to OMB, the committee cannot ensure that federally funded capital projects planned for construction at the center are authorized by the Kennedy Center Act. We have made numerous recommendations to the Kennedy Center within the past 9 years to improve its use and oversight of the federal funds that it receives, and the Kennedy Center has made significant improvements.
Specifically, the Kennedy Center has made considerable progress over the past year in implementing our recommendations to improve fire safety and project management and to better align its activities with capital project best practices. Nevertheless, some of our recommendations have not been fully implemented, and it is critical that the Kennedy Center fully implement them and ensure that these changes become permanent. Changes to the Kennedy Center’s contracting practices and CBP provide good illustrations of the progress that the Kennedy Center has made and the work that remains. Since 1993, when we first began reporting on its capital improvement plan, the Kennedy Center has made a number of important improvements to its contracting management practices and to the CBP. For example, in 2005, the Kennedy Center added project-by-project reconciliations to the CBP, as recommended, in order to illustrate changes in project budgets and schedules over time. However, the Kennedy Center placed federal funds at risk by not fully complying with the FAR, and the CBP does not yet fully disclose the overall financial impact of project changes. During a time when the Kennedy Center is deferring many of its terrace-level renovations, the cost of implementing the CBP is growing because of steep increases in the costs of some of the remaining projects. While many of the key facts are included in the 2005 CBP, putting the whole picture together requires gathering and analyzing information from previous and current versions of the CBP to ascertain how budget changes to individual projects affect the overall CBP budget through 2008. Much of our Kennedy Center work has found insufficient oversight of federally funded capital projects.
Although the board has delegated much of the day-to-day work of running the center to the center management, the board retains ultimate responsibility for safeguarding funds and holding the center management accountable for its actions. Yet the board is providing limited oversight of its federal funds spent on capital projects; it does not approve CBP updates, does not review management’s performance in implementing capital projects in a structured way, and does not meet regularly. It may be telling that the board provides more oversight of its nonappropriated funds, including programming revenue and investment income. More detailed, transparent, and timely information on how federal funds have been budgeted and spent would allow the board to hold center managers accountable for completing federally funded capital projects on time and within budget estimates. 1. To improve compliance with the FAR, the Chairman of the Board of Trustees should direct the President of the Kennedy Center to properly obtain the required FAR deviation when using the construction manager at risk contracting method. In addition, the Kennedy Center should establish the guaranteed maximum government price for a capital project before proceeding with construction. 2. To improve the information the Kennedy Center provides to Congress and the Board of Trustees, the Chairman of the Board of Trustees should direct the President of the Kennedy Center to improve the Comprehensive Building Plan by taking the following two actions: Clearly identify the overall impact that changes to individual project budgets from the previous year will have on the overall plan’s budget. Clarify which federally funded projects the Kennedy Center intends to complete as part of the plan and which ones will be deferred. In doing so, establish clear scope and budget estimates for the Roof Terrace Life Safety project for the 2006 update of the Comprehensive Building Plan. 3. 
To strengthen the Kennedy Center Board of Trustees’ role in overseeing federally funded capital projects and to improve the board’s ability to carry out its responsibilities under the Kennedy Center Act, we recommend that the Chairman and Trustees of the Board take the following two actions: Develop and implement procedures on how the board and its Operations Committee are to carry out their duties under the Kennedy Center Act and their responsibilities for overseeing federal funds, including a clarification of the roles and responsibilities of the Operations Committee; Ensure that the board receives detailed, transparent, and timely information on how federal funds for capital projects have been budgeted and spent, such as information on original versus actual project budgets and schedules. We provided a draft of this report to the Kennedy Center for its review and comment. The Kennedy Center provided written comments, which appear in appendix III, together with our responses. In general, the Kennedy Center agreed with the draft report’s findings and with two of the report’s three recommendations. The Kennedy Center agreed to (1) improve the CBP in several areas and (2) review and revise, if necessary, procedures on how the Operations Committee is to carry out its responsibilities and to provide the CBP to the Operations Committee in a more timely fashion. The Kennedy Center disagreed with our recommendation to better comply with a provision of the FAR and establish the guaranteed maximum government price for a capital project before proceeding with construction. However, based on our discussions with a GSA official, we are retaining the recommendation. The CMAR contracting method is not covered by the FAR and, consequently, requires a deviation.
We also believe that the Kennedy Center could have limited the government’s risk of cost overruns by establishing the guaranteed maximum price for the project before authorizing the contractor to begin. The Kennedy Center provided technical comments and clarifications, which we have incorporated as appropriate throughout this report. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to interested congressional committees, the Chairman of the Kennedy Center Board of Trustees, and the President of the Kennedy Center. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix IV for a list of the major contributors to this report. This report responds to your request that we conduct a study on The John F. Kennedy Center for the Performing Arts’ (Kennedy Center) management and oversight of federal funds spent for Kennedy Center projects. Our objectives were to determine (1) the progress the Kennedy Center has made in implementing the recommendations in our April 2005 report, (2) the status of capital projects and the planned spending of federal funds for capital projects as indicated by the Kennedy Center’s most recent comprehensive building plan, and (3) the Kennedy Center Board of Trustees’ responsibilities for federally funded capital projects and the extent to which the board fulfills these responsibilities. 
To determine the progress the Kennedy Center has made in implementing the recommendations in our April 2005 report, we interviewed Kennedy Center management officials and reviewed Kennedy Center documents (see fig. 14 for a list of our April 2005 recommendations). Specifically, to assess the steps taken to implement the first of these recommendations—that the Chairman of the Kennedy Center Board of Trustees increase oversight of its management of federal funds by working with an independent federal government oversight organization—we analyzed the Kennedy Center’s risk assessment and internal audit plan for the ongoing oversight of the center’s use of federal funds. In addition, we interviewed Kennedy Center management officials to determine how the Kennedy Center intends to implement its internal audit plan. We also reviewed the John F. Kennedy Center Act, as amended, and the Inspector General Act of 1978, as amended. To assess the steps taken by the Chairman of the Kennedy Center Board of Trustees to implement our April 2005 recommendations on fire safety, we analyzed two peer reviews, conducted by the General Services Administration (GSA) and a nonfederal entity, of the fire-modeling study that the Kennedy Center used as a substitute for prescriptive code solutions. Additionally, we interviewed Kennedy Center management and GSA officials to determine the actions taken to implement recommendations from the two peer reviews. To further assess the steps taken to ensure fire safety, we reviewed the Kennedy Center’s policy and procedure for managing the storage of combustible materials; reviewed the Kennedy Center’s inventory of doors in key areas that needed to be fire rated; and toured the Kennedy Center to visually examine the exit signage installed at the Millennium Stages during a performance.
We also interviewed a Kennedy Center management official to determine how the Kennedy Center implements its combustibles policy and procedure; the time frame for inspecting and installing fire-rated doors in key areas; and the installation of exit signs in the Grand Foyer at the Millennium Stages. To assess the steps taken by the Chairman of the Kennedy Center Board of Trustees to implement our April 2005 recommendations on managing capital projects, we interviewed Kennedy Center management officials and analyzed Kennedy Center documents. Specifically, to determine how the Kennedy Center provides more timely and accurate information about capital projects to stakeholders, we reviewed the Kennedy Center’s 2004 and 2005 comprehensive building plans (CBP); fiscal year 2007 budget justification to Congress; information packets and minutes from Board of Trustees and Operations Committee meetings (for 2004 through 2006); and the monthly reports the Kennedy Center sends to the Office of Management and Budget (OMB) that provide information on capital projects. To evaluate the steps taken by the Kennedy Center to control cost growth and schedule changes in future capital projects, we conducted a limited assessment of Family Theater contract modifications. The contract modifications we reviewed each had cost changes over $15,000 and in total represented about 58 percent of all cost changes for this project. We examined these contract modifications to evaluate whether the contractors’ proposals were fair and reasonable; the Kennedy Center had established the scope and cost for the modifications before directing the contractor to proceed with the work; and the Kennedy Center paid overtime to accelerate the project’s schedule. To obtain information on the construction manager at risk (CMAR) method of delivery, we reviewed the Federal Acquisition Regulation (FAR), GSA policy, and industry standards. We contacted GSA officials and an industry official. 
To assess the steps taken by the Kennedy Center to strengthen financial management controls, we analyzed the Kennedy Center’s contract, financial, and project management policies and procedures as they relate to the recommendations in our April 2005 report. In addition, we discussed with Kennedy Center management officials the time frame for implementation and the federal guidance used for the development of its contract, financial, and project management policies and procedures. While we did not assess the implementation of the Kennedy Center’s contract and project management policies and procedures because they were recently implemented, we were able to assess the adequacy of the Kennedy Center’s financial policies and procedures that relate to our specific financial recommendations. We reviewed the financial policies and procedures and spoke with management officials to verify that the policies and procedures contained guidance to address our recommendations. Specifically, we reviewed receipt information contained on recent invoice certification forms, and we also reviewed invoices paid from the Project Management Office contractor files where we had noted exceptions in our April 2005 report. To determine the status of the Kennedy Center’s document retention policy, we interviewed Kennedy Center management officials to discuss the steps taken to establish and enforce a policy. In addition, we spoke with National Archives and Records Administration (NARA) officials about the requirements for a federal records management policy, and we reviewed legislation and regulations relating to NARA. To determine the status of the Kennedy Center’s steps to better track future changes to the center, we spoke with Kennedy Center management officials and reviewed the center’s project management policy that addresses as-built plans. 
To determine the status of capital projects and the planned spending of federal funds for capital projects as indicated by the Kennedy Center’s most recent CBP, we reviewed the Kennedy Center’s initial 1995 CBP and its 2004 and 2005 updates to the plan. Specifically, we examined the changes in the 2005 CBP made since the 2004 CBP and developed a list of projects the Kennedy Center plans to delay or defer and the additional funds needed. To determine the accuracy of some of the data in the 2005 CBP, we reviewed the monthly reports the Kennedy Center sends to OMB; the Kennedy Center’s fiscal year 2007 budget justification to Congress; and the President’s fiscal year 2007 budget for the Kennedy Center. In addition, we spoke with Kennedy Center management officials to obtain justifications for the projects it intends to delay or defer. To determine the Kennedy Center Board of Trustees’ responsibilities for federally funded capital projects and the extent to which the board fulfills these responsibilities, we analyzed Board of Trustees and Operations Committee documents; appropriation laws; the John F. Kennedy Center Act, as amended; the CBP and its various updates; and the monthly reports the Kennedy Center sends to OMB. Specifically, to examine the extent to which the board fulfills its responsibilities for federally funded capital projects, we reviewed Board of Trustees and Committee information packets and meeting minutes from January 1995 through April 2006 to determine how a variety of capital projects were overseen by the board. These projects included, but were not limited to, the Opera House renovation, fire alarm system replacement, public space modifications, site improvements project, and Family Theater. In addition, we interviewed current and previous trustees from the board and its Operations and Executive Committees.
We spoke with two previous and four current trustees about the board’s responsibilities for overseeing federal funds, including the types of information used to make decisions on capital projects. In addition, we spoke with congressional staffers who are designees for two Kennedy Center Trustees and the President of the Kennedy Center. We selected trustees for interviews by first constructing a list of trustees who served on the board from 1995 to 2006. We chose 1995 because it was the year that Congress transferred responsibility for capital projects from the National Park Service to the Kennedy Center Board of Trustees. From this list, we selected previous and current trustees of the board and its Operations and Executive Committees who were either chairpersons or had not missed more than one board meeting per year of their tenure. In addition, we interviewed Kennedy Center executives to understand management’s role in the supervision of capital project costs and schedules and management’s responsibilities to the Board of Trustees. We calculated Board of Trustees and Operations Committee attendance rates and meeting frequencies using the information packets, which included meeting minutes, sent to trustees before scheduled board and committee meetings from January 1995 through April 2006. For the Board of Trustees meetings, we calculated attendance rates by comparing the number of trustees present at each regularly scheduled meeting with the total number of trustees designated under the Kennedy Center Act. For Operations Committee meetings, we calculated attendance rates by comparing the number of trustees present at each meeting with the information packet distribution list and the board’s policies and procedures manual. We calculated Operations Committee meeting attendance rates for the 13 of 18 meetings for which we had distribution lists.
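The attendance-rate arithmetic described above, dividing the trustees present at each meeting by the fixed number of trustee positions, can be illustrated as follows. The total of 36 positions and the per-meeting head counts are hypothetical placeholders, not figures from the board’s records.

```python
# Hypothetical illustration of the attendance-rate calculation: trustees
# present at each regularly scheduled meeting divided by the total number
# of trustee positions designated under the Kennedy Center Act.
TOTAL_POSITIONS = 36  # hypothetical; the actual number is set by statute

def attendance_rate(present_counts, total=TOTAL_POSITIONS):
    """Average share of trustee positions represented across a set of meetings."""
    return sum(present / total for present in present_counts) / len(present_counts)

meetings_present = [20, 18, 24, 15]  # hypothetical head counts at four meetings
print(f"Average attendance rate: {attendance_rate(meetings_present):.0%}")
# Average attendance rate: 53%
```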
To ensure that the Kennedy Center provided us with all of the Board of Trustees and Operations Committee information packets and meeting minutes from January 1995 through April 2006, we cross-referenced the materials we received against a list that was verified for a previous GAO report and against the Board of Trustees’ annual list of scheduled meetings. We are confident that we have accounted for all Board of Trustees and Operations Committee information packets and meeting minutes from 1995 through 2006 and therefore believe that the attendance rate and meeting frequency data are reliable for the purposes of this report. To calculate attendance rates for the Smithsonian Institution Board of Regents from January 2000 to September 2005, we compared the number of regents present and absent at each regularly scheduled board meeting with the total number of regents set forth in law relating to the Smithsonian Institution. In some instances, we found that the number of regents attending and absent from meetings did not match the number of regents set forth in the law. For example, there were times when a vacancy on the board occurred while legislation to appoint a regent was pending. In these instances, we had Smithsonian Institution officials verify the attendance figures for the affected meetings. Therefore, we believe that the attendance rate data are reliable for the purposes of this report. To obtain information on board governance practices, we interviewed academics, board organizations, and officials of other arts organizations. We reviewed relevant articles on board governance to select the academic and board organizations.
To obtain information on how other boards govern, including their responsibilities for overseeing capital projects, we interviewed officials and reviewed documents from other arts organizations, including the Lincoln Center for the Performing Arts, the Los Angeles Music Center, the National Gallery of Art, and the Smithsonian Institution. We selected these organizations because they all have some features in common with the Kennedy Center, including authorizing legislation, capital projects, board member composition, organizational mission, and federal funding. We conducted our work in Los Angeles, California; New York City, New York; and Washington, D.C., between October 2005 and August 2006 in accordance with generally accepted government auditing standards. In April 2005, we recommended that the Kennedy Center strengthen financial management controls by designing and implementing financial policies and procedures in accordance with prescribed federal guidance. Specifically, we recommended that the financial policies and procedures address several areas, as detailed in figure 15. In January 2006, the Kennedy Center designed and implemented financial policies and procedures for activities funded by federal appropriations. The financial policies and procedures were drawn from various laws and regulations, including the FAR. As shown in figure 2 and as discussed in the remainder of this appendix, the Kennedy Center has fully implemented our financial recommendations in several specific areas. As figure 15 indicates, we recommended that the Kennedy Center recognize and use complete up-to-date costs for construction and other services to prepare financial reports and manage project costs. In response to our recommendation, the Kennedy Center now submits monthly progress reports on its obligations for services and capital federal expenditures to OMB.
For each capital project, the reports contain the month’s anticipated, actual, and total up-to-date obligations made with federal funds. The center uses a cash basis to report costs in monthly financial reports on capital federal expenditures, which is acceptable to OMB. Therefore, this recommendation has been implemented. In our previous report, we recommended that for Economy Act transactions, payments to other federal agencies be for actual costs consistent with the Economy Act agreement. In response to our recommendation, the Kennedy Center established a policy that requires other federal agencies to clearly indicate that they are charging the center for actual costs incurred, under an Economy Act agreement. In addition, the policy includes an example of a letter that provides clear guidance to contracting staff on the language needed to ensure that an agency is charging the center for actual costs incurred. Therefore, this recommendation has been implemented. We recommended that financial policies and procedures ensure that receiving reports are prepared when goods or services are received to verify the validity of invoices. In response to our recommendation, the center’s procedures direct staff to enter receiving information on an invoice certification form—once the invoice is received—and to compare construction invoices to the architect’s field inspection reports. We examined recent invoice certification forms and found that they contained the project title and the period of performance. Each invoice certification form we reviewed also included a schedule of values detailing (1) the work being billed and (2) the percentage of work completed to date. The project title and the period of performance provide the information necessary to tie the construction services performed to the invoice and to Kennedy Center records. Therefore, this recommendation has been implemented. 
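The receiving-report control described above pairs each invoice with evidence that the billed work was actually received for the same project and period of performance. A minimal sketch of such a check, with hypothetical project names, periods, and percentages, might look like this:

```python
# Sketch of the receiving-report control: an invoice is supportable only if a
# receiving record exists for the same project and billing period and the
# percentage billed does not exceed the work confirmed complete. The project
# name, periods, and percentages below are hypothetical.
receiving_reports = {  # (project, period) -> percent of work confirmed complete
    ("Family Theater", "2005-03"): 40,
    ("Family Theater", "2005-04"): 55,
}

def invoice_supported(project, period, pct_billed_to_date, reports=receiving_reports):
    """True if receiving records confirm at least the percentage of work billed."""
    confirmed = reports.get((project, period))
    return confirmed is not None and pct_billed_to_date <= confirmed

print(invoice_supported("Family Theater", "2005-04", 50))  # True: within confirmed work
print(invoice_supported("Family Theater", "2005-05", 60))  # False: no receiving record
```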
We recommended that financial policies and procedures ensure that invoices contain sufficient detail to support their accuracy and validity. In response to our recommendation, the center’s procedures direct staff to contact the vendor if additional detail is needed to support information on goods or services billed. We reviewed recent invoices and found that they now contain sufficient detail to support the accuracy and validity of the amounts invoiced. Therefore, this recommendation has been implemented. Finally, we recommended that the financial policies and procedures ensure that invoices are matched against inspection reports and previously paid invoices prior to payment to prevent duplicate payments. In response to our recommendation, the center’s procedures direct staff to consider whether goods and services have been billed on a previously approved invoice. This procedure also directs staff to look for any charges outside the period of billing and provides further detail instructing staff on data sources to consider in determining if payment has already been made. Therefore, this recommendation has been implemented. The following are GAO’s comments on the John F. Kennedy Center for the Performing Arts’ (Kennedy Center) letter dated August 24, 2006. 1. We disagree with this interpretation. Construction manager at risk is not covered by the FAR; consequently, a deviation must be authorized and justified in the contract file. We also continue to believe that establishing a Guaranteed Maximum Price (GMP) prior to proceeding with work limits the government’s risk of cost overruns. During the audit, we spoke with an official in the GSA Office of the Chief Architect who advises GSA staff on construction issues. The GSA official said that a GMP should be established in order to effectively use the construction manager at risk project delivery method and that a deviation from the FAR is required.
Because the Kennedy Center stated in its comments that it consulted with GSA and was told that it did not need a deviation for the contract, we reconfirmed the situation with GSA and were advised again that a deviation is required because of the use of the GMP. 2. We updated the report to indicate that the Kennedy Center and the Smithsonian Office of the Inspector General finalized a memorandum of understanding, in July 2006, that establishes audits of federal funds used for capital projects. In addition, we incorporated into the report the Kennedy Center’s rationale for selecting a nongovernmental organization to audit the federal funds used for operations and maintenance activities. 3. The revised, peer-reviewed modeling study concludes that smoke exhaust and sprinkler protection are not needed on the Millennium Stages, provided that the conditions of the revised modeling study are met. Once the two conditions of the revised modeling study have been met, the Kennedy Center will have fully implemented our recommendation to install smoke exhaust and sprinkler protection at the Millennium Stages. 4. We disagree with the Kennedy Center’s approach to this recommendation, which is to assemble all existing as-built drawings into a single set. The Kennedy Center can accomplish our recommendation in a cost-efficient way by integrating the as-built drawings from each successive capital project into a master plan for the center and by updating the drawings as additional changes to the center are made. This would ensure that the Kennedy Center is tracking future changes to the center and using the most up-to-date drawings of site conditions. Our report notes that it is important for the Kennedy Center to start assembling and consistently updating a comprehensive set of as-built drawings of the entire center to prevent costly unexpected site conditions.
5. We agree that the difference between a project’s original budget and the final cost does not, by itself, necessarily indicate ineffective or inefficient management. However, we do believe that the Kennedy Center Board of Trustees and Congress need information on project cost overruns in order to monitor and evaluate whether federally funded capital projects have been implemented effectively and efficiently. 6. We believe our point is accurate and have further clarified the report to indicate that the cost of the Family Theater project was about $9.1 million, which includes cost growth due to change orders that was within the amount allocated for contingency. 7. The most recent Operations Committee meeting was held in April 2006, and as of that meeting, the scope of the Roof Terrace Life Safety Project had not been developed. In addition, neither our review of Kennedy Center documents nor discussions with Kennedy Center officials indicated that the scope of the Roof Terrace Life Safety Project was developed. Although the Kennedy Center states that the scope of the Roof Terrace Life Safety project was developed in April 2006, it did not provide the details of this scope in its agency comments. 8. The Kennedy Center has deferred many of the terrace-level projects that were planned for the CBP beyond the scheduled completion of the plan in 2008, which significantly reduced the scope of the CBP. 9. See comment 6. 10. We agree that an ongoing capital plan is essential for the maintenance of the Kennedy Center as a presidential memorial and national performing arts center. The report notes that the Kennedy Center has hired a consulting firm to survey the center and recommend upgrades that have not been completed. This survey will cover years 2008 through 2012. We added to the report that a Kennedy Center official told us that the new survey will include projects listed in the CBP that were not completed in 2008 as planned, as well as new projects.
Therefore, we consider the survey an extension of the original CBP. Although the extended CBP may include new projects to facilitate ongoing capital planning, it will include deferred projects that were originally scheduled to be completed in 2008. 11. We did not include designees’ attendance in our calculations of board and Operations Committee attendance rates for several reasons. First, membership on the Kennedy Center Board of Trustees is set forth in the Kennedy Center Act and does not include designees. Therefore, we based our calculations on the attendance records for those persons legally serving as trustees. In addition, since designees have no legal authority for making decisions, including those with respect to federally funded capital projects, we did not consider their participation in board and committee meetings. Lastly, in analyzing board meeting minutes, we found instances in which a trustee sent more than one designee to a meeting. In these cases, attendance rates would be inflated if designees were included in the attendance calculations. As noted in the report, we found that it is unclear what responsibility and authority designees have for carrying out the board’s responsibilities under the Kennedy Center Act, which limits the board’s oversight of federally funded capital projects. 12. We agree that a lack of attendance at board and committee meetings does not necessarily indicate that a trustee is not informed or engaged. However, during our audit we found, as noted in the report, that with respect to capital projects, trustees were not informed of project cost overruns, project budgets, or proposed projects. For example, as highlighted in the report, in April 2005, we reported that since 2003, each of the three federally funded capital projects that we reviewed had experienced cost overruns, one as great as 50 percent. 
However, we did not find any evidence that the board or its Operations Committee was informed of these cost overruns, such as the Opera House’s $4 million cost increase. For example, two trustees who served on the board during the implementation of these projects told us that they did not know of any capital projects that had cost overruns. One of these trustees said that the Opera House renovation was on budget. In addition to those named above, Michael Armes, Keith Cunningham, George Depaoli, Bess Eisenstadt, Craig Fischer, Brandon Haller, John Krump, Susan Michal-Smith, Josh Ormond, Julie T. Phillips, and Carrie Wilks made key contributions to this report.
In April 2005, GAO recommended that the John F. Kennedy Center for the Performing Arts (Kennedy Center) increase oversight of its management of federal funds, better comply with fire code, and conform to project management best practices. GAO was asked to evaluate (1) the progress the Kennedy Center has made in implementing GAO’s April 2005 recommendations, (2) the status of federally funded capital projects and the planned spending of federal funds for capital projects as indicated by the Kennedy Center’s most recent comprehensive building plan, and (3) the Kennedy Center Board of Trustees’ responsibilities for federally funded capital projects and the extent to which the board fulfills these responsibilities. To fulfill these objectives, GAO examined Kennedy Center documents, visited other arts organizations, and interviewed affected parties. The Kennedy Center has taken steps to implement GAO’s oversight, fire safety, and capital project recommendations but more work remains. For example, to increase oversight of its management of federal funds, the Kennedy Center contracted with the Smithsonian Institution Office of the Inspector General for audits of federal funds used for capital projects.
In addition, to better comply with fire safety code, the Kennedy Center has implemented GAO’s recommendations to obtain a peer review of its fire-modeling study and manage the storage of combustible materials. As a result of the peer review, the center made changes to its fire-modeling study. Finally, to better align with project management best practices, the Kennedy Center has implemented GAO’s recommendations to design and implement contract, financial, and project management policies and procedures and control cost and schedule changes in future projects. The Kennedy Center’s 2005 comprehensive building plan (CBP), its long-term renovation effort, shows that the center will not complete its capital renovations within the planned 2008 time frame and budgets. The estimated costs for the remaining CBP projects have increased from $48 million to $58 million since the 2004 CBP, and the center plans to defer most terrace-level renovations beyond 2008, the original completion date. The 2005 CBP shows that the Family Theater was completed on schedule in 2005 with limited cost growth. However, despite improved contracting practices, GAO found that the Kennedy Center did not fully comply with the Federal Acquisition Regulation (FAR) when it used an alternative contracting method. In addition, it increased the risk of cost overruns by authorizing Family Theater work to begin before establishing the contract’s guaranteed maximum price. The Kennedy Center Board of Trustees has delegated to management most of its responsibilities for federally funded capital projects, which is a typical board action. However, GAO found that several factors limit the board’s oversight of federally funded capital projects.
The Kennedy Center Board of Trustees and its Operations Committee (1) lack procedures on how to carry out the board’s responsibilities for federally funded projects, (2) have experienced low attendance at meetings, and (3) lack information needed to evaluate the implementation of capital projects. In addition, the Operations Committee has met infrequently, which further limits oversight.
In the post–Cold War era, the proliferation of chemical and biological weapon technologies in developing countries presents DOD with a national security challenge. The 1997, 2001, and 2006 Quadrennial Defense Reviews as well as other DOD publications have emphasized the need to address the increasing threat posed by the proliferation of weapons of mass destruction, including chemical and biological weapons. The 2006 Quadrennial Defense Review specifically states that DOD’s vision is to organize, train, equip, and resource the future force to deal with all aspects of the threat posed by weapons of mass destruction. It notes that DOD has doubled its investment in chemical and biological defenses since 2001, and is increasing funding for its Chemical Biological Defense Program across the Future Years Defense Program by $2.1 billion (approximately 20 percent). However, experiences during the Persian Gulf War and the preparations for Operation Iraqi Freedom exposed weaknesses in the preparedness of U.S. forces to defend against a chemical or biological attack. In addition, we and DOD’s Inspector General have published reports addressing continued problems in aspects of DOD’s chemical and biological defense preparedness. Finally, at present there remain disagreements within DOD regarding the nature and extent of the chemical and biological threat and the degree to which major weapon systems should be survivable against such threats and capable of operating in a contaminated environment (see app. II). This lack of agreement could adversely affect DOD’s ability to develop and carry out a coherent plan to defend against chemical and biological threats. Until 2003, DOD’s acquisition procedures (unless waived) required that weapon systems survivability be addressed in accordance with assessed threat levels, including chemical and biological, anticipated in the weapon system’s projected operating environment. 
These procedures defined survivability as the capability of a weapon system and crew to avoid or withstand a man-made hostile environment without suffering an abortive impairment of its ability to accomplish its designated mission. The Army, Navy, and Air Force issued supplemental acquisition policies that established service-specific procedures to address the chemical and biological contamination survivability of their weapon systems. In 2003, DOD replaced its acquisition procedures with a Defense Acquisition Guidebook, which, together with the controlling DOD directive and instruction, no longer specifically requires that weapon system survivability against chemical and biological threats be addressed during the system design and development phase. According to a DOD official, this action was part of a DOD effort to simplify its weapon system acquisition process. The only current DOD acquisition requirement specifically related to chemical and biological threats is that weapon system program offices address protection for crew members (as opposed to the weapon system itself) against the effects of a chemical or biological threat. As part of weapon system design and development efforts, DOD uses scientific and technical information from research and testing activities to better understand various chemical and biological agents and their impact on military operations, including the survivability of weapon systems. DTIC maintains a centralized database containing a broad range of scientific and technical information intended to maximize the return on investment in research, evaluation, and studies. In addition to its centralized database, DTIC uses the Chemical and Biological Information Analysis Center (CBIAC), a contractor-operated information analysis center, to maintain additional databases and provide information specific to chemical and biological issues. 
DOD indicated in its August 2005 interim report that it intends to build on the existing databases maintained by CBIAC and to develop a centralized database by the end of fiscal year 2007 that contains comprehensive information on the effects of chemical and biological agents and decontaminants on weapon systems. In executing its role as a coordinating point for DOD scientific and technical information databases and systems, DTIC makes information available throughout DOD. Figure 1 illustrates the intended flow of information among testing facilities, program offices, and DTIC. DOD and the military services do not consistently address weapon system chemical and biological survivability during the acquisition process. In the absence of clear DOD guidance and effective controls, responsibility for decisions regarding weapon system chemical and biological survivability has devolved largely to the individual military services and weapon system program offices. The program offices we visited do not consistently document their chemical and biological survivability decisions, nor is there an established, clear, and effective DOD-level process for the oversight of these decisions. Although emphasis is placed on chemical and biological threats in DOD's strategic guidance, DOD and military service policies do not establish a clear process for considering and testing weapon system chemical and biological survivability. While DOD acquisition policies require that survivability of personnel after exposure to chemical and biological agents be addressed by all weapon system programs, they do not specifically require the consideration of weapon system survivability. There also are no DOD policies regarding the quantity and type of weapon system survivability testing that should be conducted. 
In addition, joint staff policies do not address or provide specific instruction as to how chemical and biological survivability should be considered during the acquisition process, or how this consideration should be monitored, reviewed, and documented. Each of the existing service acquisition policies is therefore unique and differs in the extent and amount of detail it requires for considering weapon system chemical and biological survivability. DOD acquisition officials told us that each weapon system service sponsor has the ability to decide whether and to what extent to incorporate survivability testing. Of the military services, the Army has the most detailed policy for addressing this. However, while emphasizing the need to monitor and review chemical and biological survivability issues in general, Army policies allow service sponsors and program offices to individually decide how and to what extent to consider weapon system survivability during the acquisition process. The Air Force and Navy have less detailed policies and also leave decision making to the weapon system sponsor and program office. Navy officials told us that, in their opinion, having less rigid requirements was advantageous because it reduces system development time and costs. The extent to which services consider weapon system survivability during the acquisition process is further influenced by differences in how each service perceives the chemical and biological threat and plans to conduct operations in a contaminated environment. The Army focuses on tactical and theater chemical and biological threats against exposed ground combat personnel and equipment. In comparison, the Air Force concept of operations in a contaminated environment is mainly a strategy of avoidance and protection, while the Navy view is that a chemical or biological attack on surface ships is a less likely threat. 
DOD officials stated that, in the absence of DOD-wide policies and processes, the responsibility for determining the extent of chemical and biological survivability consideration or testing has fallen largely on the individual weapon system program offices, in consultation with each service sponsor. However, program offices also lack specific guidance and a clear process governing the extent to which chemical and biological survivability should be considered or tested. In our review of nine weapon system programs, we found that the program offices exercised broad discretion over whether or to what extent to evaluate the need for and benefit of conducting chemical and biological survivability testing. Although all nine of these program offices had conducted or were considering some kind of testing, we found that the extent and nature of this testing varied widely, even for similar types of systems. For example, the two sea-based weapon system program offices we reviewed considered chemical and biological testing differently, even though both systems are intended for similar operating environments. The program offices for the three land systems we reviewed also conducted very different tests from one another, although these systems also are intended for the same operating environment. Many factors affected the program offices’ determination of the extent to which to test a weapon system’s chemical and biological survivability, including the type of system (air, land, or sea), required system capabilities, system concept of operation, perceived chemical and biological threat, and other factors relating to the status of system cost, schedule, and performance. A more detailed discussion of the testing conducted for the nine weapon system programs we reviewed can be found in appendix II. The nine weapon system program offices we reviewed did not consistently document their decisions regarding how they considered or tested chemical and biological survivability.
Although they could provide documentation regarding what survivability testing was conducted, they did not have a consistent method to track what was or was not considered, because there is no DOD, joint, or service requirement for program offices to document these decisions. DOD officials stated that there is currently no DOD-level process for documenting how weapon system program offices determined whether to consider or test chemical and biological survivability. There is no effective DOD-level oversight of how chemical and biological survivability is considered by weapon system program offices. In 1993, Congress directed the Secretary of Defense to designate an office as the single DOD focal point for chemical and biological defense matters. DOD subsequently identified the Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense Programs as the single DOD focal point for chemical and biological defense matters. However, the military services and various offices within DOD never adopted a consistent method for incorporating chemical and biological survivability and related testing into major weapon system development and acquisition, including oversight responsibilities. Between 1994 and 2004, GAO and DOD Inspector General reports identified multiple management and oversight process problems regarding the incorporation of chemical and biological survivability into weapon system development. Various military service acquisition offices and DOD agencies, such as the U.S. Army Nuclear Chemical Agency, and the office of the Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense, held differing views as to where this responsibility resided and how chemical and biological survivability should be incorporated into weapon system development.
These differing views have hindered the development of an oversight process and prevented effective monitoring of weapon system program office decisions regarding chemical and biological survivability. Although the Office of the Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense Programs directed the development and issuance of DOD's August 2005 interim report, DOD continues to lack a clear and effective department-level process for overseeing the inclusion of chemical and biological survivability in weapon system development. In addition, according to DOD officials, no single joint organization, such as the Joint Requirements Oversight Council or the Joint Requirements Office, specifically monitors or tracks whether weapon system chemical and biological survivability is considered in the weapon system acquisition process. There also is no specific chemical and biological survivability Functional Capabilities Board to review program office survivability decisions. DOD officials stated that these joint oversight organizations do not have a role in overseeing weapon system chemical and biological survivability and that consideration of survivability requirements during the acquisition process is therefore service-specific. Furthermore, because chemical and biological survivability is not usually a key performance parameter for a weapon system, it is often traded off to satisfy other pressing requirements dealing with weapon system cost, schedule, or performance. DOD officials we spoke with acknowledged that program cost and schedule concerns could reduce the amount of chemical and biological weapon system survivability testing conducted. While the Milestone Decision Authority focuses on requirements associated with key performance parameters, none of the nine weapon systems we reviewed included chemical and biological survivability as a key performance parameter. 
Only specific chemical and biological equipment, such as detection, protection, and decontamination equipment, has identified chemical and biological survivability as a key performance parameter. DOD, through DTIC, maintains a centralized database for science and technology information that could facilitate program offices' consideration of weapon system chemical and biological survivability, but the comprehensiveness of the survivability information in this database is unknown. We found it unlikely that this database is comprehensive for three reasons: (1) it is unclear whether chemical and biological survivability information is covered by DOD policy, (2) no process has been established governing how information should be submitted to DTIC, and (3) no office or organization is responsible for ensuring that information is submitted to DTIC. It is unclear whether chemical and biological survivability information is covered by the broad DOD policy directing that scientific and technical information be submitted to DTIC. This policy requires that DTIC be provided with copies of DOD-sponsored scientific and technical information, but does not specifically address whether chemical and biological survivability information is included. Some DOD officials involved in chemical and biological survivability research and/or testing told us that they believed they were not required to submit the results of their work to DTIC. Further, there is no established process for submitting chemical and biological information to DTIC. As a result, individual personnel and organizations submit information to DTIC through ad hoc actions, and some DOD officials expressed concern that not all information is submitted to DTIC as required. Finally, no office or organization in DOD has been clearly designated as responsible for exercising oversight to ensure that chemical and biological research and testing results are submitted to DTIC. 
The DOD instruction addressing management of the collection of scientific and technical information assigns responsibility for submitting research and testing results to the DOD activities involved, but this instruction does not specifically indicate whether the activity sponsoring or approving the work or, alternatively, the organization performing it is responsible for its submission to DTIC. Officials at the DOD research and testing facilities we visited told us they routinely submitted the results of their work to DTIC, and we observed that DTIC and CBIAC were storing large amounts of this information. The two major DOD chemical and biological research and testing facilities we visited had an oversight process in place for ensuring that all research and testing projects submitted the required information to DTIC. However, responsibility for submitting this information was either left to individual research or testing staff, or was presumed to have been submitted to DTIC by the program offices requesting the work. DTIC officials stated that DTIC was not responsible for ensuring that DOD research and testing facilities submitted all research and testing results, and that DTIC had neither the authority nor the desire to do this. We could not identify any military service or program office level oversight for ensuring that research and testing results were submitted to DTIC, and some of the program offices we visited said the submission of research and test results to DTIC was not their responsibility. The absence of an internal control for ensuring that research and test results are submitted to DTIC and entered in DTIC's database could result in unnecessary expenditures on duplicative work. 
For example, if research or testing is performed regarding an aspect of survivability, but its results are not entered in the DTIC database, officials in another program office interested in the same research or testing might fail to recognize that it had already been performed and cause this work to be done again. The issues identified in previous DODIG and GAO reports regarding the incorporation of chemical and biological survivability during the weapon system acquisition process remain largely unresolved. Without DOD establishing consistent policy requiring that chemical and biological survivability be considered during weapon system acquisition and establishing a clear process for doing so, the incorporation of chemical and biological survivability into major weapon system acquisition is likely to remain varied and inconsistent. Consequently, military planners and commanders are likely to face varying weapon system performance, availability, and interoperability issues. This, in turn, could complicate the planning and execution of operations and increase the risk of mission failure, because systems that are not chemically or biologically survivable but become exposed to chemical or biological agents may not be available to a combatant commander for reuse in critical missions, such as deploying or supplying troops. Furthermore, without consistent documentation of program offices' rationales for trade-off decisions in their consideration of weapon system chemical and biological survivability, DOD's ability to identify and analyze associated risks could be hindered. Finally, the absence of a clearly defined DOD-level process for overseeing military service and program office actions limits DOD's ability to ensure that appropriate weapon system survivability decisions are being made. 
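The internal-control gap described above can be illustrated with a minimal sketch: a program office checks a central index of previously submitted results before commissioning and paying for new testing. This is a conceptual illustration only, not DOD or DTIC software, and every record name in it is hypothetical.

```python
# Conceptual sketch of the missing internal control: consult a central index
# of already-submitted research and test results before commissioning new
# work. Index contents and field names below are hypothetical.

def should_commission_test(central_index, material, test_type):
    """Return True only if no prior submitted result covers this pair."""
    return (material, test_type) not in central_index

# Hypothetical index of results already submitted to a central repository.
submitted_results = {
    ("canopy composite", "live-agent coupon"): "report A-1234",
}

# Prior results exist, so new testing would duplicate work already paid for.
print(should_commission_test(submitted_results, "canopy composite", "live-agent coupon"))   # False
# No prior results, so new testing may be warranted.
print(should_commission_test(submitted_results, "canopy composite", "decontaminant soak"))  # True
```

Without such a check, the duplicate-expenditure risk the report describes goes undetected, because nothing connects a new test request to work another office has already completed.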
Without clarifying existing policies regarding which research and testing information should be submitted, the process to be used for submitting it, and which DOD offices or organizations are responsible for overseeing its submission, DTIC will likely be unable to ensure the maintenance of a centralized database containing comprehensive chemical and biological research and testing information. This could limit DOD's ability to efficiently and economically assess the effects of chemical and biological agent contamination on weapon system components and materials, and could result in duplicative research and testing, thus causing unnecessary design and development costs. To better ensure the incorporation of chemical and biological survivability into weapon systems, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following six actions: Either modify current DOD policy or develop guidance to ensure that chemical and biological survivability is consistently addressed in the weapon system acquisition process. This policy or guidance should establish a clear process for program offices to follow regarding the extent to which chemical and biological system survivability should be considered and tested; require consistent, DOD-wide documentation of decisions regarding how weapon system chemical and biological survivability is considered and tested; and establish an oversight process within DOD and the services for monitoring weapon system program office decisions; modify current DOD policy to ensure that DOD's database of chemical and biological scientific and technical information is comprehensive. 
This modified policy should state which chemical and biological survivability information belongs in the body of scientific and technical information that is required to be submitted to DTIC; clarify responsibilities and establish a specific process for the submission of chemical and biological scientific and technical information to DTIC; and designate which DOD office or organization is responsible for exercising oversight to ensure that this information is submitted to DTIC. In commenting on a draft of this report, DOD concurred with all recommendations. Regarding our recommendations for either modifying current DOD policy or developing guidance to ensure that chemical and biological survivability is consistently addressed in the weapon system acquisition process, DOD plans to issue a Chemical Biological Contamination Survivability Policy by May 2006 and subsequently draft a DOD Directive addressing Chemical, Biological, Radiological, and Nuclear Survivability. With regard to our recommendations for modifying current DOD policy to ensure that DOD's database of chemical and biological scientific and technical information is comprehensive, DOD initiated the development of a chemical and biological material effects database by forming and hosting an executive steering committee that met for the first time in March 2006. DOD plans to establish and institute this database at the Chemical and Biological Defense Information and Analysis Center (CBIAC) managed by the Defense Technical Information Center (DTIC). The Assistant to the Secretary of Defense for Nuclear and Chemical and Biological Defense Programs is overseeing the development of this database, which DOD expects to be ready by the end of Fiscal Year 2007. DOD's comments are reprinted in appendix III. DOD also provided technical comments, which we have incorporated as appropriate. 
We are sending copies of this report to the Secretaries of Defense, the Air Force, the Army, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To assess the extent to which DOD addresses weapon system chemical and biological survivability during the acquisition process, we reviewed DOD, joint staff, and service policies, guidance, and procedures and interviewed officials throughout DOD. We also reviewed a nonprobability sample of nine major weapon systems. We selected programs for this nonprobability sample based on several factors, including (1) high dollar value, (2) whether the weapon system was a joint program, and (3) risk of exposure to chemical and biological weapons. The methodology used to select our sample helped achieve a sample of weapon systems that was both diverse and relevant to chemical and biological survivability. For example, the sample includes weapon systems from all military services and all types of systems: land, sea, and air. The sample also includes both legacy systems and those currently in development. To understand how DOD's acquisition, testing, and data submission and storage policies affect weapon system program offices' practices, we spoke with officials and examined documentation from the nine weapon system program offices we reviewed. 
The list of selected weapon systems is provided below: DD(X) Destroyer; Stryker Infantry Carrier; V-22 Osprey Vertical Lift Aircraft. To determine the extent to which DOD maintains a comprehensive database for facilitating the inclusion of chemical and biological survivability in weapon system design and development, we reviewed DOD and service policies, guidance, and procedures. We compared these policies, guidance, and procedures to the objectives and fundamental concepts of internal controls defined in Standards for Internal Control in the Federal Government. We also conducted interviews with database officials and members of the chemical and biological testing community and reviewed documents at the following locations, identified in consultation with DOD officials and in previous GAO reports as crucial to this subject area: Air Force Research Laboratory, Dayton, Ohio; Army Research Laboratory, Survivability and Lethality Analysis Directorate, Aberdeen, Maryland; Chemical and Biological Information Analysis Center, Edgewood, Maryland; Defense Technical Information Center, Fort Belvoir, Virginia; West Desert Test Center, Dugway Proving Ground, Utah; and Defense Threat Reduction Agency, Alexandria, Virginia. We conducted our review from February 2005 through January 2006 in accordance with generally accepted government auditing standards. [Notes from appendix II table: weapon systems conducted testing at either the coupon, component, or system level; weapon system in design phase; specific procedures for the consideration of chemical and biological survivability not developed; concept of operations precludes this vehicle from operating in a chemically or biologically contaminated environment.] We found that the extent and nature of chemical and biological survivability testing varied widely in all nine weapon systems we reviewed, even for similar types of systems. Both sea-based weapon systems we reviewed exhibited varying consideration of chemical and biological testing. 
For example, the Navy's Littoral Combat Ship (LCS) program office considered chemical and biological survivability testing low-risk due to the perceived operating environment and concept of operations for this weapon system. Officials stated that the key survivability approach will be to reduce susceptibility to contamination through detection and avoidance. In contrast, the Navy's next generation destroyer DD(X) was designed with a higher chemical and biological system protection level, and consequently the program office conducted limited coupon testing of specific materials found in the ship's superstructure. In its technical comments on this report, DOD stated that this occurred because the DD(X) concept of operations does not preclude exposure to chemical and biological attacks, while the LCS concept of operations does preclude exposure to chemical and biological agents. These systems thus utilized different concepts of operations although both are intended to operate in a littoral environment. DOD and program officials stated that land systems would be those most likely to include chemical and biological survivability testing because of the increased likelihood of encountering contamination on the modern battlefield. However, these programs also conducted tests very different from each other although they are intended for the same operating environment. The Marine Corps' Expeditionary Fighting Vehicle program office conducted four chemical and biological materials tests that looked at the effects of decontaminants on a variety of materials and included extensive tests using Chemical Agent Resistant Coating on the exterior and interior of the vehicle. In comparison, program officials from the Army's new wheeled personnel carrier, Stryker, used a different approach, focusing on applying a chemical agent simulant to a complete Stryker vehicle and then conducting decontamination procedures. 
However, in this case a different testing approach for a similar system may have been appropriate because the Stryker is not constructed with new materials and all existing materials used in constructing the Stryker meet military specification requirements for chemical and biological survivability. The Army's Future Combat System is currently reassessing chemical and biological survivability in its design and development. This program is still in development and has not reached the point where definitive decisions on chemical and biological survivability are applicable. The Army sponsor and the program office have been coordinating with the Joint Requirements Oversight Council, U.S. Army Nuclear and Chemical Agency, and the Army Training and Doctrine Command in creating chemical and biological survivability requirements. Of the four aircraft weapon system programs we sampled, three conducted similar levels of chemical and biological testing. Of the three current systems, the Air Force's F/A-22 Raptor and Joint Strike Fighter program offices conducted testing as extensive as that conducted by the Navy for the V-22 Osprey, although these two systems were assessed as much less likely to encounter chemical and biological contamination than the V-22 Osprey. The V-22 Osprey program office performed vulnerability assessments, survivability assessments, and some material coupon tests. Both the Air Force Joint Strike Fighter and F/A-22 Raptor program offices conducted complementary material and component contamination and decontamination compatibility tests. To identify material survivability issues, the F/A-22 Raptor program office contracted with a defense contractor to perform a literature search in advance of any testing. The Joint Strike Fighter program office effectively employed the results of this F/A-22 Raptor testing by using the survivability manual developed for the F/A-22 Raptor rather than developing its own. 
This manual was effectively used as a reference to meet both programs' chemical and biological survivability and decontamination thresholds following exposure to chemical and biological weapons and decontamination procedures. The legacy aircraft system we reviewed, the C-17, conducted little chemical and biological testing because much of its testing and development occurred during a different threat environment. Program officials stated that decontamination procedures for the C-17 were developed in the 1980s and that the chemical and biological survivability requirements were drastically scaled down after the end of the Cold War. Many factors affected the program office's determination about how extensively to test a weapon system's chemical and biological survivability. These factors included the type of system (i.e., air, land, or sea), required system capabilities, system concept of operations, the perceived chemical and biological threat, and other factors related to the status of system cost, schedule, and performance. Senior DOD officials stated that each service sponsor has the ability to choose whether to accept the risks related to cost and schedule to incorporate testing of chemical and biological survivability. DOD officials stated that in general land systems are perceived as the most likely to encounter chemical and biological contamination and that the perceived threat for sea and air systems has traditionally been considered lower than the perceived threat for land systems. This perception was based on old Cold War concepts and has since changed. DOD officials told us that asymmetric threats are a greater concern today and that system developers must weigh the threat context as they are developing systems and deciding what types of survivability to test based on perceived risk. 
Program offices we visited stated that the high financial cost of both live and simulated chemical and biological agent testing was a factor that influenced decisions about testing weapon system chemical and biological survivability. For example, officials at the Expeditionary Fighting Vehicle program office estimated that coupon testing with live agents could cost approximately $30,000 to $50,000, and full system, live agent field testing of equipment at a facility such as the West Desert Test Center at Dugway Proving Ground would cost approximately $1 million. In addition, the C-17 program office stated that live agent testing cost approximately $1 million. Interviews with officials at various DOD research facilities where testing is conducted supported these amounts. F/A-22 program officials also stated that although they conducted coupon and component tests, they would not encourage a full system chemical and biological survivability test because such a test would be too expensive and would destroy the aircraft being tested. In addition to the contact named above, William Cawood, Assistant Director; Renee S. Brown, Jane Ervin, Catherine Humphries, David Mayfield, Renee McElveen, Anupama Patil, Matthew Sakrekoff, Rebecca Shea, and Cheryl Weissman also made key contributions to this report.

The possibility that an adversary may use chemical or biological weapons against U.S. forces makes it important for a weapon system to be able to survive such attacks. In the National Defense Authorization Act for Fiscal Year 2005, Congress mandated that the Department of Defense submit a plan to address weapon system chemical and biological survivability by February 28, 2005. This plan was to include developing a centralized database with information about the effects of chemical and biological agents on materials used in weapon systems. DOD did not submit its plan as mandated. 
GAO was asked to evaluate (1) the extent to which DOD addresses weapon system chemical and biological survivability during the acquisition process, and (2) DOD's internal controls for maintaining a comprehensive database that includes chemical and biological survivability research and test data for weapon system design and development. The extent to which chemical and biological survivability is considered in the weapon system acquisition process is mixed and varied. Although DOD strategic guidance and policy have emphasized the growing threat of an adversary's use of chemical and biological weapons for over a decade, DOD, joint, and military service weapon system acquisition policies are inconsistent and do not establish a clear process for considering and testing system chemical and biological survivability. To assess the extent to which DOD addresses chemical and biological survivability during the acquisition process, GAO reviewed a nonprobability sample of nine major weapon systems selected based on high dollar value, whether the system was a joint program, and risk of exposure to chemical and biological weapons. Because DOD and joint acquisition policies do not require that survivability be specifically addressed, the military services have developed their own varying and unique policies. Thus, for the nine weapon systems GAO reviewed, the program offices involved made individual survivability decisions, resulting in inconsistent survivability consideration and testing. In the absence of DOD requirements, program offices also inconsistently document their decisions regarding how they consider and test chemical and biological survivability. Furthermore, DOD policies do not establish a clear process for responsibility, authority, and oversight for monitoring program office decisions regarding chemical and biological survivability. 
Without establishing consistent policies requiring that chemical and biological survivability be considered during weapon system acquisition, and a clear process for doing so, military planners and commanders are likely to face varying weapon system performance, availability, and interoperability issues. These could negatively affect system availability in a contaminated environment and limit DOD's ability to identify risk and ensure that appropriate decisions are made. DOD, through its Defense Technical Information Center (DTIC), maintains a centralized database for science and technology information that could facilitate program offices' consideration of weapon system chemical and biological survivability, but the comprehensiveness of this database is unknown due to inadequate internal controls. It is unlikely that the DTIC database contains fully comprehensive information about this for three reasons. First, it is unclear whether this information is covered by the broad DOD policy directing that scientific and technical information be submitted to DTIC. Second, there is no established process for submitting scientific and technical information to DTIC. As a result, it is submitted to DTIC through the ad hoc actions of individual personnel and organizations, and some DOD officials expressed concern that not all information is being submitted to DTIC. Third, no office or organization in DOD has been given clear oversight responsibility to ensure that information is submitted to DTIC. The lack of a database with comprehensive information about weapon system chemical and biological survivability creates the risk of unnecessary expenditures on duplicative testing. |
The F-35 Joint Strike Fighter program is a joint, multinational acquisition intended to develop and field an affordable, highly common family of next generation strike fighter aircraft for the United States Air Force, Navy, Marine Corps, and eight international partners. The F-35 is a single-seat, single-engine aircraft incorporating low-observable (stealth) technologies, defensive avionics, advanced sensor fusion, internal and external weapons, and advanced prognostic maintenance capability. There are three variants. The conventional takeoff and landing (CTOL) variant, designated the F-35A, will be a multi-role, stealthy strike aircraft replacement for the Air Force's F-16 Falcon and the A-10 Thunderbolt II aircraft, and will complement the F-22A Raptor. The short takeoff and vertical landing (STOVL) variant, the F-35B, will be a multi-role stealthy strike fighter to replace the Marine Corps' F/A-18C/D Hornet and AV-8B Harrier aircraft. The carrier-suitable variant (CV), the F-35C, will provide the Department of the Navy a multi-role, stealthy strike aircraft to complement the F/A-18 E/F Super Hornet. Lockheed Martin is the aircraft contractor and Pratt & Whitney is the engine contractor. DOD began the Joint Strike Fighter program in October 2001 with a highly concurrent, aggressive acquisition strategy with substantial overlap between development, testing, and production. The program was rebaselined in 2004 following weight and performance problems and rebaselined again in 2007 because of additional cost growth and schedule slips. Following an extensive department-wide review, the Secretary of Defense in February 2010 announced a major restructuring of the program due to poor cost and schedule outcomes and continuing problems. DOD added time and money for development, provided additional resources for testing, and reduced the number of aircraft to be procured in the near-term. 
In March 2010, the department declared that the program exceeded critical cost growth thresholds established by statute—a condition known as a Nunn-McCurdy breach—and subsequently certified to the Congress in June 2010 that the F-35 program should continue. Due to the cost breach, the Under Secretary of Defense for Acquisition, Technology, and Logistics rescinded the program's approval to enter system development and DOD began efforts to establish a new acquisition program baseline. The department continued restructuring actions during 2011 and 2012 that added more cost, extended schedules, and further reduced aircraft procurement quantities in the near-term. The quantity of F-35 aircraft to be procured in total was not changed, but restructured plans have deferred to future years the procurement of 410 aircraft originally planned to be procured through 2017 based on the 2007 revised baseline. Through the end of calendar year 2012, the contractor has delivered a total of 52 aircraft–14 test and 38 production aircraft. In March 2012, DOD established a new acquisition program baseline for the F-35 program that incorporated the numerous positive and more realistic restructuring actions taken since 2010. Officials also reauthorized continuation of system development, approved continuation of low rate initial procurement, divided the program for reporting purposes into aircraft and engine subprograms, and took other actions required due to the Nunn-McCurdy cost breach. The March 2012 baseline is the F-35's fourth, including the original estimate at the start of development in October 2001. Table 1 shows changes in cost, quantity, and major schedules associated with each baseline and also a June 2010 interim estimate at the time of the Nunn-McCurdy breach. The causes of cost growth and schedule delays from 2001 to 2012 are documented in past GAO reports (see appendix I and Related Products). The F-35 program made progress in 2012 on several fronts. 
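The Nunn-McCurdy breach mentioned above turns on statutory unit-cost growth thresholds. As commonly summarized from 10 U.S.C. 2433, a critical breach occurs at roughly a 25 percent unit-cost increase over the current baseline estimate or a 50 percent increase over the original baseline estimate; the sketch below uses those commonly cited percentages and illustrative dollar figures, not the F-35's actual values.

```python
# Sketch of the statutory unit-cost breach test, as commonly summarized:
# "critical" at roughly +25% over the current baseline estimate or +50%
# over the original baseline estimate. Figures below are illustrative only.

def is_critical_breach(unit_cost, current_baseline, original_baseline):
    over_current = (unit_cost - current_baseline) / current_baseline
    over_original = (unit_cost - original_baseline) / original_baseline
    return over_current >= 0.25 or over_original >= 0.50

# Illustrative: a $150M unit cost against a $110M current baseline (+36%)
# and a $90M original baseline (+67%) would trip both thresholds.
print(is_critical_breach(150.0, 110.0, 90.0))  # True
```

Note that a program can breach on the original-baseline test alone, which is why rebaselining (as the F-35 did in 2004, 2007, and 2012) does not erase cumulative growth from the statutory comparison.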
The program met or substantially met most of its key management objectives established for the year. Also, development flight testing exceeded the planned number of flights by a good margin for 2012, but did not quite accomplish the planned number of test points. The program made considerable progress in addressing significant technical risks needing resolution, such as the helmet mounted display. Furthermore, software management practices improved, but this area continued to require more time and effort than planned. While the F-35 program made progress in 2012, the bulk of development testing and evaluation is ahead, is planned to continue into 2016, and is expected to identify additional deficiencies impacting aircraft design and performance. To date, slightly more than 11 percent of development contract performance specifications have been verified as met, and the development flight test program has cumulatively accomplished just over one-third of the test points and test flights planned. The operational test community raised concerns about the F-35's readiness for training, development test plans and results, and the schedule and resources for starting initial operational testing in 2017. The F-35 program office annually establishes major management objectives it wants to achieve in the upcoming year. The program achieved 7 of the 10 primary objectives (70 percent) it established for 2012 and made substantial progress on one other. Table 2 summarizes the 2012 objectives and accomplishments. In addition to the 7 objectives met, the F-35 program substantially met one more: the block 3 critical design review was completed in late January 2013, following the preliminary design review in November 2012. The remaining two objectives were not met: (1) the contractor delivered 30 production aircraft compared to the program goal of 40, and (2) its Earned Value Management System (EVMS) corrective action plan was not approved. 
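Earned value management, the discipline behind the EVMS finding above, rests on a small set of standard metrics comparing earned value (EV, the budgeted cost of work actually performed) with planned value (PV) and actual cost (AC). The sketch below applies these textbook formulas to illustrative numbers; it is not Lockheed Martin or F-35 program data.

```python
# Standard earned value management metrics. EV = budgeted cost of work
# performed, PV = planned value, AC = actual cost. Values are illustrative.

def evm_metrics(ev, pv, ac):
    return {
        "cost_variance": ev - ac,      # negative -> over cost
        "schedule_variance": ev - pv,  # negative -> behind schedule
        "cpi": ev / ac,                # cost performance index; < 1.0 -> over cost
        "spi": ev / pv,                # schedule performance index; < 1.0 -> behind schedule
    }

# Illustrative: $80M of work earned against a $100M plan and $90M spent.
m = evm_metrics(ev=80.0, pv=100.0, ac=90.0)
print(round(m["cpi"], 2))  # 0.89: each dollar spent earned about 89 cents of work
print(m["spi"])            # 0.8: behind schedule
```

Metrics like these are only as trustworthy as the contractor's EVMS that produces the underlying EV, PV, and AC figures, which is why the compliance deficiencies DCMA found matter for program oversight.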
EVMS compliance is a long-standing issue and concerns all Lockheed Martin aircraft produced for DOD, not just the F-35. In 2007, the Defense Contract Management Agency (DCMA), the agency responsible for auditing defense contractors' systems, found Lockheed Martin's process did not meet 19 of 32 required guidelines and, in October 2010, withdrew the determination of compliance. While acknowledging that Lockheed Martin has made improvements, DCMA in 2012 found the company still deficient on 13 guidelines. EVMS is an important, established tool for tracking costs, controlling schedule, identifying problems early, and providing accurate product status reports. DOD requires its use by major defense suppliers to facilitate good insight and oversight of the expenditure of government dollars. The F-35 development flight test program also substantially met 2012 expectations with some revisions to original plans. The program exceeded its planned number of flights by 18 percent, although it fell short of its plan in terms of test points flown by about 3 percent, suggesting that the flights flown were not as productive as expected. Test officials had to make several adjustments to plans during the year due to aircraft operating and performance limitations and late releases of software to test. As a result, none of the three variants completed all the 2012 baseline points as originally planned. However, the test team was able to add and complete some test points that had been planned for future years. In this manner, the program was actually able to accomplish more test points in total than planned. Figure 1 compares the total baseline flight test points accomplished in 2012 against the initial plan for each test vehicle. Results from flight testing in 2012 included the following: Aircraft dedicated to testing mission systems exceeded the number of planned flights and fell just short of accomplishing the total test points planned. 
Testing supported development of software providing training and initial warfighting capability as well as baseline signature testing. Overall progress in verifying and fielding enhanced capabilities was limited, largely because of late and incomplete software. The Navy's F-35C carrier-suitable variant exceeded its number of planned flights and planned test points for 2012. Testing verified the basic flight envelope (demonstrating ranges of speed and altitude), verified flight with external weapons, and prepared the aircraft for simulated carrier landings. The program also accomplished shore-based tests of a redesigned arresting hook (the hook engages the landing wires on aircraft carriers). The Marine Corps' F-35B short takeoff and vertical landing variant exceeded the planned number of flights and test points. It successfully completed the first weapons release, engine air start tests, fuel dump operations, expanded flight envelope with weapons loaded, and radar signature testing. It also tested redesigned air inlet doors in vertical lift operations. The Air Force's F-35A conventional takeoff and landing variant accomplished high angle of attack testing, initial weapons separation, and engine air start. It also evaluated flying qualities with internal and external weapons, and expanded the envelope for airspeed and altitude. This variant did not accomplish as many flights as planned and fell short of planned test points by about 15 percent. Operating restrictions and deficiencies in the air refueling system were the main constraints. Flight, ground, and lab testing has identified significant technical and structural concerns that, if not addressed, would substantially degrade the F-35's capabilities and mission effectiveness.
The F-35 program made considerable progress in 2012 to address these major technical risks: The helmet mounted display (which provides flight data, targeting, and other sensor data to the pilot) is integral to the mission systems architecture, to reducing pilot workload, and to achieving the F-35's concept of operations. The original helmet mounted display encountered significant technical deficiencies and did not meet warfighter requirements. The program is pursuing a dual path by developing a second, less capable helmet while working to fix the first helmet design. Both helmets are being evaluated, and program and contractor officials told us that they have increased confidence that the helmet deficiencies will be fixed. DOD may decide which helmet to procure in 2013, but the selected helmet is not expected to be integrated into the baseline aircraft until 2015. The Autonomic Logistics Information System (ALIS) is an important tool to predict and diagnose maintenance and supply issues, automating logistics support processes and providing decision aids aimed at reducing life cycle sustainment costs and improving force readiness. ALIS is developed and fielded in increments. Limited capability ALIS systems are in use at training and testing locations. More capable versions are being developed, and program and contractor officials told us that the program is on track to fix identified shortcomings and field the fully capable system in 2015. Limited progress was made in 2012 on developing a smaller, transportable version needed to support unit level deployments to operating locations. During 2012, the carrier variant Arresting Hook System was redesigned after the original hook was found to be deficient, which prevented active carrier trials. During shore-based tests, the program accomplished risk reduction testing of a redesigned hook point to inform this new design.
The preliminary design review was conducted in August 2012 and the critical design review in February 2013. Flight testing of the redesigned system is slated for late 2013. Ground testing also made continued progress in 2012, including structural and durability testing to verify that all three variants can achieve expected life and identify life-limited parts. Over time, testing has discovered bulkhead and rib cracks. The program is testing some redesigned structures and planning other modifications. Officials plan to retrofit test and production aircraft already built and make changes to the production line for subsequent aircraft. Current projections show the aircraft and modifications remain within weight targets. The F-35 software development effort is one of the largest and most complex in DOD history. It is essential to achieve capabilities such as sensor fusion, weapons and fire control, maintenance diagnostics, and propulsion. Recent management actions to refocus software development activities and to implement improvement initiatives appear to be yielding benefits, but software will continue to be a very challenging and high risk undertaking for this program, especially for mission systems. Over time, software requirements have grown in size and complexity and the contractor has taken more time and effort than expected to write computer code, integrate it on aircraft and subsystems, conduct lab and flight tests to verify it works, and to correct defects found in testing. The aircraft contractor and F-35 program office have recently taken steps to improve software management and output. In addition to completing most work on the first major software block, other significant management actions should enhance future software and mission system outcomes. 
These actions include: starting up and operating a second system integration lab, adding substantial testing and development capability; prioritizing and focusing resources on the next block of software and decreasing concurrent work on multiple blocks; implementing improvement initiatives recommended by an independent software review; and evaluating the possible deferral of some capabilities, either to later blocks or to follow-on development efforts outside the current F-35 program. Our April 2011 report discussed the need for several of these actions. For instance, we recommended that DOD undertake an independent software review. Subsequently, an independent review was conducted and contractor software managers implemented several improvement initiatives recommended by that review. These are yielding benefits. For example, program officials reported that the time span to fix defects has decreased from 180 days to 55 days, allowing the program to keep better pace even though the number of defects has increased. In addition, the time taken to build and release software to testing has decreased from 187 hours to 30 hours due to new automated processes. Contractor officials currently plan to broaden the assessment's initiatives to other software development efforts, including logistics and training. Our 2011 report also discussed the need to reduce concurrent block work and to evaluate the possible deferral of the most advanced capabilities to future increments; program officials are actively pursuing these areas as discussed above. These recent management actions are positive and encouraging, but overall, software development activities in 2012 lagged behind plans. Most software code has been developed, but a substantial amount of integration and test work remains before the program can demonstrate full warfighting capability.
Software capabilities are developed, tested, and delivered in three major blocks and two increments—initial and final—within each block. The status of the three blocks is described below: Block 1.0, providing initial training capability, was largely completed in 2012, although some final development and testing will continue. Also, the capability delivered did not fully meet expected requirements relating to the helmet, ALIS, and instrument landing capabilities. Block 2.0, providing initial warfighting capabilities and limited weapons, fell behind due to integration challenges and the reallocation of resources to fix block 1.0 defects. The initial increment, block 2A, delivered late and was incomplete. Full release of the final increment, block 2B, has been delayed until November 2013 and will not be complete until late 2015. The Marine Corps is requiring an operational flight clearance from the Naval Air Systems Command before it can declare an initial operational capability (IOC) for its F-35B force. IOC is the target date each service establishes for fielding an initial combat capable force. Block 3.0, providing full warfighting capability, to include sensor fusion and additional weapons, is the capability required by the Navy and Air Force for declaring their respective IOC dates. Thus far, the program has made little progress on block 3.0 software. The program intends initial block 3.0 to enter flight test in 2013, which will be conducted concurrently with the final 15 months of block 2B flight tests. Delivery of final block 3.0 capability is intended to begin nearly 3 years of developmental flight tests in 2014. This is rated as one of the program's highest risks because of its complexity. In particular, the development and testing of software-intensive mission systems are lagging, with the most challenging work ahead. About 12 percent of mission systems capabilities are validated at this time, up from 4 percent about 1 year ago.
Progress on mission systems was limited by contractor delays in software delivery, limited capability in the software when delivered, and the need to fix problems and retest multiple software versions. Further development and integration of the most complex elements—sensor fusion and the helmet mounted display—lie ahead. Sensor fusion integrates data from critical subsystems and displays the information to the pilot. Figure 2 depicts the percentage of sensor fusion work associated with each software block. About 36 percent of the sensor fusion work was completed in software block 1. Final verification and closure of remaining fusion requirements through block 3 will not be completed until 2016. The critical work to test and verify aircraft design and operational performance for the F-35 program is far from complete. Cumulatively since the start of developmental flight testing, the program has flown 2,595 of 7,727 planned flights (34 percent) and accomplished 20,495 of 59,585 test points (34 percent). For development testing as a whole, the program has verified 11.3 percent of the F-35 development contract specifications (349 of 3,094 specifications) through November 2012. Contract specifications include specific design parameters and operating requirements, such as speed and range, that the F-35 aircraft are expected to meet. Three-fourths of the total contract specifications cannot be fully evaluated, verified, and approved until the final increment of software is released and fully tested. Testing of the final increment is expected to begin in 2014 and continue through 2016. Initial operational test and evaluation (IOT&E) is scheduled to begin in 2017. This date is dependent on successful completion of development test and evaluation. IOT&E evaluates the combat effectiveness and suitability of the aircraft in an operationally realistic environment. Its successful completion is a prerequisite for DOD's plans to approve the F-35 for full rate production in 2019.
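The cumulative completion figures cited above follow directly from the counts given; a quick arithmetic check:

```python
# Verifying the cumulative flight-test and specification-verification
# percentages cited in the report from the underlying counts.
flights_done, flights_planned = 2595, 7727
points_done, points_planned = 20495, 59585
specs_verified, specs_total = 349, 3094

def pct(done, total):
    """Completion percentage, rounded to one decimal place."""
    return round(100 * done / total, 1)

flight_pct = pct(flights_done, flights_planned)   # ~33.6, i.e. "34 percent"
point_pct = pct(points_done, points_planned)      # ~34.4, i.e. "34 percent"
spec_pct = pct(specs_verified, specs_total)       # 11.3 percent
```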
Operational testers have raised concerns about the F-35's current operational capabilities and suitability, readiness for training activities, and the progress of developmental testing. Further, the testing offices in the Office of the Secretary of Defense (OSD) have not approved the latest revision to the test and evaluation master plan because of concerns about the timing and resources available for IOT&E and unacceptable overlap of development with the start of IOT&E. We will continue to monitor the concerns and progress of operational testing during future F-35 reviews. Achieving key performance parameters is critical to the F-35 meeting the warfighter's operational requirements. They include measures such as range, weapons carriage, mission reliability, and sortie rates. These parameters also cannot be fully verified until the end of IOT&E in 2019. Based on limited information, DOD is currently projecting that the F-35 program is either meeting or close to meeting at least threshold (minimum) performance requirements. While initial F-35 production overran target costs and delivered late, there are several encouraging signs indicating better outcomes in the coming years. Overall, manufacturing and supply operations are improving, with the latest data showing labor hours to build the aircraft decreasing, deliveries accelerating, quality measures improving, and parts shortages declining. That said, the program is working through the continuing effects of the F-35's highly concurrent acquisition strategy. For example, the program is continuing to incur substantial costs for rework to fix deficiencies discovered in testing, but the amount per aircraft is dropping. Nevertheless, continuing discoveries in testing will likely drive additional changes to design and manufacturing processes at the same time production rates increase.
DOD's substantial reductions in near-term procurement quantities have decreased, but not eliminated, the risk from investing billions of dollars on hundreds of aircraft before testing proves the aircraft design and verifies that its performance and reliability meet requirements. Analyses of labor, parts, and quality data, observations on the manufacturing floor, and discussions with defense and contracting officials provide signs that F-35 manufacturing and supply processes are improving. The aircraft contractor is moving down a steep learning curve, an indication that the work force is gaining important experience and that processes are maturing as more aircraft are built. Other indicators of improvement include the following: The decrease in labor hours needed to complete aircraft at the prime contractor's plant as the labor force gains experience. For example, the first Air Force production jet was delivered in May 2011 and required about 149,000 labor hours at the prime's plant to build, while an Air Force jet delivered in December 2012 required only about 94,000 labor hours. Overall, the contractor reported a 37 percent reduction in direct labor during 2012. The improvement in the contractor's labor efficiency rate, a measure of how long it takes to complete certain work tasks against engineering standards. Labor efficiency on the first production aircraft was 6 percent and improved to 13 percent for the 31st production aircraft. While still low, Defense Contract Management Agency officials stated that the rate should continue to improve with increased production due to work force learning and factory line enhancements. The decrease in span times—the number of calendar days to manufacture aircraft in total and in specific work staging areas. The aircraft contractor is altering assembly line processes to streamline factory flow. As a result, for example, span time in the final assembly area declined by about one-third in 2012 compared to 2011.
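The labor-hour decline described above is the kind of improvement a production learning curve predicts. The sketch below uses the classic Wright learning-curve model with an assumed 85 percent slope purely for illustration; the report does not state the program's actual curve parameters:

```python
import math

# Wright learning-curve model commonly used to project production labor:
# hours for cumulative unit n = T1 * n**b, where b = log2(slope).
# The 85 percent slope below is an assumption for illustration, not the
# F-35 program's actual curve.

def unit_hours(t1, n, slope=0.85):
    """Labor hours for the nth unit under a Wright learning curve."""
    b = math.log2(slope)
    return t1 * n ** b

# With an 85 percent curve, each doubling of cumulative output cuts
# unit labor hours by 15 percent.
h1 = unit_hours(149_000, 1)   # first unit (hours cited in the report)
h2 = unit_hours(149_000, 2)   # second unit under the assumed curve
```

This is why "moving down the learning curve as projected" matters for cost: the same model drives the contractor's projections of labor hours for future lots.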
The increase in factory throughput as the contractor delivered 30 production aircraft in 2012 compared to 9 in 2011. During our plant visit in 2012, we observed an increased level of activity on the manufacturing floor as compared to 2011. The contractor had more tooling in place, had altered and streamlined processes, and had factory expansion plans underway. The decrease in traveled work (work done out of sequence or incomplete items moving to the next work station), parts shortages on the line, and product defects. For example, traveled work declined 90 percent and the defect rate declined almost 80 percent in 2012 compared to 2011. Other quality indicators such as scrap rates and non-conformances also improved from prior years and are trending in a positive direction. These have all been major contributors to past cost increases and schedule delays. The accomplishment of a schedule risk analysis to improve the contractor’s master schedule and related schedules. A schedule risk analysis is a comprehensive evaluation that uses statistical techniques to examine the fidelity of schedule estimates and the likelihood of accomplishing work as scheduled. It provides better and timelier insight into program performance to help identify and resolve schedule roadblocks. The improvement in aircraft contractor manufacturing processes, although not fully mature compared to best practice standards. The aircraft contractor is using statistical process control to bring critical manufacturing processes under control so they are repeatable, sustainable, and consistently producing parts within quality tolerances and standards. The best practice standard is to have all critical manufacturing processes in control by the start of production. Just over one-third of manufacturing processes are currently judged to be capable of consistently producing quality parts at the best practice standard. 
The contractor has a plan in place to achieve the best practice standard by the start of full-rate production in 2019. We have observed this quality practice on only a few DOD programs. Going forward, effective management of the global supply chain is vital to boost production rates to planned levels, control costs, and maintain quality. The aircraft contractor is developing a global supply chain of more than 1,500 suppliers. Effective supplier management will be critical to efficient and quality manufacturing at higher annual rates. Currently, a relatively small number of suppliers provide most of the material, but that is expected to change in the future, especially as international firms get more of the business. Management of the international supplier base presents unique challenges, including (1) differing U.S. and foreign government policies, (2) differences in business practices, and (3) foreign currency exchange rates. These can complicate relationships and hinder effective supply chain integration. The aircraft contractor is implementing stringent supplier quality management practices. For example, Lockheed Martin officials assess the overall performance of key component suppliers against program goals for production affordability, contract cost growth, delivery times, part shortage occurrences, and field performance, as well as the number of corrective action reports filed against the supplier. In total, key component suppliers are assessed and rated across 23 measures, as applicable, and the contractor works with suppliers to improve performance. As discussed earlier, labor hours to build aircraft are decreasing with experience, and the program is moving down the learning curve as projected. The fifth annual low rate initial production (LRIP) contract was recently negotiated with cost targets reflecting additional gains in efficiency.
DOD and contractor officials also expressed confidence that contracts for the 6th and 7th annual buys will be negotiated by this summer and reflect similar performance. The first four LRIP contracts, however, overran their target costs, in total by $1.2 billion. According to program documentation, the government's share of the total overrun is about $756 million under the sharing incentive provisions in these contracts. Cost increases range from 6.5 percent to 16.1 percent more than negotiated costs. LRIP 4, the largest by dollars and number of aircraft, had the smallest percentage increase in cost, indicating better performance. Contract costs and increases are summarized in table 3. The contractor has delivered 39 aircraft under LRIP contracts through the end of December 2012: nine in 2011 and 30 in 2012. Figure 3 tracks actual delivery dates against the dates specified in the contracts. Deliveries were late an average of 11 months compared to the contracted dates, but the data show that the delivery rate has improved considerably. For example, the first two production aircraft were late 16 and 15 months, respectively, whereas the last two delivered were each 2 months late. Fluctuations in some deliveries during mid-2012 reflect last summer's labor strike, which the contractor does not expect to recur. Other factors contributing to late deliveries include design changes to the aircraft; traveled work; scrap, repair, and rework hours; and parts shortages. In addition to contract cost overruns, the program is incurring substantial costs to retrofit (rework) produced aircraft to fix deficiencies discovered in testing. These costs are largely attributable to the substantial concurrency, or overlap, between testing and manufacturing activities. The F-35 program office projects rework costs of about $900 million to fix the aircraft procured on the first four annual procurement contracts.
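The government's reported share of the overrun on the first four LRIP contracts can be checked directly from the figures above:

```python
# Government share of the reported $1.2 billion total overrun on the
# first four LRIP contracts, per the figures cited in the report.
total_overrun_m = 1200.0   # total overrun, millions of dollars
govt_share_m = 756.0       # government's share under incentive provisions

govt_share_pct = round(100 * govt_share_m / total_overrun_m)  # ~63 percent
contractor_share_m = total_overrun_m - govt_share_m           # remainder
```

So under the sharing incentive provisions on these early contracts, the government absorbed roughly 63 cents of every overrun dollar.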
On average, rework adds about $15.5 million to the price of each of the 58 U.S. aircraft under these contracts. Substantial rework costs are forecast to continue through the 10th annual contract (fiscal year 2016 procurement), but at decreasing amounts annually and on each aircraft. The program office projects about $827 million more to rework aircraft procured under the next 6 annual contracts. Government liability for these costs depends on share ratios to be negotiated. The government and Lockheed Martin reached agreement under the LRIP 5 contract that costs for known changes due to concurrency will be shared 50/50. Other cost overruns under this contract will be shared 55/45 until the contract ceiling is reached, at which point the contractor assumes total responsibility for overruns. The lagging cost and schedule performance on the first four production contracts and the high costs of rework can be largely attributed to the continuing effects of the F-35's highly concurrent acquisition strategy. The program started manufacturing aircraft before designs were stable, before establishing mature manufacturing processes, and before sufficiently testing the design and aircraft performance. A November 2011 report on concurrency by senior level DOD officials confirms these observations. The report states that F-35 testing continues to find technical issues with significant design and production impacts that require rework on produced aircraft. It expressed that confidence in design stability was lower than expected given the quantities of aircraft procured, and noted the potential for more rework costs. Even with the positive trends in manufacturing, cost, and schedule discussed above, the government continues to incur risk by procuring large quantities of aircraft with the majority of testing still ahead. The contractor continues to make major design and tooling changes and alter manufacturing processes concurrent with development testing.
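The LRIP 5 sharing provisions described above can be sketched as a simple liability function. The share ratios come from the report; the dollar amounts used in the example are hypothetical, not contract figures:

```python
# Sketch of the LRIP 5 overrun-sharing terms described in the report:
# concurrency-change costs split 50/50; other overruns split 55/45
# (government/contractor) up to the contract ceiling; beyond the
# ceiling the contractor absorbs everything.

def government_liability(concurrency_cost, other_overrun, ceiling_room):
    """Government share of costs above target under the cited terms.

    ceiling_room: amount of 'other' overrun at which the ceiling is hit.
    """
    # Overrun beyond ceiling_room falls entirely on the contractor.
    shared_overrun = min(other_overrun, ceiling_room)
    return 0.50 * concurrency_cost + 0.55 * shared_overrun

# Hypothetical: $100M in concurrency changes, $200M in other overruns,
# with the ceiling reached after $150M of other overrun.
liab = government_liability(100.0, 200.0, 150.0)
```

The ceiling matters: in this hypothetical, the last $50 million of overrun costs the government nothing, which is the incentive structure the new contract terms were designed to create.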
Engineering design changes from discoveries during manufacturing and testing are declining in number, but are still substantial and higher than expected for a program this far into production. With extensive testing ahead, discoveries in testing will drive more design changes, possibly impacting manufacturing processes and the supplier base. Figure 4 graphically depicts monthly engineering change "traffic." The forecast indicates that about one-third of projected design changes in total are still to come and will hover around 200 per month through the end of system development, initial operational testing, and the start of full rate production in 2019. Demonstrating the reliability of a system is another indicator that the design is stable and ready for production. During system acquisition, reliability growth improvements should occur over time as problems are identified, tested, and fixed, usually through design changes and manufacturing process improvements. We have reported in the past that it is important to demonstrate that system reliability is on track to meet goals before production begins, as changes after production commences can be inefficient and costly. One key indicator of F-35 reliability is the mean flying hours between failures, that is, the average time an aircraft can fly before a maintenance action is required to repair a component or system that is not performing as designed. Figure 5 projects F-35 performance on this indicator compared to 2012 plans and eventual goals. Compared to data from one year ago, each variant demonstrated some reliability growth in 2012, but each is lagging behind its plan. The Marine Corps' STOVL demonstrated the biggest increase, from 0.5 hours in 2011 to 1.4 hours currently, but it is also the furthest behind plans. We also note that the rates planned for October 2012 were little changed from those established for October 2011.
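Mean flying hours between failures is a simple ratio of fleet flight hours to failure events. A minimal sketch, using hypothetical fleet totals (the report gives only the resulting rates, not the underlying counts):

```python
# Mean flying hours between failures (MFHBF): average flight hours
# accumulated per failure event requiring a maintenance action.
# The hour and failure counts below are hypothetical.

def mfhbf(flight_hours, failures):
    """Mean flying hours between failures for a fleet."""
    return flight_hours / failures

# E.g., a fleet that flew 700 hours and logged 500 failure events
# would demonstrate the 1.4-hour rate cited for the STOVL variant.
rate = mfhbf(700.0, 500)
```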
DOD is investing billions of dollars on hundreds of aircraft before the design is stable, testing proves that it works and is reliable, and manufacturing processes mature to where aircraft can be produced in quantity to cost and schedule targets. The department's substantial reductions in procurement quantities in the past few years lowered this risk, but did not eliminate it; the program has already committed to procuring 121 aircraft through the 2012 buy (the 6th annual procurement lot). According to the new acquisition baseline and flight test schedule, DOD will procure 289 aircraft for $57.8 billion before the end of developmental flight testing (see table 4). Ensuring that the acquisition costs of the F-35 are affordable, so that aircraft can be bought in the quantities and time required by the warfighter, will be of paramount concern to the Congress, U.S. military, and international partners. Annual acquisition funding requirements for the United States currently average $12.6 billion per year through 2037. Once the aircraft are acquired, defense officials consider the current forecasts of life cycle sustainment costs for the F-35 fleet unaffordable. Efforts are under way to lower annual operating and support costs. Uncertainties and delays in the F-35 program are forcing new plans for recapitalizing fighter forces, and the military services are incurring increased costs to buy, modify, and sustain legacy fighters. The March 2012 acquisition program baseline incorporates the department's positive restructuring actions since 2010. These actions place the F-35 program on firmer footing, but aircraft are expected to cost more and deliveries to warfighters will take longer than in previous baselines. In terms of acquisition funding requirements, the new baseline projects total development and procurement budget requirements of $316 billion from 2013 through 2037. Figure 6 shows these budget projections.
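The $12.6 billion annual average cited above follows from spreading the $316 billion total requirement over the 25 budget years from 2013 through 2037:

```python
# Deriving the average annual funding requirement cited in the report
# from the total development and procurement projection.
total_requirement_b = 316.0        # billions, fiscal years 2013-2037
years = 2037 - 2013 + 1            # 25 budget years, inclusive

avg_annual_b = round(total_requirement_b / years, 1)  # ~12.6
```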
The rebaselined program will require an average of $12.6 billion annually through 2037, an unprecedented demand on the defense procurement budget. Maintaining this level of sustained funding will be difficult in a period of declining or flat defense budgets and competition with other “big ticket items” such as the KC-46 tanker and a new bomber program. When approving the new 2012 program baseline, the acting Undersecretary of Defense for Acquisition, Technology, and Logistics established affordability unit cost targets for each variant to be met by the start of full-rate production in 2019. To meet these targets, the program will have to reduce unit costs by about 26 percent (STOVL), 35 percent (CTOL), and 39 percent (CV) from the unit costs in the fiscal year 2012 budget request. Our analysis indicates that these targets are achievable if the future year prices and quantities used to construct the new baseline are accurate. Some international partners are also expressing concern about F-35 prices and schedule delays. Besides the consequences for international cooperation and fighter force commonality, there are at least two other important financial impacts. First, U.S. future budgets assume the financial quantity benefits of partners purchasing at least 697 aircraft. Second, the current procurement profile for the F-35 projects a rapid buildup in partner buys—195 aircraft through 2017 that comprise about half the total production during the 5-year period 2013 through 2017. If fewer aircraft are procured in total or in smaller annual quantities, unit costs paid by the U.S. and partners will likely rise. To better understand the potential impacts on prices from changes in quantities, OSD’s Cost Assessment and Program Evaluation (CAPE) office did a sensitivity analysis to forecast impacts on F-35 average procurement unit costs assuming various quantities purchased by the United States and international partners. 
For example, if the United States bought its full quantity of 2,443 aircraft and the partners did not buy any aircraft, CAPE calculated that the average unit cost would increase by 6 percent. If the United States bought 1,500 aircraft and the partners bought their expected quantity of 697, unit costs would rise by 9 percent. If the United States bought 1,500 and the partners none, unit costs would rise 19 percent. In addition to the costs of acquiring aircraft, significant concerns and questions persist regarding the cost to operate and sustain F-35 fleets over the coming decades. The current sustainment cost projection by CAPE for all U.S. aircraft, based on an estimated 30-year service life, exceeds $1 trillion. This raises long-term affordability concerns for the military services and international partners. F-35 operating and support (O&S) costs are currently projected to be 60 percent higher than those of the existing aircraft the F-35 will replace. Using current program assumptions of aircraft inventory and flight hours, CAPE recently estimated annual O&S costs of $18.2 billion for all F-35 variants, compared to $11.1 billion spent in 2010 to operate and sustain the legacy aircraft. DOD officials have declared that O&S costs of this magnitude are unaffordable and are actively engaged in evaluating opportunities to reduce F-35 life-cycle sustainment costs, such as basing and infrastructure reductions, competitive sourcing, and reliability improvements. IOC dates are critical milestones for the F-35 program because these are the target dates for fielding initial combat capable forces as required by the warfighters when justifying the need for the new weapon system. As shown earlier in table 1, these dates have slipped over time and have not been reset in the new baseline.
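The CAPE quantity-sensitivity results cited above can be captured as a simple lookup from buy quantities to projected unit cost growth:

```python
# CAPE sensitivity scenarios cited in the report:
# (U.S. quantity, partner quantity) -> projected increase in average
# procurement unit cost, in percent.
cape_sensitivity = {
    (2443, 0): 6,      # full U.S. buy, no partner aircraft
    (1500, 697): 9,    # reduced U.S. buy, full partner buy
    (1500, 0): 19,     # reduced U.S. buy, no partner aircraft
}

worst_case_pct = max(cape_sensitivity.values())
```

The pattern in the data is the point of the analysis: unit cost growth is driven by total quantity, so partner purchases (697 aircraft) cushion the price the United States pays, and losing both partner and U.S. quantity compounds the increase.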
The military services have been reassessing their needs for several years and have deferred setting new target dates for acquiring warfighting capabilities until operational test plans are better understood. Based on service criteria espoused earlier in the program, it would appear that the earliest possible IOC dates are now 2015 for the Marine Corps and 2017 for the Air Force and Navy. Because of F-35 delays and uncertainties, the military services are extending the service life of legacy aircraft to bridge the gap in F-35 deliveries and mitigate projected shortfalls in fighter aircraft force requirements. In November 2012, we reported current cost estimates of almost $5 billion (in 2013 dollars) to extend the service life of 300 Air Force F-16s and 150 Navy F/A-18s, with additional quantities possible if needed to maintain inventory levels. At the Congress’s behest, the Navy is also buying 41 new F/A-18 E/F Super Hornets at a budgeted cost of about $3.1 billion (then-year dollars). The services will incur additional future sustainment costs to support these new and extended-life aircraft. F-35 delays and uncertainties continue to make it difficult for the services to establish and implement retirement schedules for existing fleets and to develop firm basing and manpower plans for housing and supporting future forces. Overall, the F-35 Joint Strike Fighter program is now moving in the right direction after a long, expensive, and arduous learning process. It still has tremendous challenges ahead. The program must fully validate design and operational performance against warfighter requirements, while, at the same time, making the system affordable so that the United States and partners can acquire new capabilities in the quantity needed and can then sustain the force over its life cycle. Recent restructuring actions have improved the F-35’s prospects for success, albeit at greater costs and further delays. 
Many of the restructuring actions—more time and resources for development flight testing, reduced annual procurements, the recognition of concurrency risks, independent cost and software assessments, and others—are responsive to our past recommendations. Recent management initiatives, including the schedule risk analysis and the software assessment, also respond to prior recommendations. As a result, we are not making new recommendations in this report. DOD and the contractor now need to demonstrate that the F-35 program can effectively perform against cost and schedule targets in the new baseline and deliver on promises. Until then, it will continue to be difficult for the United States and international partners to confidently plan, prioritize, and budget for the future; retire aging aircraft; and establish basing plans with a support infrastructure. Achieving affordability in annual funding requirements, aircraft unit prices, and life-cycle operating and support costs will in large part determine how many aircraft the warfighter can ultimately acquire, sustain, and have available for combat. DOD provided comments on a draft of this report, which are reprinted in appendix III. DOD concurred with the report’s findings and conclusions. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. The report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to this report are listed in appendix IV. Start of system development and demonstration approved. 
Primary GAO message Critical technologies needed for key aircraft performance elements not mature. Program should delay start of system development until critical technologies mature to acceptable levels. DOD response and actions DOD did not delay start of system development and demonstration, stating technologies were at acceptable maturity levels and that it would manage risks in development. The program underwent a replan to address higher-than-expected design weight, which added $7 billion and 18 months to the development schedule. We recommended that the program reduce risks and establish an executable, knowledge-based business case with an evolutionary acquisition strategy. DOD partially concurred but did not adjust strategy, believing that its approach balanced cost, schedule, and technical risk. The program set in motion a plan to enter production in 2007, shortly after first flight of the non-production-representative aircraft. The program planned to enter production with less than 1 percent of testing complete. We recommended the program delay investing in production until flight testing showed that the F-35 performed as expected. DOD partially concurred but did not delay the start of production because it believed the risk level was appropriate. Congress reduced funding for the first two low-rate production buys, thereby slowing the ramp-up of production. Progress was being made, but concerns remained about undue overlap in testing and production. We recommended limiting annual production quantities to 24 a year until flying quantities are demonstrated. DOD non-concurred and felt that the program had an acceptable level of concurrency and an appropriate acquisition strategy. DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources. We believed the new plan increased risks and that DOD should revise it to address testing, management reserves, and manufacturing concerns. 
We determined that the cost estimate was not reliable and that a new cost estimate and schedule risk assessment were needed. DOD did not revise the risk plan or restore testing resources, stating that it would monitor the new plan and adjust it if necessary. Consistent with a report recommendation, a new cost estimate was eventually prepared, but DOD refused to do a risk and uncertainty analysis. The program increased the cost estimate and added a year to development but accelerated the production ramp-up. An independent DOD cost estimate (JET I) projected even higher costs and further delays. Primary GAO message Moving forward with an accelerated procurement plan and use of cost-reimbursement contracts is very risky. We recommended the program report on the risks and mitigation strategy for this approach. DOD response and actions DOD agreed to report its contracting strategy and plans to Congress and to conduct a schedule risk analysis. The program completed the first schedule risk assessment, with plans to update it semiannually. The Department announced a major restructuring, reducing procurement and moving to fixed-price contracts. The program was restructured to reflect findings of a recent independent cost team (JET II) and an independent manufacturing review team. As a result, development funds increased, test aircraft were added, the schedule was extended, and the early production rate decreased. Costs and schedule delays inhibited the program’s ability to meet needs on time. We recommended the program complete a full comprehensive cost estimate and assess warfighter and IOC requirements. We suggested that Congress require DOD to tie annual procurement requests to demonstrated progress. DOD continued restructuring, increasing test resources and lowering the production rate. Independent review teams evaluated aircraft and engine manufacturing processes. Cost increases later resulted in a Nunn-McCurdy breach. 
Military services are currently reviewing capability requirements, as we recommended. Restructuring continued with additional development cost increases, schedule growth, further reductions in near-term procurement quantities, and a decreased rate of increase for future production. The Secretary of Defense placed the STOVL variant on a 2-year probation, decoupled the STOVL from the other variants, and reduced STOVL production plans for fiscal years 2011 to 2013. The restructuring actions are positive and, if implemented properly, should lead to more achievable and predictable outcomes. Concurrency of development, test, and production is substantial and poses risk to the program. We recommended the program maintain funding levels as budgeted, establish criteria for STOVL probation, and conduct an independent review of software development, integration, and test processes. DOD concurred with all three of the recommendations. DOD lifted STOVL probation, citing improved performance. Subsequently, DOD further reduced procurement quantities, decreasing funding requirements through 2016. The initial independent software assessment began, and ongoing reviews are planned through 2012. The program established a new acquisition program baseline and approved the continuation of system development, increasing costs for development and procurement and extending the period of planned procurements by 2 years. Primary GAO message Extensive restructuring places the program on a more achievable course. Most of the program’s instability continues to stem from concurrency of development, test, and production. We recommended that the Cost Assessment and Program Evaluation office conduct an analysis of the impact of lower annual funding levels and that the F-35 program office conduct an assessment of the supply chain and transportation network. 
DOD response and actions DOD partially concurred with conducting an analysis on the impact of lower annual funding levels and concurred with assessing the supply chain and transportation network. To evaluate F-35 Joint Strike Fighter program performance during calendar year 2012, we compared key management objectives on testing, training, contracting, and cost and schedule activities to progress made during the year on each objective. On development flight testing, we interviewed officials from the F-35 program office, the aircraft contractor, and the office of the Director of Operational Test and Evaluation (DOT&E) on development test plans and results against expectations. We obtained and analyzed data on flights and test points, both planned and accomplished during 2012, and also compared progress against the total plans to complete. We obtained officials’ comments and reports on technical risks. We evaluated progress made and work remaining on major technical risks, including the helmet, logistics system, carrier arresting hook, and structural cracks. We reviewed the status of software development and integration, contractor management improvement initiatives, issues with data fusion, and the impacts of late releases of software on the test program. We reviewed key documents related to this objective, including DOT&E’s annual F-35 assessment, the Joint Strike Fighter Operational Test Team Report, and the Independent Software Assessment. To assess manufacturing and supply performance indicators, production results, and design changes, we obtained and analyzed manufacturing contract cost, aircraft delivery, and work performance data through the end of calendar year 2012 to assess progress against plans. 
We reviewed data and briefings provided by the program office, aircraft contractor, and the Defense Contract Management Agency (DCMA) in order to identify issues and assess impacts on supplier performance, costs of rework, manufacturing labor and quality data, and maturity of design and manufacturing process controls. We also determined reasons for manufacturing cost overruns and delivery delays, discussed program and contractor plans to improve, and projected the impact on development and operational tests. We interviewed contractor and DCMA officials to discuss the Earned Value Management System (EVMS) and Lockheed’s progress in improving its system. We did not conduct our own analysis of EVMS since the system has not yet been revalidated by DCMA. We also reviewed the Office of the Secretary of Defense’s F-35 Joint Strike Fighter Concurrency Quick Look Review. To determine acquisition and sustainment costs going forward, we received briefings by program and contractor officials and reviewed financial management reports, budget briefings, annual Selected Acquisition Reports, monthly status reports, performance indicators, and other data through the end of calendar year 2012. We identified changes in cost and schedule and obtained officials’ reasons for these changes. We reviewed total program funding requirements in the Selected Acquisition Reports since the program’s inception and analyzed fiscal year 2013 President’s Budget data. We used these data to project annual funding requirements through the expected end of the F-35 acquisition in 2037. We obtained and discussed the life-cycle operating and support cost projections made by the Cost Assessment and Program Evaluation office and discussed the Department’s plans to reduce life-cycle sustainment costs. 
In performing our work, we obtained financial data and programmatic information from, and interviewed officials at, the F-35 Joint Program Office, Arlington, Virginia; Lockheed Martin Aeronautics, Fort Worth, Texas; the Defense Contract Management Agency, Fort Worth, Texas; and the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Director of Operational Test and Evaluation, and the Cost Assessment and Program Evaluation office, all organizations within the Office of the Secretary of Defense in Washington, D.C. We assessed the reliability of DOD and contractor data by reviewing existing information about the data and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from August 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff members made key contributions to this report: Bruce Fairbairn, Assistant Director; Marvin Bonner; Dr. W. Kendal Roberts; Erin Stockdale; Jungin Park; Megan Porter; and John Lack. Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. Washington, D.C.: March 29, 2012. Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012. 
Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD’s Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011. Tactical Aircraft: Air Force Fighter Force Structure Reports Generally Addressed Congressional Mandates, but Reflected Dated Plans and Guidance, and Limited Analyses. GAO-11-323R. Washington, D.C.: February 24, 2011. Defense Management: DOD Needs to Monitor and Assess Corrective Actions Resulting from Its Corrosion Study of the F-35 Joint Strike Fighter. GAO-11-171R. Washington, D.C.: December 16, 2010. Joint Strike Fighter: Assessment of DOD’s Funding Projection for the F136 Alternate Engine. GAO-10-1020R. Washington, D.C.: September 15, 2010. Tactical Aircraft: DOD’s Ability to Meet Future Requirements is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010. Joint Strike Fighter: Significant Challenges and Decisions Ahead. GAO-10-478T. Washington, D.C.: March 24, 2010. Joint Strike Fighter: Additional Costs and Delays Risk Not Meeting Warfighter Requirements on Time. GAO-10-382. Washington, D.C.: March 19, 2010. Joint Strike Fighter: Significant Challenges Remain as DOD Restructures Program. GAO-10-520T. Washington, D.C.: March 11, 2010. Joint Strike Fighter: Strong Risk Management Essential as Program Enters Most Challenging Phase. GAO-09-711T. 
Washington, D.C.: May 20, 2009. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009. Joint Strike Fighter: Accelerating Procurement before Completing Development Increases the Government’s Financial Risk. GAO-09-303. Washington, D.C.: March 12, 2009. Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008. Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008. Joint Strike Fighter: Recent Decisions by DOD Add to Program Risks. GAO-08-388. Washington, D.C.: March 11, 2008. Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007. Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007. Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007. Tactical Aircraft: DOD’s Cancellation of the Joint Strike Fighter Alternate Engine Program Was Not Based on a Comprehensive Analysis. GAO-06-717R. Washington, D.C.: May 22, 2006. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD’s Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Actions Needed to Get Better Results on Weapons Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006. Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006. Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006. 
Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006. Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005. Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. Washington, D.C.: March 15, 2005. | The F-35 Lightning II, the Joint Strike Fighter, is DOD's most costly and ambitious aircraft acquisition. The program is developing and fielding three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The F-35 is critical to long-term recapitalization plans as it is intended to replace hundreds of existing aircraft. This will require a long-term sustained funding commitment. Total U.S. investment is nearing $400 billion to develop and procure 2,457 aircraft through 2037. Fifty-two aircraft have been delivered through 2012. The F-35 program has been extensively restructured over the last 3 years to address prior cost, schedule, and performance problems. GAO's prior reviews of the F-35 made numerous recommendations to improve outcomes, such as increasing test resources and reducing annual procurement quantities. This report, prepared in response to the National Defense Authorization Act for 2010, addresses (1) F-35 program performance during 2012, including testing, technical risks, and software; (2) manufacturing performance indicators, production results, and design changes; and (3) acquisition and sustainment costs going forward. GAO's work included analyses of a wide range of program documents and interviews with defense and contractor officials. The F-35 program achieved 7 of 10 key management objectives for 2012 and made substantial progress on one other. Two objectives on aircraft deliveries and a corrective management plan were not met. 
Also in 2012, the program conducted more developmental flight tests than planned and made considerable progress in addressing critical technical risks, such as the helmet-mounted display. With about one-third of development flight testing completed, much testing remains to demonstrate and verify F-35 performance. Software management practices are improved, but with significant challenges ahead as software integration and testing continue to lag behind plans. Manufacturing and supply processes are also improving--indicators such as factory throughput, labor efficiency, and quality measures are all positive. While initial F-35 production overran target costs and delivered aircraft late, the latest data show labor hours decreasing and deliveries accelerating. The program is working through the continuing effects of its concurrent acquisition strategy that overlapped testing and manufacturing activities. For example, the program is continuing to incur substantial costs for rework to fix deficiencies discovered in testing, but the amount of rework needed on each aircraft is dropping. Going forward, ensuring affordability--the ability to acquire aircraft in quantity and to sustain them over the life cycle--is of paramount concern. With more austere budgets looming, F-35 acquisition funding requirements average $12.6 billion annually through 2037. The new F-35 acquisition baseline incorporates the Department of Defense's (DOD) positive restructuring actions taken since 2010, including more time and funding for development and deferred procurement of more than 400 aircraft to future years. These actions place the F-35 program on firmer footing, although aircraft will cost more and deliveries to warfighters will take longer. The program continues to incur financial risk from its plan to procure 289 aircraft for $57.8 billion before completing development flight testing. 
Meanwhile, the services are spending about $8 billion to extend the life of existing aircraft and to buy new ones to mitigate shortfalls due to F-35 delays. GAO is not making recommendations in this report. DOD's restructuring of the F-35 program and other actions are responsive to many prior recommendations. DOD agreed with GAO's report findings and conclusions. |
Emerging infectious diseases pose a growing health threat to people in this country and around the world. The causes of this increase are complex and often difficult to anticipate. For example, increased development, deforestation, and other environmental changes have brought people into contact with animals or insects that harbor diseases only rarely encountered before. Not all emerging infections are unfamiliar diseases, however. Some pathogens have developed resistance to the antibiotics that brought them under control just a generation ago. Moreover, the threefold increase in international travel during the past 20 years and greater importation of fresh foods across national borders allow infectious diseases to spread rapidly. As these diseases travel, they interact with growing numbers of people who have weakened immunity, such as transplant recipients, elderly persons, patients treated with radiation, and those infected with HIV/AIDS. With the introduction of antibiotics in the 1940s and the development of vaccines for diseases like polio, there was widespread optimism that infectious diseases could be eliminated completely. As a result, public health officials shifted some monitoring efforts to other health problems, such as chronic diseases. By 1986, CDC had discontinued surveillance of drug-resistance trends in tuberculosis. The resurgence of tuberculosis and the appearance of HIV/AIDS thus caught the nation’s public health system off guard. Today, infectious diseases account for considerable health care costs and lost productivity. In the United States, an estimated one-fourth of all doctor visits are for infectious diseases. Foodborne illnesses, some of which were unrecognized 20 years ago, are estimated to cause up to 33 million cases and 9,000 deaths annually and to cost as much as $22 billion a year. 
The number of pathogens resistant to one or more previously effective antibiotics is increasing rapidly, adding to health care costs and threatening to return the nation to the pre-antibiotic era. Antibiotic resistance limits effective treatment options, with potentially fatal results. Resistant infections that people acquire during hospitalizations are estimated to cost as much as $4 billion and cause 19,000 deaths a year. Surveillance is public health officials’ most important tool for detecting and monitoring both existing and emerging infectious diseases. Without an adequate surveillance system, local, state, and federal officials cannot know the true scope of existing health problems and may not recognize new diseases until many people have been affected. They rely on surveillance data to focus their staff and dollar resources on preventing and controlling the diseases that most threaten populations within their jurisdictions. Health officials also use surveillance data to monitor and evaluate the effectiveness of prevention and control programs. Because known diseases can become emerging infections by changing in unanticipated ways, the methods for detecting emerging infections are the same ones used to monitor infectious diseases generally. These methods can be characterized as passive or active. When using passive surveillance methods, public health officials notify laboratory and hospital staff, physicians, and other relevant sources about disease data they should report. These sources in turn must take the initiative to provide data to the health department, where officials analyze and interpret the information as it comes in. Under active surveillance, public health officials contact people directly to gather data. For example, state or local health department staff could call commercial laboratories each week to ask if any tests conducted for cryptosporidiosis yielded positive results. 
Active surveillance produces more complete information than passive surveillance, but it takes more time and costs more. Infectious diseases surveillance in the United States depends largely on passive methods of collecting disease reports and laboratory test results. Consequently, the surveillance network relies on the participation of health care providers, private laboratories, and state and local health departments across the nation. States have principal responsibility for protecting the public’s health and, therefore, take the lead role in conducting surveillance. Each state decides for itself which diseases will be reported to its health department, where reports should be submitted, and which information it will then pass on to CDC. The surveillance process usually begins when a person with a reportable disease seeks care. To help determine the cause of the patient’s illness, a physician may rely on a laboratory test, which could be performed in the physician’s own office, a hospital, an independent clinical laboratory, or a public health laboratory. State and local health departments that provide clinical services also generate laboratory test results for infectious diseases surveillance. Local health departments are often the first to receive the reports of infectious diseases generated by physicians, hospitals, and others. Health department staff collect these reports, check them for completeness, contact health care professionals to obtain missing information or clarify unclear responses, and forward them to state health agencies. Staff resources devoted to disease reporting vary with the overall size and mission of the health department. Since nearly half of local health agencies have jurisdiction over a population of fewer than 25,000, many cannot support a large, specialized staff to work on disease reporting. 
In state health departments, epidemiologists analyze data collected through the disease reporting network, decide when and how to supplement passive reporting with active surveillance methods, conduct outbreak and other disease investigations, and design and evaluate disease prevention and control efforts. They also transmit state data to CDC, providing routine reporting on selected diseases. Many state epidemiologists and laboratory directors provide the medical community with information obtained through surveillance, such as rates of disease incidence and prevailing patterns of antimicrobial resistance. Federal participation in the infectious diseases surveillance network focuses on CDC activities—particularly those of the National Center for Infectious Diseases (NCID), which operates CDC’s infectious diseases laboratories. CDC analyzes the data furnished by states to (1) monitor national health trends, (2) formulate and implement prevention strategies, and (3) evaluate state and federal disease prevention efforts. CDC routinely provides public health officials, medical personnel, and others information on disease trends and analyses of outbreaks. Through NCID and other units—such as the National Immunization Program and the National Center for HIV, Sexually Transmitted Diseases, and Tuberculosis Prevention (NCHSTP)—CDC offers an array of scientific and financial support for state infectious diseases surveillance, prevention, and control programs. NCID officials said that most of their 1,100 staff and $186 million budget in fiscal year 1998 were devoted to assisting state infectious diseases efforts. 
For example, CDC provides testing services and consultation not available at the state level; training on infectious diseases and laboratory topics, such as testing methods and outbreak investigations; and grants to help states conduct diseases surveillance. The Epidemiology Program Office provides training and technical assistance related to software for disease reporting and oversees data integration efforts. Public health and private laboratories are a vital part of the surveillance network because only laboratory results can definitively identify pathogens. In addition, they often are an essential complement to a physician’s clinical impressions. According to public health officials, the nation’s 158,000 laboratories are consistent sources of passively reported information for infectious diseases surveillance. Independent commercial and hospital laboratories may also share with public health agencies information gathered through their private surveillance efforts, such as studies of patterns of antibiotic resistance or the spread of diseases within a hospital. Every state has at least one state public health laboratory to support its infectious diseases surveillance activities and other public health programs. Some states operate one or more regional laboratories to serve different parts of the state. In five states—Iowa, Nebraska, Nevada, Ohio, and Wisconsin—academic institutions, such as university medical schools, provide public health laboratory testing. State laboratories conduct testing for routine surveillance or as part of special clinical or epidemiologic studies. These laboratories provide diagnostic tests for rare or unusual pathogens that are not always available in commercial laboratories or tests for more common pathogens that use new technology still needing controlled evaluation. State public health laboratories provide specialized testing for low-incidence, high-risk diseases, such as tuberculosis and botulism. 
Testing they provide during an outbreak contributes greatly to tracing the spread of the outbreak, identifying the source, and developing appropriate control measures. Epidemiologists rely on state public health laboratories to document trends and identify events that may indicate an emerging problem. Many state laboratories also provide licensing and quality assurance oversight of commercial laboratories. State public health laboratories are increasingly able to use new advanced molecular technology to identify pathogens at the molecular level. Often, these tests provide information that is used not to diagnose and treat individual patients but to tell epidemiologists whether cases of illness are caused by the same strain of pathogen—information that is not available from clinical records or other conventional epidemiologic methods. Public health officials have already used this type of laboratory information to identify the movement of diseases through a community in ways that would not have been possible 5 years ago. For example, staff in Minnesota’s laboratory use a molecular technology called pulsed field gel electrophoresis (PFGE) to test “isolates” (isolated quantities of a pathogen) of E. coli O157:H7 that laboratories in the state must submit. From 1994 to 1995, the resulting DNA fingerprint patterns identified 10 outbreaks—almost half of which would not have been identified by traditional surveillance methods. Using the laboratory results, epidemiologists were able to find the sources of contamination and eliminate them, thus preventing additional infections. CDC laboratories provide highly specialized tests not always available in state public health or commercial laboratories and assist states with testing during outbreaks. The staff at CDC’s laboratories also have a broad range of expertise identifying pathogens. 
These laboratories help diagnose life-threatening, unusual, or exotic infectious diseases; provide information on cases of infectious diseases for which satisfactory tests are not widely or commercially available; and confirm public or private laboratory test results that were atypical or difficult to interpret. According to NCID officials, CDC laboratories provide testing services and consultations on conducting tests or interpreting results to every state. CDC also conducts research to develop improved diagnostic methods and trains state laboratory staff to use them. While state surveillance and laboratory testing programs are extensive, not all include every significant emerging infectious disease, leaving gaps in the nation’s surveillance network. Each state decides which diseases it includes in its surveillance program and which diseases it routinely reports to CDC. Many state epidemiologists believe their surveillance programs need to add or focus more attention on important infectious diseases, including hepatitis C and antibiotic-resistant diseases. Our survey found that almost all states conduct surveillance of E. coli O157:H7, tuberculosis, pertussis, and hepatitis C, but fewer collect information on cryptosporidiosis and penicillin-resistant S. pneumoniae. State public health laboratories commonly perform tests to support state surveillance programs for E. coli O157:H7, tuberculosis, pertussis, and cryptosporidiosis. Most, however, do not test for hepatitis C and penicillin-resistant S. pneumoniae. Slightly more than half the state laboratories use PFGE, which state and CDC officials believe could be valuable to most or all states’ diseases surveillance efforts. Few states have followed CDC’s suggestion to improve surveillance by requiring medical providers and laboratories to routinely submit specimens for testing in state public health laboratories. 
Each year, the Council of State and Territorial Epidemiologists (CSTE), in consultation with CDC, reviews the list of infectious diseases that are “nationally notifiable”—that is, important enough for the nation as a whole to merit routine reporting to CDC. The list currently includes 52 infectious diseases. States are under no obligation to adopt the nationally notifiable diseases for their own surveillance programs, and state reporting to CDC is voluntary. A 1997 CSTE survey of state health departments found that 87 percent of states included at least 80 percent of the 52 nationally notifiable diseases in their surveillance programs, and about one-third of states included over 90 percent. Lists of state reportable diseases vary considerably, partly because of differences in the extent to which diseases occur in different regions of the country. Of the six diseases covered by our survey, nearly all the states include at least four in their diseases surveillance—most commonly tuberculosis, E. coli O157:H7, pertussis, and hepatitis C. A slightly smaller number of states include cryptosporidiosis in their surveillance programs. Penicillin-resistant S. pneumoniae was covered least often, with about two-thirds of the states including it. For all of the diseases except penicillin-resistant S. pneumoniae, most states require health care providers, laboratories, and others to submit disease reports to public health officials. These reports contain information such as demographic characteristics of the ill person, the date disease symptoms appeared, and the suspected or confirmed diagnosis. (See fig. 1.) Over three-quarters (44) of the responding epidemiologists told us that their surveillance programs either leave out or do not focus sufficient attention on important infectious diseases. Antibiotic-resistant diseases, including penicillin-resistant S. pneumoniae, and hepatitis C were among the diseases they cited most often as deserving greater attention. 
State laboratory testing to support state surveillance of the six emerging infections in our survey varies across the nation. Testing is most common for four of the six: tuberculosis, E. coli O157:H7, pertussis, and cryptosporidiosis (see fig. 1). In 43 of the 54 state responses we analyzed, the state public health laboratory conducts testing for four or more of the diseases included in its state’s surveillance program. Testing to support state surveillance of hepatitis C and penicillin-resistant S. pneumoniae occurs in fewer than half of the states. State and CDC officials believe that most, and possibly all, states should have PFGE technology, which can be used to study many diseases and greatly improves the ability to detect outbreaks. However, for the diseases we asked about in our survey, state public health laboratories are less likely to use advanced molecular technology than more conventional techniques. For example, slightly more than half the state laboratories reported using PFGE technology to support state surveillance efforts. Twenty-nine of the 54 laboratory directors responding to our survey reported using PFGE to support E. coli O157:H7 surveillance, and nine of these laboratories also use it for pertussis surveillance. If a state laboratory provided testing in support of state-level surveillance of a specific disease, we asked directors to assess the adequacy of their testing equipment for that disease. Laboratory directors’ views about the adequacy of the testing equipment they use varied somewhat by disease but were generally positive. Eighty percent or more of the laboratory directors rated their equipment as generally or very adequate for four diseases—tuberculosis, E. coli O157:H7, cryptosporidiosis, and hepatitis C. Percentages were slightly lower for pertussis (69 percent) and penicillin-resistant S. pneumoniae (68 percent). 
State epidemiologists’ views about the adequacy of the testing information provided by state laboratories vary considerably by disease. More than 94 percent rated their state laboratory as very or generally adequate to provide testing information for tuberculosis and E. coli O157:H7. More than 70 percent said their state laboratory is generally or very adequate for generating information on pertussis and cryptosporidiosis. In contrast, only about one-third of epidemiologists said the information generated by their state laboratory for hepatitis C (32 percent) and penicillin-resistant S. pneumoniae (37 percent) is generally or very adequate. We also found that many states do not require other public and private laboratories or medical providers to submit to the state public health laboratory specimens or isolates from persons with certain diseases. CDC has urged states to consider developing such laws because gathering specimens from across the state helps ensure that the state’s surveillance data include a diverse sample of the state’s population. Such action by states also contributes to more comprehensive national data. In all, 29 states require specimens for one or more of the six diseases in our survey: 5 states require specimens for four diseases, 4 states require specimens for three diseases, 9 states for two, and 11 for one disease. Specimens of tuberculosis and E. coli O157:H7 are required most frequently. As part of our survey and field interviews, we asked state officials to identify the problems they considered most significant in conducting surveillance of emerging infectious diseases. The problems they cited fall principally into two categories: staffing and information sharing. State epidemiologists reported that staffing constraints prevent them from undertaking surveillance of diseases they consider important. 
Laboratory directors told us they do not always have enough staff to conduct tests needed for surveillance; furthermore, their staff need training to remain current with technological advances. Epidemiologists and laboratory officials both said that public health officials often lack either basic computer equipment or integrated data systems that would allow them to rapidly share surveillance-related information with public and private partners. Public health officials reported that the nation’s infectious diseases surveillance system is basically sound but could improve its ability to detect emerging threats. Most state officials believe they need to expand their infectious diseases surveillance programs. However, both state laboratory directors and epidemiologists said that such expansion has been constrained by staffing and training limitations. Most of the 44 epidemiologists who reported that they need to expand coverage of important infectious diseases said insufficient staff and funding resources prevent them from taking this action. Some noted that they need more and better trained staff just to do a better job on diseases already included in their programs. We found considerable variability among states in laboratory and epidemiology staffing per 1 million population. In total, we found that during fiscal year 1997, states devoted a median of 8 staff years per 1 million population to laboratory testing of infectious diseases. Laboratory staff year medians for individual types of testing ranged from 0.4 for foodborne pathogens to 2.4 for all other infectious diseases not specifically listed in table 1. The median for total epidemiology staff years per 1 million population was 14; the range was from 0.1 for foodborne pathogens to 5 for HIV/AIDS. (See table 1.) The majority of state laboratory directors indicated that their staffing resources are generally adequate to generate test results for the diseases in our study. 
For each of the four diseases that state laboratories most commonly support, more than 75 percent of directors rated their staff as generally or very adequate to perform the tests. Among the smaller number of state laboratories that conduct tests to support surveillance of hepatitis C and penicillin-resistant S. pneumoniae, a smaller percentage of laboratory directors considered their staff resources at least adequate (68 percent and 58 percent, respectively). Some state laboratory and epidemiology officials told us that staffing constraints prevent them from making full use of testing capacity. For example, the laboratory director in a state that had acquired PFGE technology cited lack of staff time as one reason for not routinely using PFGE in surveillance of E. coli O157:H7. As a result, he said, the incidence of E. coli O157:H7 in his state is probably understated. If resources were available, he would also like laboratory staff to test pertussis specimens collected during a recent outbreak to determine whether the increase in reported cases was a true outbreak or the result of increased awareness—and reporting—of the disease following the death of a child. Thirty-six state laboratory directors reported having vacancies during the past year and said the vacancies had negatively affected their laboratory’s ability to support their state’s infectious diseases surveillance activities. Nine rated the impact as great or significant. Administrative and financial constraints, such as hiring freezes or budget reductions, were most often responsible for the vacancies. Laboratory officials noted that advances in scientific knowledge and the proliferation of molecular testing methods have created a need for training to update the skills of current staff. They reported that such training is often either unavailable or inaccessible because of funding or administrative constraints. 
For example, several state officials said that in reducing costs, training budgets are often cut first. In other states, staff are subject to per capita limits on training or travel expenses. Therefore, if CDC or another source provided additional funding, these funds could not be used. For health crises that need an immediate response—as when a serious and highly contagious disease appears in a school or among restaurant staff—rapid sharing of surveillance information is critical. Public health officials told us, however, that many state and local health departments do not have the basic equipment to efficiently share information across the surveillance network. Computers and other equipment, such as answering or fax machines, that can shorten the process of sharing surveillance information from weeks to a day or less are not always available. Our survey responses indicate that state laboratory directors use electronic communication systems much less often than state epidemiologists use them. Although about three-quarters of responding state laboratory directors use electronic systems to communicate within their laboratories, they do not frequently use electronic systems to communicate with others. Almost 40 percent of laboratory directors reported using computerized systems to little or no extent for receiving surveillance-related data, and 21 percent use them very little for transmitting data. While state epidemiologists use electronic systems more than laboratory directors, they also use them less commonly to receive information (42 percent) than to report it (62 percent). One reason for the limited use of electronic systems may be the lack of equipment. A 1996 CDC survey found that, on average, about 20 percent of staff in most state health agencies did not have access to desktop computers that were adequate for sharing information rapidly. 
Forty percent of local health officials responding to a 1996 survey conducted by the National Association of City and County Health Officials said they lacked such equipment. State and local health officials most often attributed the lack of computer equipment and integrated data processing and management systems to insufficient funding. The absence of equipment means some tasks that could be automated must be done by hand—in some cases even after the data have already been processed in electronic form. For example, representatives from two large, multistate private clinical laboratories told us that data stored electronically in their information systems had to be converted to paper so that they could be reported to local health departments. In one state we visited, a local health department mails data stored on disk to the state health agency because it lacks the equipment to transfer the data electronically. Even with adequate computer equipment, the difficulty of creating integrated information systems can be formidable. Not only does technology change rapidly, but public health data are currently stored in thousands of places, including the record and information systems of public health agencies and health care institutions, individual case files, and data files of surveys and surveillance systems. These data are in isolated locations that have differing hardware and software structures and considerable variation in how the data are coded, particularly for laboratory test results. CDC operates over 100 data systems to monitor over 200 health events, such as specific infectious diseases. Many of these systems collect data from state surveillance programs. This patchwork of data systems arose, in part, to meet CDC and state needs for more detailed information for particular diseases than was usually reported. 
For example, while information collected to determine incidence rates of many nationally notifiable diseases consists of minimal geographic and demographic data, the information collected to determine incidence rates of tuberculosis includes information on personal behavior, the presence of other diseases, and stays in institutional settings, as well as geographic and demographic data. The additional information collected on tuberculosis also helps guide prevention and control strategies. Public health officials told us that the multitude of databases and data systems, software, and reporting mechanisms burdens staff at state and local health agencies and leads to duplication of effort when staff must enter the same data into multiple systems that do not communicate with one another. Furthermore, the lack of integrated data management systems can hinder laboratory and epidemiologic efforts to control outbreaks. For example, in 1993 the lack of integrated systems impeded efforts to control the hantavirus outbreak in the Southwest. Data were locked into separate databases that could not be analyzed or merged with others, requiring public health investigators to analyze individual paper printouts. State officials also raised concerns about a lack of complete data for surveillance and the increased reliance on fees to fund state laboratories, which they believe undermine their infectious diseases surveillance efforts. Public health officials and experts acknowledge that, even when states require reporting, the completeness of data reported varies by disease and type of provider. As might be expected, reporting of severe and life-threatening diseases is more complete than reporting of mild diseases. However, when mild diseases are not reported, outbreaks affecting a large number of people may go unnoticed until deaths occur among people at higher than normal risk. 
In addition, reporting by practitioners in frequent contact with infectious diseases, such as family practitioners, is more complete than reporting by those who are not, such as surgeons. Although surveillance need not be complete to be useful, underreporting can adversely affect public health efforts by leading to erroneous conclusions about trends in incidence, risk factors for contracting a disease, appropriate prevention and control measures, and treatment effectiveness. Completeness of reporting is a concern for the surveillance of illnesses that can produce mild symptoms, such as diarrheal illnesses, which include many foodborne and waterborne conditions. Reported cases of some illnesses represent the tip of the iceberg, at best. A recent CDC-sponsored study estimated that 340 million annual episodes of acute diarrheal illness occurred in the United States, but only 7 percent of people who were ill sought treatment. The study further estimated that physicians requested laboratory testing of a stool culture for 22 percent of those patients who sought treatment, which produced about 6 million test results that could be reported. In cases of mild diarrheal illness, physicians may not request laboratory tests to identify the pathogen because patients with these diseases can get better without treatment or effective treatments do not exist. Public health officials expressed varying views about how managed care growth and the consolidation of the laboratory industry might affect the completeness of surveillance data. Some public health officials and physicians believe that managed care—with its emphasis on controlling costs—could lead doctors to order fewer diagnostic tests, particularly those not needed for treatment decisions. Also, to the extent that managed care organizations less frequently use specialists, results from specialized tests they employ would not be generated. 
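The study’s estimates cited above can be checked with simple arithmetic; the following sketch uses the study’s percentages, while the variable names are illustrative:

```python
# Back-of-the-envelope check of the CDC-sponsored diarrheal illness
# estimates cited above (figures are the study's; names are ours).
annual_episodes = 340_000_000   # estimated acute diarrheal illness episodes per year
sought_treatment = 0.07         # share of ill people who sought treatment
cultures_requested = 0.22       # share of treated patients given a stool culture

reportable_results = annual_episodes * sought_treatment * cultures_requested
print(f"{reportable_results:,.0f} test results that could be reported")
# Roughly 5 to 6 million results, out of 340 million episodes: the vast
# majority of cases never generate a laboratory result that passive
# surveillance could capture.
```

The product works out to roughly 5 million reportable results, consistent with the study’s figure of about 6 million, and illustrates why reported cases of mild illnesses represent only the tip of the iceberg.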
Concerns about laboratory consolidation—particularly when specimens are shipped to central testing facilities in other states—stem from fears that out-of-state testing centers will not report test results needed for surveillance, possibly because they are unaware of state requirements specifying what information to report and where to direct it. In two states we visited, representatives of large multistate independent laboratories said their policy is to report test results in accordance with state requirements. One representative provided us with documentation showing the various reporting requirements of states in one region served by the laboratory. Each of these laboratories is participating in electronic laboratory reporting pilot programs in different states. Other CDC and state public health officials believe that managed care organizations and concentrated ownership of laboratories could provide information that is potentially more consistent, complete, and reliable than what public health officials now routinely obtain through passive reporting. They argue that because information on a large number of patients is concentrated in a small number of organizations, the number of contacts for active surveillance projects is smaller and more manageable and information can be analyzed from large databases. Moreover, they add, these organizations are likely to collect and store laboratory data electronically, which could speed disease reporting. Our survey asked epidemiologists whether they or other agencies in their states had evaluated the impacts of managed care and laboratory consolidation on surveillance data; we could identify no systematic evaluations on this issue. Similarly, researchers who conducted a survey for HHS did not find data that address concerns about the impact of managed care. 
Another concern state officials frequently mentioned is an increasing reliance on fees to fund the operations of state public health laboratories. Over 30 laboratory directors responding to our survey said their budgets were partly supported by fees for genetic screening and tests for regulatory and licensure programs. State officials told us that an imbalance of fees in relation to appropriated funding shifts the focus of laboratory operations away from testing services beneficial to the entire community and toward services that can be successfully marketed—a shift that they believe could jeopardize fulfilling their public health mission. One state laboratory director said that over the past 15 years, state funding has declined by more than half and fees are expected to cover the difference. He believes that if the laboratory loses contracts for genetic or blood lead-level testing, he will have to reduce other testing, such as for sexually transmitted diseases or CDC’s influenza surveillance. Although many state officials are concerned about their staffing and technology resources, public health officials have not developed a consensus definition of the minimum capabilities that state and local health departments need to conduct infectious diseases surveillance. For example, according to CDC and state health officials, there are no standards for the types of tests state public health laboratories should be able to perform; nor are there widely accepted standards for the epidemiological capabilities state public health departments need. 
Public health officials have identified a number of elements that might be included in a consensus definition, such as the number and qualifications of laboratory and epidemiology staff; the pathogens that each state laboratory should be able to identify and, where relevant, test for antibiotic resistance; specialized laboratory and epidemiology capability that should be available regionally; laboratory and information-sharing technology each state should have; and support services that CDC should provide. Recognizing this lack of guidance, CSTE, the Association of Public Health Laboratories (APHL), and CDC have begun collaborating to define the staff and equipment components of a national surveillance system for infectious diseases and other conditions. Their work is to include agreements about the laboratory and epidemiology resources needed to conduct surveillance, diseases that should be under surveillance, and the information systems needed to share surveillance data. One goal of reaching this consensus would be to give state and local health agencies the basis for setting priorities for their surveillance efforts and determining the resources needed to implement them. CDC provides state and local health departments with a wide range of technical, financial, and staff resources to help maintain or improve their ability to detect and respond to disease threats. Many state laboratory directors and epidemiologists said this assistance has been essential to their ability to conduct infectious diseases surveillance and to take advantage of new laboratory technology. However, a small number of laboratory directors and epidemiologists believe CDC’s assistance has not added much to their ability to conduct surveillance of emerging infections, and many state officials indicated that further improvements are needed, particularly in the area of information-sharing systems. 
CDC’s various units, particularly NCID, provide an array of technical and financial support for state infectious diseases surveillance programs. In general, this support falls into the following six areas: testing and consulting, training, grant assistance, funding for regional laboratories, staffing assistance, and information-sharing systems. Laboratory testing and consultation. CDC staff and laboratories support state infectious diseases surveillance efforts with technical assistance and testing services that may not be available at the state level. CDC staff provide consultation services on such matters as epidemiological methods and analysis, laboratory techniques, and interpretation of laboratory results. Almost all of the state laboratory directors and epidemiologists responding to our survey said they use CDC’s laboratory testing services and frequently consult with CDC staff. Training. CDC provides public health and medical personnel with training on a wide range of topics. The training is offered through such means as interactive audio- or video-conferences, computer-assisted instruction, seminars, and hands-on workshops. Since 1989, CDC has offered laboratory training through a collaboration with APHL. An APHL and CDC assessment identified the need for training on current advances in food microbiology, fungal and viral infections, rabies, tuberculosis, and new and emerging pathogens. To meet these needs, CDC developed a series of courses incorporating hands-on experience, offered in various locations around the country. State laboratory directors and epidemiologists indicated they use CDC training extensively, and most said they participated in CDC-sponsored training in 1997. Grant programs. CDC’s various grant and staffing assistance programs provide at least some support to the infectious diseases surveillance programs of all states. 
In fiscal year 1998, NCID distributed $31.2 million of its $185.7 million budget to state and local health agencies for infectious diseases programs. NCID supports three major grant programs that aid state surveillance programs for emerging infectious diseases (see table 2). Together these three grant programs provided about $20 million to state and local health departments in fiscal year 1997. EIP and ELC grants, designed to strengthen and enhance state surveillance abilities, are components of CDC’s overall plan to address emerging infectious diseases. Funding for regional laboratory networks. To help with both state-specific and nationwide control and prevention efforts, CDC has sponsored development of regional laboratory networks that give states access to molecular testing services that may not be available in their own state laboratory. The two main laboratory networks are PulseNet, which currently focuses on E. coli O157:H7, and the Tuberculosis Genotyping Network (see table 3). Staffing assistance. CDC provides a small number of staff resources to assist state infectious diseases programs through 2-year Epidemic Intelligence Service (EIS) placements and fellowships in state or local health departments or laboratories. About one-fourth of the 60 to 80 EIS participants selected each year work in state and local health departments. Additionally, by February 1998, CDC had trained 18 laboratory fellows to work in state, local, and federal public health laboratories through its Emerging Infectious Diseases Laboratory Fellow Program, a collaborative effort with APHL; CDC plans to make 9 emerging diseases laboratory fellowships available through APHL and the CDC Foundation. One goal of the fellowships is to strengthen the relationship of public health laboratories to infectious diseases and drug-resistance surveillance, prevention, and control efforts. Information sharing. 
Over the past several decades, CDC has developed and made available to states several general and disease-specific information management and reporting programs. Virtually all states use two of these programs to report data on some infectious diseases to CDC—the Public Health Laboratory Information System (PHLIS) and the National Electronic Telecommunications System for Surveillance (NETSS). PHLIS is used primarily by laboratories; NETSS is used primarily by epidemiology programs. Our surveys showed that, overall, state laboratory directors and epidemiologists highly value the support CDC provides for their surveillance efforts. Usage and satisfaction levels were highest in the areas of testing and consultation, training, and grant support. The area most often identified as needing improvement was the development of information-sharing systems. Many state laboratory directors and epidemiologists told us that CDC’s testing, consultation, and training services are critical to their surveillance efforts. In all three areas of assistance, more than half of those responding to our survey indicated that the services greatly or significantly improved their state’s ability to conduct surveillance (see fig. 2). According to officials who spoke with us, CDC’s testing for unusual or exotic pathogens and the ability to consult with experienced CDC staff are important, particularly for investigating cases of unusual diseases. However, about 15 percent of survey respondents said CDC’s testing services made only modest improvements in their state’s surveillance capacity. Over 70 percent of epidemiologists responding to our survey said that knowledgeable staff at CDC are easy to locate when they need assistance, but many noted that help with matters involving more than one CDC unit is very difficult to obtain. Many state officials who spoke with us thought that this problem arose because staff in different units do not seem to communicate well with each other. 
One official described CDC’s units as separate towers that do not interact. A number of state officials commented that CDC provides tests and consultation very promptly when people are at risk—for example, during outbreaks of life-threatening diseases—but less quickly in other circumstances. To provide more timely consultation, CDC has developed an on-line image-sharing ability that allows CDC staff and health professionals in remote locations to view an organism under a microscope at the same time. In one state, staff at CDC and a surgeon in another state used this capacity during an operation to identify a parasite as the cause of the patient’s eye problem, allowing the surgeon to rule out cancer as a diagnosis and eliminating the need to remove the patient’s eye. Some state officials and survey respondents said that in less urgent circumstances, CDC’s test results were often not returned quickly enough to be useful to physicians or, in some cases, to epidemiologists. For example, state officials have waited up to a year for CDC to return test results on unusual organisms, making it difficult—if not impossible—to recognize any subsequent encounters with these organisms. Some of these officials suggested that competing priorities at CDC often prevented the timely return of test results in the absence of immediate need. Training is another CDC service that state officials believe is important. As figure 2 shows, the percentage of respondents indicating that training greatly or significantly improved their ability to conduct surveillance of emerging infections was even higher than for testing and consultation. Participant evaluations of recent courses offered in collaboration with APHL were generally consistent with our survey results. These evaluations indicated that the courses provided information the participants needed on the most current technologies available. 
However, about 11 percent of our survey respondents did not believe that the training they received appreciably improved their surveillance ability. Although state officials generally valued the training CDC provides, they also said more training is needed, especially hands-on, skill-based training in new laboratory techniques. Laboratory officials in particular said that the use of distance learning through audio- or video-conferences—as opposed to hands-on workshops in CDC laboratories—diminished opportunities to develop close collaboration between state and CDC laboratory staff. According to CDC officials, the use of distance learning became desirable when downsizing of staff in state public health laboratories and the costs of sending staff to Atlanta led to declining attendance at courses at CDC headquarters. State officials also cited a need for training and technical assistance in information-sharing systems. Most state officials responding to our survey reported that funding through CDC’s disease-specific grants and epidemiology and laboratory capacity grants had made great or significant improvements in their ability to conduct surveillance for emerging infectious diseases (see fig. 3). Over 70 percent of responding laboratory directors and 80 percent of responding epidemiologists—comprising more than three-quarters of all survey respondents—said disease-specific funding had greatly or significantly enhanced their state’s capacity to conduct infectious diseases surveillance. With one exception, epidemiology, laboratory, and combined capacity grants were similarly valued, with at least 68 percent of recipients saying the enhancement was great or significant. Laboratory directors reported benefitting more from grants specifically directed to laboratory or combined laboratory and epidemiology capacity than from grants specifically designed to enhance epidemiology capacity. 
Officials cited several examples in which CDC assistance was instrumental in helping states improve their surveillance and laboratory testing efforts for high-priority conditions, such as antibiotic-resistant diseases. After state laboratories began receiving funds from CDC’s tuberculosis grant program, they markedly improved their ability to rapidly identify the disease and indicate which, if any, antibiotics could be used effectively in treatment. State laboratory officials attributed this improvement to the funding and training they received from CDC. In addition to supporting such core activities as active surveillance of antibiotic-resistant conditions, four states use EIP funds to conduct active surveillance of unexplained deaths and severe illnesses in previously healthy people under age 50—a potentially critical source of information to detect new or newly emerging diseases. This project will also provide information on known infectious diseases that health care professionals are not recognizing in their patients. The epidemiologist in one of these states said that although reporting of such cases had been required for a long time, efforts to improve the completeness of the reporting and analyze the data began only after the state received CDC funds. Our survey provided one other possible indication of the effect of CDC’s assistance on state surveillance and testing for antibiotic-resistant conditions. In comparison to its funding for tuberculosis, which goes to programs in all states and selected localities, CDC funds active surveillance and testing for penicillin-resistant S. pneumoniae in only eight states. This pattern of funding parallels the pattern of testing reported by our survey respondents. Of the 54 states that reported conducting surveillance for tuberculosis, 49 have laboratories that test for antibiotic resistance. In contrast, of the 37 states that reported conducting surveillance for penicillin-resistant S.
pneumoniae, only about half have laboratories that provide testing support. Moreover, while all but one of the states require health care providers to submit tuberculosis reports to public health officials, fewer than half require reporting of penicillin-resistant S. pneumoniae. Although CDC-sponsored regional laboratory networks are intended to expand states’ access to advanced testing services, our survey responses indicate that only about half of the states have used these laboratories during the past 3 years. Among those state officials who did use the networks, views on their usefulness are generally favorable, although networks were not valued as highly as other types of assistance (see fig. 4). Of the 19 laboratory directors who used the services of regional laboratories, 10 reported great improvement in their surveillance capacity as a result, 6 reported moderate improvement, and the remaining 3 said improvement was minimal. Of the 21 epidemiologists who used regional laboratory services, 11 reported the services made great improvement, 5 said the improvement was moderate, and 5 said the improvement was slight. Almost two-thirds of the 33 epidemiologists and about half of the 13 laboratory directors who had hosted CDC field placements reported that the placed staff had greatly or significantly improved their program’s capacity to conduct surveillance. State officials we spoke with generally praised field placement programs highly because participants—who might continue their careers in federal or state government—gained hands-on experience working in state programs. An epidemiologist commented that these placements, which spanned most of the past 20 years, had been invaluable as they provided staff to supplement his state’s surveillance program. One state official, however, said that the benefits of such placements are limited because it takes almost 2 years of training for new staff to effectively assist in state programs.
According to officials who spoke with us, CDC’s information-sharing systems have limited flexibility for adapting to state program needs—one reason many states have developed their own information management systems to capture more or different data. State and federal officials told us that NETSS and PHLIS often cannot share data for reporting or analysis with each other or with state- or other CDC-developed systems. CDC officials responsible for these programs said that the most recent versions can share data more readily with other systems but that the lack of training in how to use the programs and high staff turnover at state agencies may limit the number of state staff and officials able to use the full range of program capabilities. NETSS supports the collection and management of information such as patient demographics and residence, the suspected or confirmed diagnosis, and the date of disease onset. PHLIS contains more definitive information on the pathogen provided by the laboratory test. Both programs also offer optional disease-specific reporting modules states may use to gather additional data. When epidemiologists cannot electronically merge data from different sources, they must manually match the records to analyze disease trends and determine the relevant risk factors needed for effective prevention and control efforts. Sharing data between systems also identifies multiple records on the same case and can help epidemiologists take steps to improve reporting. Epidemiologists responding to our survey rated NETSS more highly for flexibility and overall helpfulness than laboratory directors rated PHLIS. About half (48 percent) of responding epidemiologists said NETSS was highly flexible for meeting their needs, while only about one-quarter (27 percent) of laboratory directors said the same for PHLIS.
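The manual record-matching burden described above can be sketched in a few lines. This is an illustrative sketch only: the field names, match keys, and sample records below are invented, and real linkage must also cope with typos, duplicate reports, and differing code schemes—which is why it consumes so much staff time when done by hand.

```python
# Hypothetical sketch of linking epidemiology case reports (NETSS-style)
# with laboratory results (PHLIS-style) when the two systems share no
# common case identifier. All field names and records are invented.

netss_cases = [
    {"county": "Adams", "dob": "1960-03-12", "disease": "E. coli O157:H7"},
    {"county": "Baker", "dob": "1972-11-05", "disease": "pertussis"},
]

phlis_results = [
    {"county": "Adams", "dob": "1960-03-12", "pathogen": "E. coli O157:H7"},
    {"county": "Clark", "dob": "1985-07-30", "pathogen": "S. pneumoniae"},
]

def link_records(cases, results):
    """Pair each case report with a lab result on demographics plus
    pathogen; cases with no matching result are returned separately."""
    linked, unmatched = [], []
    for case in cases:
        match = next(
            (r for r in results
             if (r["county"], r["dob"], r["pathogen"])
             == (case["county"], case["dob"], case["disease"])),
            None,
        )
        if match:
            linked.append((case, match))
        else:
            unmatched.append(case)
    return linked, unmatched

linked, unmatched = link_records(netss_cases, phlis_results)
print(len(linked), len(unmatched))  # 1 1
```

Systems that cannot exchange data force exactly this kind of reconciliation to happen by hand, record by record, for each disease.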
Fifty-eight percent of epidemiologists said NETSS greatly helped them conduct surveillance, while 22 percent said it was moderately helpful and the remaining 20 percent said it was minimally helpful. In contrast, 76 percent of laboratory directors said PHLIS was of little help, 13 percent said it was very helpful, and 11 percent said it was moderately helpful. Many epidemiologists and laboratory directors thought the system they use does not share data well with other systems. About two-thirds of the laboratory directors who use PHLIS and one-quarter of the epidemiologists who use NETSS said the systems have little to no ability to share data. Many officials we spoke with complained about a substantial drain on scarce staff time to enter and reconcile data into multiple systems, such as their own system plus one or more CDC-developed systems. One large local health department has one person working full time to enter and reconcile data for a single disease. As some of CDC’s disease-specific electronic reporting and information management systems become outdated and need to be replaced, CDC has responded to state and local requests for greater integration of reporting systems and for flexibility in the use of grant funds to build information systems. In late 1995, CDC established the Health Information and Surveillance System (HISS) Board to formulate and enact policy for integrating public health information and surveillance systems. Subcommittees of the HISS Board bring together federal and state public health officials to focus on issues such as data standards and coding schemes, legislation for data security, assessing hardware and software used by states, and identifying gaps in CDC databases. 
As of August 1998, the HISS Board or its subcommittees had identified barriers to implementing effective laboratory reporting standards and some solutions, established mechanisms to assess information needs and gaps in state and local data systems, and begun to assess ways to integrate NETSS and PHLIS. CDC provides some training and technical assistance related to NETSS and PHLIS, although state officials we interviewed said such training and assistance are in short supply. Responses to our survey suggest that CDC’s training for these two systems was less widely used and less highly valued than its technical assistance. Nearly all respondents used CDC’s technical assistance for these two programs, while two-thirds of laboratory directors and 82 percent of epidemiologists used the training. Almost half of the epidemiologists and 40 percent of the laboratory directors found the technical assistance highly valuable, but less than 30 percent of either group found the training highly valuable. Staff at two local health departments told us that no training was offered to them by state or CDC staff and the wait for technical assistance could last a month or more. State and local officials appreciated the help CDC offered but said CDC had few staff or other resources devoted to helping them use these reporting systems. CDC and the states have made progress in developing more efficient information-sharing systems through one of CDC’s grant programs. The Information Network for Public Health Officials (INPHO) is designed to foster communication between public and private partners, make information more accessible, and allow for rapid and secure exchange of data. By 1997, 14 states had begun INPHO projects. Some had combined these funds with other CDC grant moneys to build statewide networks linking state and local health departments and, in some cases, private laboratories. 
In New York, state officials developed a network that will link all local health agencies with the state health department and over 4,500 health care facilities and diagnostic laboratories. The network provides electronic mail service and access to surveillance data collected by the state. In Washington, systems for submitting information electronically reduced passive reporting time from 35 days to 1 day and gave local authorities access to health data for analysis. In addition to funding specific projects through INPHO grants, in April 1998 CDC adopted a policy that allows states to submit proposals to use disease grant funds to build integrated information systems. As of November 1998, no states had submitted proposals, although several indicated they planned to do so. This initiative involves no new funding but allows states to use money from existing grants in more flexible ways. While state officials were supportive of additional CDC efforts in this area, they also recognized that progress in developing effective networks could be affected by the actions—or lack of action—of others in the surveillance network. For example, officials in some states said autonomous local health departments may elect not to adopt or link with state-developed systems, thereby continuing some level of fragmentation among data systems regardless of efforts undertaken by CDC or others. Public health officials agree that the importance of infectious diseases surveillance cannot be overemphasized. The nation’s surveillance network is considered the first line of defense in detecting and identifying emerging infectious diseases and providing essential information for developing and assessing prevention and control efforts. Laboratories play an increasingly vital role in infectious diseases surveillance, as advances in technology continually enhance the specificity of laboratory data and give public health officials new techniques for monitoring emerging infections.
Public health officials who spoke with us said that the nation’s surveillance system is essentially sound but in need of improvement. They point to outbreaks rapidly identified and contained as visible indications of the system’s strength. Our survey results tend to support this view: surveillance of five of the six emerging infectious diseases we asked about is widespread among states, and surveillance of four of the six is supported by testing in state public health laboratories. Officials also view CDC’s support as essential and are generally very satisfied with both the types and levels of assistance CDC provides. However, our survey also revealed gaps in the infectious diseases surveillance network. Just over half of the state public health laboratories have access to molecular technology that many experts believe all states could use, and few states require the routine submission of specimens to their state laboratories for testing—a step urged by CDC. In addition, many state epidemiologists believe their surveillance programs do not sufficiently study all infectious diseases they consider important, including antibiotic-resistant conditions and hepatitis C. Both laboratory directors and epidemiologists expressed concerns about the staffing and technology resources they have for surveillance and information sharing. They were particularly frustrated by the lack of integrated information systems within CDC and the lack of integrated systems linking them with other public and private surveillance partners. CDC’s continued commitment to integrating its own data systems and to helping states and localities build integrated electronic data and communication systems could give state and local public health agencies vital assistance in carrying out their infectious diseases surveillance and reporting responsibilities. 
The lack of a consensus definition of what constitutes an adequate infectious diseases surveillance system may contribute to some of the shortcomings in the surveillance network. For example, state public health officials assert that they lack sufficient trained epidemiologic and laboratory staff to adequately study infectious diseases, as well as sufficient resources to take full advantage of advances in laboratory and information-sharing technology. Without agreement on the basic surveillance capabilities state and local health departments should have, however, it is difficult for policymakers to assess the adequacy of existing resources or to identify what new resources are needed to carry out state and local surveillance responsibilities. Moreover, public health officials make decisions about how to spend federal dollars to enhance state surveillance activities without such criteria to evaluate where investments are needed most. To improve the nation’s public health surveillance of infectious diseases and help ensure adequate public protection, we recommend that the Director of CDC lead an effort to help federal, state, and local public health officials create consensus on the core capacities needed at each level of government. The consensus should address such matters as the number and qualifications of laboratory and epidemiologic staff, laboratory and information technology, and CDC’s support of the nation’s infectious diseases surveillance system. CDC officials reviewed a draft of this report. They generally concurred with our findings and recommendation and provided technical or clarifying comments, which we incorporated as appropriate. Specifically, CDC agreed that a clearer definition of the needed core epidemiologic and laboratory capacities at the federal, state, and local levels would be useful and that integrated surveillance systems are important to comprehensive prevention programs. 
CDC noted that it is working with other HHS agencies to address these critical areas. We also provided the draft report to APHL and CSTE. APHL officials said the report was comprehensive and articulated the gaps in the current diseases surveillance system well. They also provided technical comments, which we incorporated as appropriate. CSTE officials did not provide comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of HHS, the Director of CDC, the directors of the state epidemiology programs and public health laboratories included in our survey, and other interested parties. We will make copies available to others upon request. If you or your staff have any questions, please contact me or Helene Toiv, Assistant Director, at (202) 512-7119. Other major contributors are included in appendix V. The Chairman of the Subcommittee on Public Health of the Senate Committee on Health, Education, Labor, and Pensions asked us to study the nation’s public health surveillance of emerging infectious diseases, focusing on the contribution of laboratories. This report discusses (1) the extent to which states conduct public health surveillance and laboratory testing of selected emerging infectious diseases, (2) the problems state public health officials face in gathering and using laboratory-related data in the surveillance of emerging infectious diseases, and (3) the assistance CDC provides to states for laboratory-related surveillance and the extent to which state officials consider it valuable. Although laboratories are only one part of the surveillance network, they merit attention because newly developed laboratory technology is an increasingly important means to more quickly identify pathogens and the source of outbreaks. 
We could describe laboratories’ contributions in more detail only by focusing on a small sample of diseases because the specific contribution of laboratory testing to surveillance varies with each disease. Due to the lack of a consensus definition of the types of public health laboratory testing that should occur and the lack of explicit, widely accepted standards to assess epidemiologic capacity, we were not able to assess the overall adequacy of the nation’s emerging infectious diseases surveillance efforts. We selected—with the assistance of officials from CDC, APHL, CSTE, and the American Society for Microbiology—a sample of six bacterial, viral, and parasitic pathogens that can be identified using laboratory tests and pose nationwide health threats (see table I.1). Our sample includes diseases transmitted by food and water as well as ones that had previously been controlled by the use of antibiotics and vaccines. These diseases affected up to 1.5 million people in the United States in 1996 and caused an unknown number of deaths.

Mycobacterium tuberculosis (tuberculosis): The appearance of strains resistant to one or more commonly used antibiotics threatens U.S. efforts to control the spread of tuberculosis.

Shiga-like toxin-producing E. coli (E. coli O157:H7): This deadly—often foodborne—group of E. coli first appeared in 1982. No effective treatment exists and infection can result in death or long-term disability.

Bordetella pertussis (pertussis): Pertussis is one of the nation’s most commonly reported vaccine-preventable childhood diseases. Incidence is increasing despite high rates of immunization.

Cryptosporidium parvum (cryptosporidiosis): This parasite is frequently found in the nation’s surface and treated water supplies and the risks of low-level exposure from its presence are unknown. The disease it causes has no effective treatment.

Hepatitis C virus (hepatitis C): Identified only in 1988, hepatitis C is a leading cause of chronic liver disease and is the nation’s most common bloodborne infection. Chronic liver disease related to hepatitis C is also the most frequent indication for liver transplantation.

Penicillin-resistant S. pneumoniae: S. pneumoniae, a leading cause of death and illness, is rapidly becoming resistant to penicillin, with resistance rates as high as 30 percent of cases in some areas.

These six emerging infectious diseases or pathogens are described in more detail in appendix IV. To gather nationwide data on state public health surveillance efforts for the sample of six emerging infections, we surveyed the directors of all state public health laboratories and infectious diseases epidemiology programs that report disease-related information directly to CDC. These include programs in each of the 50 states, the District of Columbia, New York City, and 5 U.S. territories (American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, Puerto Rico, and the Virgin Islands). To develop questions used in our surveys, we reviewed documentation on surveillance and emerging infectious diseases prepared by CDC, professional organizations representing state public health laboratorians and epidemiologists, professional laboratorians, and public health experts. We also spoke with officials and representatives from each of these groups. We worked with officials from professional organizations of public and private laboratories and CDC to judgmentally select a sample of six emerging infections with nationwide significance and to identify appropriate laboratory tests used to generate data for state public health surveillance efforts. We pretested our surveys in person with both laboratory directors and epidemiologists in each of four states and asked knowledgeable people at CDC and in the laboratory and public health fields to review the instruments. We refined the questionnaire in response to their comments to help ensure that potential respondents could provide the information requested and that our questions were fair, relevant, answerable with readily available information, and relatively free of design flaws that could introduce bias or error into our study results.
We mailed 57 questionnaires to laboratory directors in April 1998 and 57 questionnaires to epidemiologists in May 1998. We sent at least one follow-up mailing and conducted telephone follow-ups to nonrespondents. We ended data collection in July 1998. At that time, we had received responses from all 57 laboratory directors and from 55 epidemiologists, for response rates of 100 percent and 97 percent, respectively. In preparing for our analysis, we reviewed and edited the completed questionnaires and checked the data for consistency. We tested the validity of the respondents’ answers and comments by comparing them with data we gathered through interviews with public health experts and other public health officials in a total of 30 states and with documentation obtained at CDC and in case study states. We combined responses from epidemiologists and laboratory directors, by state, to analyze for each of our six specific diseases the extent to which state public health laboratories supported state surveillance efforts and the views of epidemiologists and laboratory directors on the adequacy of testing equipment, staff, and the resulting surveillance information. To analyze the extent to which state public health laboratories supported state surveillance efforts, we selected only those states that met the following conditions: for each disease, (1) the state public health laboratory director indicated the laboratory performed tests that generated results used in state surveillance and (2) state epidemiologists indicated that the state conducted surveillance. Using these criteria, we analyzed responses from 54 states. We also conducted on-site work at CDC and in three states—New York, Kentucky, and Oregon. These three states were selected as a nonrandom judgmental sample representing diverse geographic areas and public health surveillance programs. 
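The two-condition screen described above—counting a state for a given disease only when its laboratory director reported testing that supports surveillance and its epidemiologist reported conducting surveillance—amounts to a per-disease intersection of the two survey files. A minimal sketch, with invented state labels standing in for the actual survey data:

```python
# Hypothetical survey extracts for one disease: whether each state's
# laboratory reported testing used in surveillance, and whether its
# epidemiology program reported conducting surveillance.

lab_testing = {"State A": True, "State B": True, "State C": False}
epi_surveillance = {"State A": True, "State B": False, "State C": True}

def states_meeting_both(lab, epi):
    """A state counts only when both survey conditions hold, mirroring
    the report's two-part inclusion criterion."""
    return sorted(s for s, tested in lab.items() if tested and epi.get(s, False))

print(states_meeting_both(lab_testing, epi_surveillance))  # ['State A']
```

Applying this screen disease by disease across the merged responses yields the per-disease state counts reported in the analysis.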
In the three states, we interviewed state and local public health officials as well as other interested groups, including representatives from hospitals, large private clinical laboratories, managed care organizations, and medical associations. At CDC, we interviewed officials responsible for infectious diseases surveillance and laboratories, information systems development, and support services for states. We interviewed officials and obtained documentation to determine how these various programs were organized and how they interacted with other public health and private parties to obtain, analyze, and share disease-related data for surveillance. In addition, we reviewed the general literature on public health surveillance and emerging infectious diseases and interviewed officials from organizations representing state public health laboratory directors, state epidemiologists, state and local public health officials, laboratory professionals, and public health experts. Our work was conducted from December 1997 through December 1998 in accordance with generally accepted government auditing standards. Given the multitude of infectious diseases and varying state surveillance programs, we consulted experts to select a sample of emerging disease threats of nationwide significance. These six conditions are described in greater detail below. E. coli are normal bacterial inhabitants of the intestines of most animals, including humans, where they suppress the growth of harmful bacteria and synthesize vitamins. For reasons not completely understood, a minority of strains cause illness in humans. Shiga-like toxin-producing E. coli are one of five recognized classes of E. coli that cause gastroenteritis in humans. The group derives its name from producing potent toxins, closely related to those produced by Shigella dysenteriae, which cause severe damage to the lining of the intestine. E. 
coli O157:H7, first identified as a human pathogen in 1982, causes severe abdominal cramping and diarrhea that can become grossly bloody. Although people usually get well without treatment, the illness can be fatal. E. coli O157:H7 is easily killed by heat used in pasteurization and cooking. However, it can live in acid environments. The amount of bacteria needed to cause illness is thought to be low. Three to 5 percent of victims develop hemolytic uremic syndrome (HUS), which is characterized by kidney failure and anemia. Some elderly victims develop thrombotic thrombocytopenic purpura (TTP), consisting of HUS plus fever and neurologic symptoms. Approximately 1 percent of HUS victims die, though many more develop long-term complications. Death rates from TTP can be as high as 50 percent. The disease is often associated with consumption of undercooked ground beef, but sources of contamination are diverse. Recent outbreaks of E. coli O157:H7 have been linked to consumption of contaminated apple juice and cider, raw vegetables such as lettuce, raw milk, and processed foods such as salami. Illness can also be caused by ingesting contaminated water at recreational sites such as swimming pools or spread from child to child in day care settings. For E. coli O157:H7, the estimated annual cost in the United States from the acute and long-term effects of illness and from lost productivity is $302 to $726 million, most of which is due to lost productivity. The number of reported cases fluctuates seasonally, peaking in June through September. Northern states report more cases than southern states. In the Pacific Northwest, E. coli O157:H7 may be second only to Salmonella as a cause of bacterial diarrhea. The true prevalence is unknown and the disease has only recently been added to the list of nationally notifiable diseases. CDC received reports of 2,741 cases from 47 states in 1996. Despite the high visibility of E.
coli O157:H7 due to recent outbreaks, clinicians often do not consider it when diagnosing patients or collect appropriate specimens. Although laboratory testing to detect E. coli O157:H7 is relatively straightforward and inexpensive, a recent study showed that at the end of 1994 only about half of the clinical laboratories in the United States were screening stool samples for it. Tuberculosis, caused by Mycobacterium tuberculosis, was the leading cause of death from infectious diseases in the United States at the turn of the century; it remained the second leading cause of death until the development of antibiotics in the 1950s. Worldwide, about one-third of all people are infected. Tuberculosis kills over 2.9 million people a year—making it a leading cause of death. Tuberculosis of the lungs destroys lung tissue and, if left untreated, half of victims die within 2 years. The risk of contracting the disease is highest in the first year after infection and then drops sharply, although reactivation can occur years later. Only about 10 percent of healthy people infected with the pathogen develop clinical disease. Tuberculosis is difficult to treat, requiring a 6-month regimen of multiple antibiotics to effect a cure and prevent the emergence of antibiotic-resistant strains. When health care is adequate and compliance with treatment is maintained, cure rates should exceed 90 percent, even in those whose immune systems have been compromised by HIV/AIDS. The emergence of strains resistant to one or more antibiotics puts not only tuberculosis patients at risk, but also health care workers, social workers, and any other people in frequent contact with them. For cases of multidrug-resistant tuberculosis, fatality rates can exceed 80 percent for immunocompromised and 50 percent for previously healthy individuals. Multidrug-resistant cases are extraordinarily difficult to treat, and most patients do not respond to therapy.
Tuberculosis is spread primarily by the respiratory route from patients with active disease. Shouting, sneezing, and coughing can easily spread the pathogens in the environment. The risk of transmission varies with the length of exposure, degree of crowding and ventilation, virulence of the strain, and health of the person exposed. From the 1950s through the early 1980s, the incidence of tuberculosis declined in the United States, then began to increase in 1988, reaching a peak in 1992. The HIV/AIDS epidemic, immigration from countries with high rates of tuberculosis, and outbreaks in facilities such as correctional institutions and nursing homes have contributed to the resurgence. Treatment costs for an individual with multidrug-resistant tuberculosis can be as much as $150,000, 10 times the cost of treating a nonresistant case. In 1996, 54 states reported 21,337 cases to CDC. Pertussis, caused by the bacterium Bordetella pertussis, is characterized by uncontrollable spells of coughing in which one cough follows another too quickly to allow a breath in between. An intake of breath that produces a high-pitched “whooping” sound follows each coughing spell, hence the name whooping cough. The illness lasts about 2 weeks and responds to antibiotic therapy. In the early to mid-1900s, pertussis was a common childhood disease and a leading cause of death among children in the United States. Today, pertussis is one of the nation’s most commonly reported childhood vaccine-preventable diseases. Complications associated with pertussis may be severe, especially among infants. Secondary bacterial pneumonia causes most pertussis-related deaths. Other complications include seizures, encephalopathy, and ear infections. About 1 percent of affected infants died in 1993. The risk of complications is highest among infants and under-vaccinated preschool-aged children. In 1994, a strain resistant to the antibiotic preferred for treatment appeared in the United States.
Immunity to pertussis can decrease with age. Consequently, young adults and adolescents who contract the disease can be an important source of transmission to unimmunized infants. Pertussis among adults and adolescents is often not diagnosed by physicians—despite the presence of a persistent cough—because they do not expect to see the disease in this age group. Pertussis is endemic in the United States, and its incidence is cyclical, with peaks every 3 to 4 years. Incidence has decreased from 150 cases per 100,000 population prior to 1940 to about 1.2 cases per 100,000 by 1991. In 1996, 7,796 cases were reported to CDC, an estimated 10 percent of the true number. Although the total number of reported cases remains well below the annual number reported during the pre-vaccine era, the total number of cases has increased steadily in each peak year since 1977. The reasons for the increase in reported cases are unclear but appear unrelated to decreased vaccination rates or reduced vaccine efficacy. Because few pertussis specimens are tested for resistance, the prevalence of antibiotic-resistant strains is unknown. Worldwide, infections with Streptococcus pneumoniae (S. pneumoniae) are among the leading causes of illness and death for young children, individuals with underlying medical conditions, and elderly people. S. pneumoniae is the most common cause of bacterial pneumonia and is implicated in infections of the ears, sinuses, lungs, abdominal cavity, bloodstream, and tissues that envelop the brain and spinal column. A vaccine covering the 23 most common strains has been available since the 1980s, but it is largely underutilized. In the past, S. pneumoniae uniformly responded to treatment with penicillin, allowing physicians to treat even severely ill patients without testing for antibiotic resistance. During the 1990s, however, resistance to penicillin spread rapidly in the United States, and strains resistant to multiple antibiotics account for a small, but growing, proportion of cases.
Case fatality rates—which vary by age, type of infection, and underlying medical condition—can be as high as 40 percent among some high-risk patients, despite appropriate antibiotic therapy. Transmission occurs through contact with infected saliva. In the United States, S. pneumoniae causes up to 3,000 cases of meningitis, 135,000 cases of hospitalized pneumonia, and as many as 7 million ear infections each year. Resistance to penicillin varies widely by region and age group but accounts for 30 percent of cases in some communities. The prevalence of resistance for most areas of the United States is unknown, possibly because the condition was not nationally reportable until 1996. Limited knowledge of local patterns of resistance and the lack of a rapid diagnostic test often result in therapy that uses either unnecessary or overly broad antibiotics, thereby contributing to the development of resistant strains. Cryptosporidiosis, caused by the parasite Cryptosporidium parvum, can affect human intestinal and, rarely, respiratory tracts. The disease has long been known to veterinarians but was first recognized as a human pathogen in 1976. The intestinal disease is generally characterized by severe watery diarrhea and can include abdominal cramps, nausea, vomiting, and low-grade fever. Most healthy individuals recover after 7 to 10 days. Infection of the respiratory tract is associated with coughing and a low-grade fever, often accompanied by severe intestinal distress. Unlike many bacterial infections, the infective dose of cryptosporidiosis is thought to be small, perhaps as few as 10 organisms, each about half the size of a red blood cell. An infected person or animal can shed millions of organisms per milliliter of feces. Once in the environment, the organisms can remain infective for many months. No safe and effective treatment for cryptosporidiosis has been identified. Among persons with weakened immune systems, the disease can lead to dehydration and death. 
The infectious stage of the parasite is passed in the feces of infected humans and animals. Infection can be transmitted from person to person, from animal to person, through ingesting contaminated food or water, or through contact with fecally contaminated environmental surfaces. The parasite is common among herd animals and is present in virtually all the surface—and much of the treated—waters of the United States. The parasite, small enough to slip through most water filters, is resistant to chlorine treatment. The public health risk of contracting the disease from tap water is unknown. Tests on body fluids indicate that as many as 80 percent of the U.S. population has had cryptosporidiosis. Throughout the world, the organism has been found wherever it has been sought. In 1996, 42 states reported 2,426 cases to CDC. The virus that causes hepatitis C was discovered in 1988 and is the major cause of chronic liver disease worldwide. Since 1990, molecular-based laboratory tests have allowed detection of specific antibodies in the blood of infected people. Prior to 1990, diagnosis of hepatitis C was made by excluding both hepatitis A and hepatitis B. The incubation period for acute hepatitis C averages 6 to 7 weeks. Typically, adults and children with acute hepatitis C are either asymptomatic or have a mild clinical illness. More severe symptoms of hepatitis C are similar to those of other types of viral hepatitis and include anorexia, nausea, vomiting, and jaundice. Most patients do not achieve a sustained response to treatment. At least 85 percent of persons infected with hepatitis C develop persistent infection. Chronic disease develops in 60 to 70 percent of infected individuals, and up to 20 percent may develop cirrhosis over a 30-year period. Hepatitis C is a leading cause of chronic liver disease in the United States and a major reason for liver transplants. An estimated 8,000 to 10,000 people die annually from hepatitis C and its related chronic disease.
Hepatitis C is most efficiently transmitted through large or repeated exposures through the skin to infected blood. Intravenous drug use is the most common risk factor for acquiring hepatitis C. Currently, transfusion-associated hepatitis rarely occurs because of donor screening policies instituted at blood banks and routine testing of blood donors for evidence of infection. In the United States, the annual number of newly acquired acute hepatitis C infections has ranged from an estimated 180,000 cases in 1984 to an estimated 28,000 in 1995. The prevalence of hepatitis C in the general population is about 1.8 percent, which corresponds to approximately 3.9 million people with chronic infection. Hepatitis C and related chronic diseases cost about $600 million annually (in 1991 dollars). In addition to those named above, the following individuals made important contributions to this report: Linda Bade, Senior Health Policy Analyst; Nila Garces-Osorio, Health Policy Analyst; Julian Klazkin, Attorney; Susan Lawes, Senior Social Science Analyst; and Stan Stenersen, Reports Analyst. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by faxing (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone.
A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO reviewed the nation's infectious diseases surveillance network, focusing on the: (1) extent to which states conduct public health surveillance and laboratory testing of selected emerging infectious diseases; (2) problems state public health officials face in gathering and using laboratory-related data in the surveillance of emerging infectious diseases; and (3) assistance that the Department of Health and Human Services' Centers for Disease Control and Prevention (CDC) provides to states for laboratory-related surveillance and the value of this assistance to state officials. GAO noted that: (1) surveillance and testing for important emerging infectious diseases are not comprehensive in all states, leaving gaps in the nation's infectious diseases surveillance network; (2) GAO's survey found that most states conduct surveillance of five of the six emerging infectious diseases GAO asked about, and state public health laboratories conduct tests to support state surveillance of four of the six; (3) over half of the state laboratories do not conduct tests for surveillance of hepatitis C and penicillin-resistant S.
pneumoniae; (4) many state epidemiologists believe that their infectious diseases surveillance programs should expand, and they cited a need to gather more information on antibiotic-resistant diseases; (5) just over half of the state public health laboratories have access to advanced molecular technology, which could be valuable to all states' diseases surveillance efforts; (6) few states require the routine submission of specimens or isolated quantities of a pathogen from patients with certain diseases for testing in state laboratories--a step CDC has urged them to adopt to improve the quality of surveillance information; (7) many state laboratory directors and epidemiologists reported that inadequate staffing and information-sharing problems hinder their ability to generate and use laboratory data to conduct infectious diseases surveillance; (8) participants in the surveillance network often lack basic computer hardware or integrated systems to allow them to rapidly share information; (9) many state officials told GAO that they did not have sufficient staffing and technology resources, and public health officials have not agreed on a consensus definition of the minimum capabilities that state and local health departments need to conduct infectious diseases surveillance; (10) this lack of consensus makes it difficult to assess resource needs; (11) most state laboratory directors and epidemiologists placed high value on CDC's testing and consulting services, training, and grant funding and said these services were critical to their ability to use laboratory data to detect and monitor emerging infections; (12) state officials said CDC needs to better integrate its data systems and help states build systems that link them to local and private surveillance partners; and (13) state officials would like CDC to provide more hands-on training experience.
Federal laws authorize both state and federal entities to protect the Medicaid program from fraud, waste, and abuse. Specifically, various provisions of federal law give CMS the authority to oversee Medicaid program integrity and to set requirements with which state Medicaid programs must comply. As a result, program integrity efforts consist of state and federal activities to detect and deter improper payments—including fraud, waste, and abuse—that range from provider enrollment to post-payment claims review and investigation.

Provider enrollment: States screen providers who seek to participate in Medicaid to verify their eligibility. As part of the enrollment process, states must collect certain information from providers, including MCOs, about their ownership interests and criminal background, search exclusion and debarment lists, and take action to exclude those providers who appear on those lists. In some states, MCOs are primarily responsible for enrolling participating providers.

Pre-payment review: States conduct pre-payment review of claims to ensure appropriateness. Typically, states use payment edits programmed into their claims processing systems to compare claims data to approve or deny claims, or flag them for further review. They may also analyze claims data against models of fraudulent behavior to identify potentially fraudulent providers for further investigation.

Post-payment claims review: States and Medicaid contractors analyze paid claims, related provider records, and supporting documentation to ensure appropriate utilization and to identify potential improper payments. These routine reviews may rely on the use of algorithms and data mining to identify potentially improper payments, which are subjected to additional review, including audits.

Auditing: Payments to providers are audited to determine compliance with Medicaid billing rules.
Investigation: When enrollment, pre-payment review, post-payment review, or audits uncover potentially fraudulent claims, states must refer those claims or providers to law enforcement entities for investigation and possible prosecution.

Recovery: Once a state has identified and documented improper payments through audit activity, the state generally has one year from the date of a final audit report to recover the overpayment from the provider before it must report the return of the federal share to CMS; under PPACA, the federal share can reach up to 100 percent for certain newly enrolled populations. Federal law requires the state to return the federal share of the overpayment regardless of whether the state was able to recover it, unless the provider has been determined to be bankrupt or out of business.

A variety of entities are engaged in Medicaid program integrity activities. States have primary responsibility for reducing, identifying, and recovering improper payments. Federal entities typically provide oversight, as well as program and law enforcement support. Figure 1 illustrates the various entities, both federal and non-federal, that are involved in Medicaid program integrity. (See, e.g., 42 U.S.C. §§ 1396a(a)(69), 1396u-6.) Suspect claims may be referred to state PI units for corrective action and potential fraud cases to the state's MFCU. In addition, states are now required to contract with recovery audit contractors (RAC) to identify under- and over-payments as part of their program integrity activities. CMS oversees state Medicaid programs by providing states with guidance related to statutory and regulatory requirements, as well as technical assistance on specific program integrity activities such as data mining. The DRA increased the federal government's role by establishing the Medicaid Integrity Program to support and oversee state program integrity efforts. To carry out these responsibilities, CMS established the
Medicaid Integrity Group, which conducts comprehensive reviews of state Medicaid program integrity activities to assess these activities and the state's compliance with federal program integrity laws. In addition, the Medicaid Integrity Group works with MICs, who review and audit Medicaid claims. The Medicaid Integrity Group also provides training to state program integrity staff through its Medicaid Integrity Institute. CMS also collects information from states on their recoveries of overpayments; however, we recently reported that most states were not fully reporting recoveries and recommended that CMS increase efforts to hold states accountable for reliably reporting program integrity recoveries to ensure that states are returning the federal share of recovered overpayments. HHS-OIG oversees Medicaid program integrity through its audits, investigations, and program evaluations. It is also responsible for enforcing certain civil and administrative health care fraud laws. In addition, HHS-OIG oversees the MFCUs, assessing their compliance with statutes, regulations, and HHS-OIG policy. HHS-OIG is also responsible for assessing MFCU performance and recommends program improvements where appropriate. States have traditionally provided Medicaid benefits using a fee-for-service system, in which health care providers are paid for each service. However, according to CMS, in the past 15 years, states have more frequently implemented a managed care delivery system for Medicaid benefits. In a managed care delivery system, beneficiaries obtain some portion of their Medicaid services from an organization under contract with the state, and payments to MCOs are typically made on a predetermined, per person per month basis. In contrast, under the traditional fee-for-service delivery system, health care providers are paid for each unit of service delivered.
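The pre-payment "payment edit" checks described in the background above can be illustrated with a short sketch. This example is purely hypothetical: the function name, rule set, thresholds, and sample data are illustrative assumptions, not taken from any actual state claims processing system.

```python
# Purely illustrative sketch of a pre-payment "edit" check: each claim is
# compared against simple rules and approved, denied, or flagged for review.
# The rules and thresholds below are hypothetical, not actual Medicaid edits.

def apply_payment_edits(claim, enrolled_providers, fee_schedule):
    """Return 'deny', 'flag', or 'approve' for a single claim."""
    # Edit 1: the billing provider must be enrolled (and not excluded).
    if claim["provider_id"] not in enrolled_providers:
        return "deny"
    # Edit 2: the billed amount may not exceed the fee schedule for the service.
    max_allowed = fee_schedule.get(claim["procedure_code"])
    if max_allowed is None or claim["billed_amount"] > max_allowed:
        return "deny"
    # Edit 3: unusually high service counts are flagged for manual review
    # rather than denied outright.
    if claim["units"] > 10:
        return "flag"
    return "approve"

# Example usage with hypothetical data.
providers = {"P100", "P200"}
fees = {"99213": 75.00, "99214": 110.00}
result = apply_payment_edits(
    {"provider_id": "P100", "procedure_code": "99213",
     "billed_amount": 60.00, "units": 1},
    providers, fees)  # "approve": enrolled, within fee schedule, normal units
```

Real systems apply hundreds of such edits and, as the report notes, may also score claims against models of fraudulent behavior before payment.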
Two-thirds of Medicaid beneficiaries now receive some of their services from MCOs, and many states are expanding their use of managed care to additional geographic areas and Medicaid populations. Nationally, approximately 27 percent, or $74.7 billion, of federal Medicaid expenditures in fiscal year 2011 were attributable to Medicaid managed care, which, according to HHS, included the 57 percent of Medicaid beneficiaries who were enrolled in Medicaid MCOs as of July 1, 2011. States oversee MCOs that provide care to Medicaid beneficiaries through contracts and reporting requirements, which may include identifying improper payments to providers within their plans. In addition, CMS has developed requirements for states and MCOs to protect against fraud and abuse in Medicaid managed care. Among other things, CMS requires MCOs to implement compliance plans, train MCO employees, and monitor payments. Most state and federal program integrity officials we interviewed told us that they did not closely examine Medicaid managed care payments, but instead primarily focused their program integrity efforts on FFS claims. Moreover, federal entities have taken few steps to address Medicaid managed care program integrity. State PI unit officials from five of the seven states in our study and MFCU officials from four of the study states told us they primarily focus their program integrity efforts on Medicaid FFS claims. These officials said they have not begun to closely examine program integrity in Medicaid managed care, which is a growing portion of overall Medicaid expenditures. State PI units and MFCUs are responsible for ensuring Medicaid program integrity, part of which includes monitoring managed care program integrity.
Each of the seven states included in our review had more than 60 percent of beneficiaries enrolled in managed care as of July 1, 2011, and expenditures attributable to managed care in the seven states varied, ranging from 18 to 38 percent of their total Medicaid spending in fiscal year 2011. PI unit officials from the seven states described differing levels of complexity in conducting Medicaid managed care program integrity activities, as shown in the examples that follow. At the most sophisticated level, PI unit officials from two of the seven states we spoke with told us they examined payments to MCO plans and providers to identify improper payments, conducted meetings with MCOs to discuss provider audits and investigations, and used data analytics to identify aberrant patterns among MCO providers. They also conducted independent audits of payments to MCO plans. PI unit officials in the remaining five states told us that they were still in the early stages of shifting the focus of program integrity efforts to Medicaid managed care, and thus reported more limited actions. PI unit officials from three of these five states told us they examined Medicaid managed care providers for improper payments and fraud by reviewing MCO reporting on improper payments. PI unit officials from the remaining two states told us they did not examine MCO encounter data, and one of these states told us they do not perform audits of MCO providers or actively search for fraudulent activities. MCOs have responsibility for identifying improper payments to providers within their plans; however, state officials suggested that MCOs might not have an incentive to identify and recover improper payments. Officials from two of the seven state PI units we spoke with told us that they believed MCOs were not consistently reporting improper payments to the state to avoid appearing vulnerable to fraud and abuse. 
Further, officials from three PI units described a potential conflict of interest: when MCOs report improper payment recoveries, future capitation rates could be reduced because of the improper payments identified. For example, officials from four PI units said their states account for improper payment recoveries when setting capitation rates and explained that reported recoveries negatively affect the MCO plans' rates for the following year. State officials we spoke with told us that one reason they have not focused on managed care program integrity is that MCO plan and provider audits and investigations are more complex than those in the FFS model. Similarly, almost all of the state MFCU officials we spoke with told us that extra effort was required to obtain detailed managed care claims data. While most states have access to managed care encounter data, states must rely on MCO plans to provide actual dollar amounts of claims, which are needed to audit and investigate providers and determine the amounts of overpayments. Obtaining the data from each MCO could require significant time and effort, which may hamper audits and investigations, particularly in states with several MCO plans. For example, according to CMS, as of July 1, 2011, four of the seven states included in our review had 20 or more MCOs operating in their state. State officials also told us that in order to be effective, PI unit and MFCU staff needed specialized training and federal support in the form of updated regulations, guidance, and technical assistance. For example, state program integrity staff from two states attending the 2013 National Association of Medicaid Program Integrity conference suggested that one way that CMS could enhance its assistance to states would be to redirect its Medicaid integrity contractors to focus their audit activities on managed care payments.
Such an arrangement could help identify possible patterns or vulnerabilities across states and assist states as they work to acquire the necessary expertise in managed care program integrity. Without closer examination of Medicaid managed care payments, state PI units and MFCUs have limited ability to identify improper payments made to MCOs. They are also unable to ensure that MCOs are taking appropriate actions to identify, prevent, or discourage improper payments to providers. PPACA is expected to significantly expand the Medicaid program, with many of the new beneficiaries being enrolled in managed care and covered almost entirely by federal funds in 2014 through 2016. Considering managed care’s growing share of federal Medicaid expenditures, Medicaid managed care is an area where program integrity activities are of growing importance to ensure the protection of federal dollars. Similar to states, federal entities—CMS and HHS-OIG—have taken few steps to address Medicaid managed care program integrity. For example, CMS officials told us that states have primary responsibility for direct oversight of MCOs’ compliance with program integrity requirements. CMS provides on-going support and guidance to states regarding their managed care programs, including review and approval of states’ managed care waivers and contracts, as well as assessment and guidance regarding program integrity in Medicaid managed care. For example, CMS’s comprehensive reviews examine state compliance with federal regulations governing managed care contracts, such as ensuring that MCOs disclose certain ownership and control information. However, according to CMS officials, the comprehensive reviews do not require or check to ensure that states are conducting more in-depth managed care program integrity activities, such as audits of managed care claims. In 2000, CMS issued Medicaid managed care program integrity guidance to states; as of Nov. 
18, 2013, however, this guidance was not available on the CMS website, and six of the seven state PI unit officials we spoke with did not mention it when asked about the guidance they relied on in conducting program integrity activities. CMS officials said they were updating this guidance, but did not have a timeline for its completion. According to CMS officials, states have requested additional support in ensuring managed care program integrity, and CMS has taken some actions to provide additional assistance. CMS officials said that states, including those with significant managed care experience, are still trying to understand their roles and responsibilities in overseeing managed care program integrity. In 2013, CMS offered two training sessions on Medicaid managed care program integrity through its Medicaid Integrity Institute. CMS officials told us that the sessions were well attended by PI unit staff and other Medicaid program staff. We reviewed the presentation materials, which provided extensive information regarding Medicaid managed care program integrity practices and strategies. Presentation materials are accessible to PI staff who did not attend the training if they register as users through the Medicaid Integrity Institute website. States and CMS ensure Medicaid program integrity by preventing, detecting, and recovering improper payments. Specifically, various provisions of the SSA authorize CMS to develop requirements for the proper and efficient operation of the Medicaid program, including reporting requirements and methods for prepayment and post-payment review. In addition, the SSA requires CMS to audit Medicaid claims, including cost reports and payments to MCOs, and requires MCO contracts to contain provisions giving HHS and states audit and access authority over MCOs and their subcontractors. This contractual requirement also appears in CMS's regulations, although CMS does not require states to conduct such audits.
Moreover, PPACA required state Medicaid programs to establish contracts consistent with state law and similar to the contracts established for the Medicare RAC program, and required CMS to coordinate this expansion with the states and to promulgate implementing regulations. CMS subsequently issued guidance to the states and a regulation implementing the Medicaid RAC program; however, this regulation allowed states to exclude Medicaid managed care claims from review by Medicaid RACs. In comments accompanying the final rule, issued in 2011, CMS indicated that it might require Medicaid RACs to review managed care claims during future rule-making, once a permanent Medicare managed care RAC program was fully operational or a viable state Medicaid model had been identified. During a February 2014 interview with CMS officials, the officials reiterated that they were open to revisiting the issue of whether the Medicaid RAC program should cover MCO claims, but the officials did not provide any specific details regarding how or when this might be accomplished. The need for CMS leadership on program integrity efforts in managed care is particularly important, given that some states' expansions of their Medicaid programs under PPACA may be accomplished through managed care arrangements. Until CMS takes steps to ensure the integrity of Medicaid managed care, state and federal Medicaid dollars remain vulnerable to fraud, waste, and abuse. The HHS-OIG has noted the emergence of MCO fraud among recent Medicaid fraud trends, citing the increase in the agency's workload on MCO fraud cases. During the 2013 National Association of Medicaid Program Integrity conference, the agency presented on the challenges associated with identifying different types of plan-based fraud schemes, some of which result in inflated payments to MCOs.
While CMS does audit states’ payments to MCOs to verify that the state is paying the capitated rates specified in the MCO’s contract, CMS does not require states to audit the appropriateness of these payments, to ensure, for example, that these payments do not include improper payments by plans. In addition to plan-level fraud, the agency has noted the need for more emphasis on analyzing potential provider fraud in Medicaid managed care. On June 1, 2012, HHS-OIG issued updated performance standards directing MFCUs to take steps to ensure that state Medicaid agencies, MCOs, and other agencies refer suspected provider fraud cases to the MFCU. Additionally, the updated performance standards direct MFCUs to ensure their caseload mix reflects the proportion of Medicaid beneficiaries enrolled in managed care. As of March 10, 2014, HHS-OIG had published three MFCU evaluations that used the new performance standards. In one MFCU evaluation, HHS-OIG found that 55 percent of the state’s Medicaid beneficiaries were enrolled in MCOs, but the state’s MFCU only opened one case involving an MCO during the two year review period. The other two state MFCU evaluations did not mention managed care, although the states’ MFCUs did not open or close any managed care provider cases during the review period. According to CMS officials, as of July 1, 2011, the two states had almost 78 percent and nearly 60 percent of their Medicaid beneficiaries enrolled in MCOs. The involvement of multiple state and federal entities in similar activities— post-payment reviews, audits, and investigations—has resulted in fragmented program integrity activities. Typically, as we have found in past work, coordinating activities can alleviate many of the problems created by fragmentation, allowing entities to avoid unnecessary duplication and overlap. 
State program integrity officials we interviewed told us that coordination efforts helped them avoid unnecessary duplication, but presented additional challenges. Post-payment review activities are primarily led by states' PI units, which can include their SURS and RACs. Other state entities, such as state auditors' offices and other divisions in the state Medicaid agency, may also participate in post-payment review activities. PI units coordinate these activities by (1) delegating specific data-mining targets to specific entities to avoid overlap, or (2) coordinating data-mining activities to ensure that the different entities are not duplicating each other's efforts to identify improper payments. For example, in four of the five states that had signed contracts with RACs, PI unit officials told us they require that before starting a data-mining project, the RAC must submit its plan to the PI unit for approval. Then, the PI unit checks the plan against other data-mining activities to ensure that the RAC will not be duplicating the activities of other entities. Additionally, officials from one of these PI units told us they participate in a monthly meeting with the SURS, RAC, ZPIC, state auditor's office, and MFCU to discuss current data-mining projects and to decide who is best able to handle specific cases. Although multiple state entities conduct post-payment review activities, their activities are not necessarily duplicative. State PI unit officials told us that the purposes of these reviews vary. For example, some PI unit officials told us other divisions within the state Medicaid agencies will use data mining to examine quality of care or clinical oversight issues. SURS post-payment reviews can include ensuring compliance with Medicaid payment policies. The involvement of multiple federal and state entities in audits leads to fragmentation. PI units, RACs, MICs, HHS-OIG—and in some states, the state auditor's office—perform audits.
PI unit officials told us they take the lead in coordinating these audit activities to minimize overlap and duplication, as well as expand the types of providers and health care areas subject to review. For example, five of the states we selected for this study had signed contracts with RACs, and PI unit officials from these states told us they direct the RACs to focus audits on specific areas to avoid duplicating other efforts. For example, PI unit officials in one state noted that the RAC asked and was granted permission to examine home health claims, which was an area where the PI unit had been unable to focus. In some cases, PI units also coordinated with MICs on collaborative audits. For these audits, PI units typically identified audit targets using state claims data, and the MIC performed the audit. Officials from one state told us that collaborative audits with the MIC in their state had reviewed over $200 million in claims, from which the state expects some recoveries. With regard to fraud, fragmentation exists, in part, because multiple law enforcement entities may be responsible for the investigation of fraudulent claims. For example, fraud schemes may cross state lines thereby necessitating the involvement of multiple law enforcement entities. MFCUs have the primary responsibility for fraud investigations and coordinate with other state and federal law enforcement entities, including the HHS-OIG, U.S. Attorney’s Offices, the Federal Bureau of Investigation, and state attorneys general. MFCUs coordinate with these entities to prevent duplicative investigations and to share knowledge on potential fraud schemes. MFCU officials from all of the states included in our review said they have regular meetings with federal entities to discuss cases, and officials from three MFCUs said they work cases with federal entities. 
HHS-OIG officials also said they work jointly with MFCUs and other entities to prevent a scenario where both entities would be conducting separate but duplicative investigations. To prevent duplication between fraud investigations and improper payment audits, all PI unit and MFCU officials we spoke with said they meet regularly to discuss current investigations and audits to ensure they are not pursuing the same target. All the MFCUs and PI units from the states that we reviewed had a memorandum of understanding, which describes the entities’ relationship, consistent with federal regulation. Additionally, all of these entities said they seldom or never worked on cases involving the same provider at the same time. MFCU officials said that coordination helps with investigations because a PI unit pursuing a separate payment recovery action against a suspect provider could interfere with a criminal investigation. Coordination among PI units and MFCUs also allows the entities to share limited resources as the need arises. For example, MFCUs typically do not have clinical staff on hand, but can rely on the clinical expertise of PI units’ and state Medicaid agencies’ staff. MFCU officials from four states told us that coordination among entities helped improve their cases by better leveraging resources. Officials from one MFCU said their coordination meetings allow the entities to decide who is best suited to handle specific cases. Officials from two MFCUs said that working with other entities allows the MFCU to give assistance to or receive assistance from these other entities on investigations and executing warrants. Officials from another MFCU told us that other entities can take on cases that the MFCU would not have the authority to prosecute on their own. However, our previous work has shown that measuring the results of health care fraud investigations is difficult due to several factors. 
These factors include the difficulties of establishing a health care fraud baseline to determine whether the amount of fraud has changed over time, quantifying the effect of investigation and prosecution on deterring fraud, and establishing a causal link between the work and changes in health care fraud. Overall, PI unit officials generally described coordination efforts among program integrity offices positively. Officials from six of the seven PI units generally described their coordination efforts as requiring minimal resources and in some cases suggested that coordination improved program integrity functions, allowing the state to recover additional overpayments. For example, coordination among PI units and other entities—such as RACs or MFCUs—can allow entities to share limited resources and expertise. Officials from five MFCUs said their states’ PI units will provide education to MFCU staff on various aspects of, and changes to, the Medicaid program. Officials from one PI unit said that MIC staff provided clinical expertise that was not available within the PI unit. However, some of the officials with whom we spoke said that coordination efforts have sometimes proven to be problematic. For example, officials from three PI units described challenges with some types of collaborations. Officials from one PI unit told us they have to expend resources to address inappropriate audit findings from other entities. For example, in some cases, the MIC had pursued audit findings that the PI unit was not able to successfully support in court. These officials also told us that due to the number of entities involved, the audit coordination process was somewhat convoluted and caused delays in the audit process. Officials from a second PI unit said collaborative audits, which were initially put into place by the Medicaid Integrity Group to enhance collaboration between states and MICs, were not useful. 
The officials noted that years of working with the MIC on one project have generated less than $1,000 in findings in their state. Officials from a third PI unit told us that they spent time directing RAC activities in order to steer the RAC away from unproductive audits. The PI unit officials said the RACs sometimes pursue audits that result in findings that are difficult to prove and can harm relations with providers. Additionally, officials from three PI units—including two of the above—said they would prefer to handle the work of RACs or MICs on their own or otherwise consolidate this work, if resources were available. PI units and MFCU officials told us that their organizations coordinate their activities with multiple entities to avoid unnecessary duplication; however, the results of these coordinated efforts have been mixed. Despite the combined efforts of various program integrity entities, our previous work has found that some states appear to be recovering only a small portion of estimated improper payments. GAO identified a gap between state and federal efforts to ensure Medicaid managed care program integrity. Federal laws require the states and CMS to ensure the integrity of the Medicaid program, including payments under Medicaid managed care. However, most of the state PI units and MFCUs included in our review were not closely examining the activities of MCOs, citing a lack of sufficient guidance and support. For example, CMS does not require states to audit the appropriateness of payments to MCOs to ensure payments have not been improperly inflated, nor does CMS require states to include review of payments to MCO providers as part of their Medicaid RAC programs. However, CMS has largely delegated managed care program integrity activities to the states. 
Without adequate federal support and guidance on ways to prevent or identify improper payments in a managed care setting, states are neither well-positioned to identify improper payments made to MCOs, nor are they able to ensure that MCOs are taking appropriate actions to identify, prevent, or discourage improper payments. Such efforts take on greater urgency as states that choose to expand their Medicaid programs under PPACA are likely to do so with managed care arrangements, receiving a 100 percent federal match for newly eligible individuals from 2014 through 2016. Unless CMS takes a larger role in holding states accountable, and provides guidance and support to states to ensure adequate program integrity efforts in Medicaid managed care, the gap between state and federal efforts to monitor managed care program integrity leaves a growing portion of federal Medicaid dollars vulnerable to improper payments. Program integrity activities are fragmented across multiple state and federal entities. If not carefully coordinated, these fragmented activities could result in additional overlap and unnecessary duplication. State PI units and MFCUs we reviewed coordinate with one another and federal entities to avoid duplication, but their coordination efforts present both benefits and challenges. As implemented across the states, newer program integrity efforts—such as RACs and MICs—may improve states’ efforts to identify and recover improper payments; however, they will also increase the need for coordination to ensure maximum program coverage and minimum duplication and overlap of program integrity activities. Given that combined federal and state efforts have recovered only a small portion of the estimated improper payments, it will be important to continue to monitor federal and state program integrity efforts in Medicaid as a means of assessing whether the current structure is effective. 
In order to improve the efficiency and effectiveness of Medicaid program integrity efforts, we recommend that the Administrator of CMS take the following three actions:

1. hold states accountable for Medicaid managed care program integrity by requiring states to conduct audits of payments to and by managed care organizations;
2. update CMS’s Medicaid managed care guidance on program integrity practices and effective handling of MCO recoveries; and
3. provide the states with additional support in overseeing Medicaid managed care program integrity, such as the option to obtain audit assistance from existing Medicaid integrity contractors.

We provided a draft of this report to HHS for comment. In its written comments, HHS stated that the Department concurred with two of our recommendations, and stated that our first recommendation—to hold states accountable for Medicaid managed care program integrity by requiring states to conduct audits of payments to and by managed care organizations—was unclear. In response to this recommendation, HHS listed current CMS activities that the Department believes address the first recommendation. These activities include the audits under the PERM program and the adoption of regulations requiring MCOs to have fraud and abuse compliance plans and to provide in their contracts for HHS and state audit and access authority. While the activities described by HHS support states in ensuring other aspects of Medicaid program integrity, they do not require states to conduct audits to ensure the appropriateness of payments to MCOs or payments by MCOs and therefore do not achieve the goal of our recommendation. Taking this additional step, particularly in combination with additional guidance and audit assistance, would help ensure that payments to MCO plans are appropriate and that providers within MCOs are also consistently reviewed, thus helping ensure the integrity of the Medicaid program. 
HHS agreed with our recommendation to update CMS’s Medicaid managed care guidance on program integrity practices and effective handling of MCO recoveries. HHS stated that CMS is consulting with federal and state partners regarding strategies to improve Medicaid program integrity, and plans to address changes in future rulemaking or other guidance. Additionally, HHS agreed that states could benefit from additional Medicaid managed care guidance, particularly regarding improper payment recoveries, and that CMS would consider issuing guidance to states regarding the handling of overpayment recoveries. HHS also agreed with our recommendation that CMS provide states with additional support in overseeing Medicaid managed care program integrity, such as the option to obtain audit assistance from existing Medicaid integrity contractors. HHS stated that CMS currently offers assistance to states, including guidance in the use of tools for managed care program integrity. HHS also said that in 2014 CMS will conduct special in-depth reviews focused on managed care program integrity activities in selected states that are expanding the use of managed care. Additionally, HHS stated that CMS is working with two states with considerable managed care experience to develop a model for managed care audits for all states. HHS’s comments are reproduced in appendix I. HHS also provided technical comments, which we incorporated as appropriate. We also provided an extract of this report to the state PI units and MFCUs that we selected for interviews. We incorporated their technical comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional committees, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix II. In addition to the contact named above, key contributors to this report were: Tom Conahan, Assistant Director; Matthew Gever, Drew Long, Jasleen Modi, Dawn Nelson, and Jennifer Whitworth. | In fiscal year 2013, the Medicaid program covered about 71.7 million individuals at a cost of $431.1 billion, of which CMS estimated that $14.4 billion (5.8 percent) were improper payments. Multiple state and federal entities are involved in program integrity efforts, such as payment review, auditing, and investigating fraud. GAO was asked to examine how these entities ensure comprehensive Medicaid program integrity. This report examines state and federal roles and responsibilities to identify potential (1) gaps in efforts to ensure Medicaid program integrity coverage; and (2) fragmentation, overlap, or duplication of program integrity efforts, and efforts to coordinate activities. GAO examined relevant federal laws and regulations, CMS guidance, and state program integrity reviews. GAO also interviewed officials from CMS and HHS's Office of Inspector General, as well as PI unit and MFCU officials from seven states. GAO identified a gap in state and federal efforts to ensure Medicaid managed care program integrity. Federal laws require the states and the Centers for Medicare & Medicaid Services (CMS) to ensure the integrity of the Medicaid program, including payments under Medicaid managed care, which are growing at a faster rate than payments under fee-for-service (FFS). 
However, five state program integrity (PI) units and four Medicaid Fraud Control Units (MFCU) from the seven states included in GAO's review said they primarily focus their efforts on Medicaid FFS claims and have not begun to closely examine program integrity in Medicaid managed care. In addition, federal entities have taken few steps to address Medicaid managed care program integrity. CMS, the federal agency within the Department of Health and Human Services (HHS) that oversees Medicaid, has largely delegated managed care program integrity oversight activities to the states, but has not updated its program integrity guidance since 2000. Additionally, CMS does not require states to audit managed care payments, and state officials GAO interviewed said they require additional CMS support, such as additional guidance and the option to obtain audit assistance from existing Medicaid integrity contractors, in overseeing Medicaid managed care program integrity. The involvement of multiple entities in conducting post-payment reviews, audits, and investigations has resulted in fragmented program integrity efforts; yet the effects of fragmentation are unclear. As GAO has found in past work, coordinating activities can alleviate many problems created by fragmentation, thus allowing entities to avoid unnecessary duplication and overlap. Most of the program integrity officials from the seven states GAO included in this review said that coordination efforts helped them manage overlap and avoid unnecessary duplication; however, some officials said that coordination presented additional challenges for time and staff resources. Given that combined federal and state efforts have recovered only a small portion of the estimated improper payments, continued monitoring of federal and state program integrity efforts in Medicaid will be an important means of assessing whether the current structure is effective. 
Because of the gap GAO identified between state and federal program integrity efforts in managed care, neither state nor federal entities are well positioned to identify improper payments made to managed care organizations (MCOs), nor are they able to ensure that MCOs are taking appropriate actions to identify, prevent, or discourage improper payments. Improving federal and state efforts to strengthen Medicaid managed care program integrity takes on greater urgency as states that choose to expand their Medicaid programs under the Patient Protection and Affordable Care Act are likely to do so with managed care arrangements, and will receive a 100 percent federal match for newly eligible individuals from 2014 through 2016. Unless CMS takes a larger role in holding states accountable, and provides guidance and support to states to ensure adequate program integrity efforts in Medicaid managed care, the gap between state and federal efforts to monitor managed care program integrity will leave a growing portion of federal Medicaid dollars vulnerable to improper payments. GAO recommends that CMS increase its oversight of program integrity efforts by requiring states to audit payments to and by MCOs; updating its guidance on Medicaid managed care program integrity; and providing states additional support for managed care oversight, such as audit assistance from existing contractors. In its comments, HHS asked for clarification on the first recommendation and concurred with the other two. In response, GAO clarified its first recommendation—that CMS take the added step of requiring states to audit the appropriateness of payments to and by MCOs to better ensure Medicaid program integrity. |
Although more than 70 federal agencies have foreign language needs, some of the largest programs are concentrated in the Army, the State Department, the Central Intelligence Agency, and the Federal Bureau of Investigation. Office of Personnel Management (OPM) records indicate that the government employs just under a thousand translators and interpreters in the job series reserved for this group. The government also employs tens of thousands of individuals who use foreign language skills in positions such as FBI special agents and legal attachés, State Department Foreign Service officers, and Department of Commerce Foreign Commercial Service (FCS) officers. For the four agencies we reviewed, a total of nearly 20,000 staff are employed in positions that require some foreign language proficiency. Agency management of these resources takes place against the backdrop of an emerging federal issue—strategic human capital management. The foreign language staffing and proficiency shortfalls we discuss in our report can be seen as part of a broader pattern of human capital weaknesses and poor workforce planning that has impacted the operations of agencies across the federal government. In fact, GAO recently designated human capital management as a governmentwide high-risk area on the basis of specific problem areas identified in prior GAO reports. For example, GAO previously testified that the Department of Defense faces looming shortages of intelligence analysts, computer programmers, and pilots. In a subsequent report on trends in federal employee retirements, we found that relatively large numbers of individuals in key math and science fields will be eligible to retire by the end of fiscal year 2006: These include physics (47 percent); chemistry (42 percent); computer specialists (30 percent); and electronics and electrical engineering (27 percent and 28 percent, respectively). 
In response to these risks, the administration, the Office of Management and Budget (OMB), OPM, and GAO have issued guidance on how agencies can begin the process of strategically managing their staffing resources. For example, OPM has developed a five-step workforce planning model that outlines the basic tenets of effective workforce planning. The president and OMB’s guidance stresses that agencies should seek to address shortages of skills by conducting thorough workforce analyses, by using existing personnel flexibilities available to federal agencies, and by identifying additional authorities or flexibilities they might need to remove current obstacles and barriers to effective workforce management. GAO guidance emphasizes the use of a self-assessment checklist for better aligning human capital with strategic planning and core business practices. Officials in the four agencies we reviewed reported varied types and degrees of foreign language shortages depending on the agency, job position, language, and skill level. They noted shortages of translators and interpreters and people with skills in specific languages, as well as a shortfall in proficiency level among people who use foreign language skills in their jobs. The Army’s greatest foreign language needs were for translators and interpreters, cryptologic linguists, and human intelligence collectors. The State Department has not filled all of its positions requiring foreign language skills. And, although the Foreign Commercial Service has relatively few positions that require foreign language proficiency, it had significant shortfalls in personnel with skills in six critical languages. While the FBI does not have a set number of positions for its special agent linguists, these agents must have some level of foreign language proficiency that they can use in conducting investigations. 
(When identified by language, FBI staffing and proficiency data are classified and are discussed in the classified report mentioned earlier.) While our report provides detailed staffing and proficiency shortfall data for four agencies, I would like to use the data we obtained for the U.S. Army to illustrate the nature and extent of some of these shortfalls. The Army provided us data on translator and interpreter positions for six languages it considers critical: Arabic, Korean, Mandarin Chinese, Persian-Farsi, Russian, and Spanish (our analysis excluded Spanish because the Army has a surplus of Spanish language translators and interpreters). As shown in table 1, the Army had authorization for 329 translator and interpreter positions for these five languages in fiscal year 2001 but only filled 183 of them, leaving a shortfall of 146 (44 percent). In addition to its needs for translators and interpreters, the Army also has a need for staff with applied language skills. We obtained detailed information on two key job series involving military intelligence—cryptologic linguists and human intelligence collectors. As shown in table 2, the Army had a shortfall of cryptologic linguists in two of the six foreign languages it viewed as most critical—Korean and Mandarin Chinese. Overall, there were 142 unfilled positions, which amounted to a 25 percent shortfall in cryptologic linguists in these two languages. The Army also had a shortfall of human intelligence collectors in five of the six foreign languages it viewed as most critical in this area—Arabic, Russian, Spanish, Korean, and Mandarin Chinese. Overall, there were 108 unfilled positions, which amounted to a 13 percent shortfall in these five languages. The greatest number of unfilled human intelligence collector positions was in Arabic, but the largest percentage shortfall was in Mandarin Chinese. Table 3 provides data on these shortfalls, by language. 
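The shortfall figures above are simple arithmetic on authorized versus filled positions. As a minimal illustration using the fiscal year 2001 translator and interpreter counts from table 1 (the helper function is our own construction, not the Army's):

```python
def shortfall(authorized, filled):
    """Return unfilled positions and the shortfall as a whole percentage."""
    unfilled = authorized - filled
    return unfilled, round(100 * unfilled / authorized)

# Army translator/interpreter positions in five critical languages, FY 2001
unfilled, pct = shortfall(authorized=329, filled=183)
print(unfilled, pct)  # 146 unfilled, a 44 percent shortfall
```

The same calculation applied to the cryptologic linguist and human intelligence collector counts yields the 25 and 13 percent shortfalls cited in tables 2 and 3.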
The shortages that agencies reported can have a significant impact on agency operations. Although it is sometimes difficult to link foreign language skills to a specific outcome or event, foreign language shortages have influenced some agency activities. Here are a few examples: The Army has noted that a lack of linguists is affecting its ability to conduct current and anticipated human and signal intelligence missions. As a result, the Army said that it does not have the linguistic capacity to support two concurrent major theaters of war. The need for Spanish speakers has been an issue in pursuing Florida health care fraud cases. The assistant U.S. attorney in Miami in charge of health care fraud investigations recently advised the FBI that his office would decline to prosecute health care fraud cases unless timely translations of Spanish conversations were available. This situation has important implications, since the Miami region has the nation’s largest ongoing health care fraud investigation. The FBI estimates that Medicare and Medicaid losses in the region are in excess of $3 billion. The FBI’s Los Angeles office has also cited a critical need for Spanish language specialists and language monitors for cases involving violent gang members. According to the Bureau, being able to target these gang members will save lives in Los Angeles but is contingent on the availability of Spanish linguists to assist with these investigations. The need for foreign language speakers has hindered State Department operations. The deputy director of the State Department's National Foreign Affairs Training Center recently testified on this topic. She said that shortfalls in foreign language proficiency have contributed to a lack of diplomatic readiness. As a result, the representation and advocacy of U.S. interests abroad has been less effective; U.S. 
exports, investments, and jobs have been lost; and the fight against international terrorism and drug trafficking has been weakened. Finally, the lack of translators has thwarted efforts to combat terrorism. For instance, the FBI has raised concern over the thousands of hours of audio tapes and pages of written material that have not been reviewed or translated due to a lack of qualified linguists. Our second objective was to examine federal agencies’ strategies to address these foreign language shortages. The agencies we reviewed are pursuing three general strategies to meet their foreign language needs. First, agencies are focusing on staff development by training staff in foreign languages, providing pay incentives for individuals using those skills, and ensuring an attractive career path for linguists or language-proficient employees. Second, agencies are making use of external resources. This effort can include contracting staff as needed; recruiting native or U.S.-trained language speakers; or drawing on the expertise of other agency staff, reservists, or retirees. Third, several agencies have begun to use technology to leverage limited staff resources, including developing databases of contract linguists, employing language translation software, and performing machine screening of collected data. Figure 1 provides an overview of these categories and related strategies. While these assorted efforts have had some success, current agency strategies have not fully met the need for some foreign language skills, as evidenced by the continuing staffing and proficiency shortfalls that each agency we reviewed faces.

Limited Progress Made on Workforce Planning

Our third objective was to analyze federal agencies’ efforts to implement an overall strategic workforce plan to address current and projected foreign language shortages. 
To help fill existing skills shortages, some agencies have begun to adopt a strategic approach to human capital management and workforce planning. As I mentioned earlier, OPM has issued a workforce planning model that illustrates the basic tenets of strategic workforce planning. We used this model to assess the relative maturity of workforce planning at the four agencies we reviewed. As shown in figure 2, this model suggests that agencies follow a five-step process that includes setting a strategic direction, documenting the size and nature of skills gaps, developing an action plan to address these shortages, implementing the plan, and evaluating implementation progress on an ongoing basis. This is a model that could be used to guide workforce planning efforts as they relate to other skills needed in the federal government such as math, science, and information technology. We found that the FBI has made an effort to address each of the five steps in OPM’s model. For instance, the FBI has instituted an action plan that links its foreign language program to the Bureau's strategic objectives and program goals. This action plan defines strategies, performance measures, responsible parties, and resources needed to address current and projected language shortages. We found that the FBI’s work in the foreign language area was supported by detailed reports from field offices that documented the Bureau’s needs. The FBI reviewed these reports along with workload statistics from its regional offices. FBI officials noted that implementation progress is routinely tracked and adjustments to the action plan are made as needed. In contrast, the other three agencies have yet to pursue this type of comprehensive strategic planning and had only completed some of the steps outlined in OPM’s planning model. 
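The five-step process described above can be written down as a simple checklist. The sketch below is our own construction (not an OPM artifact); it reports which steps of the model an agency has yet to address:

```python
# The five steps of OPM's workforce planning model, as described above
OPM_MODEL_STEPS = (
    "Set a strategic direction",
    "Document the size and nature of skills gaps",
    "Develop an action plan to address shortages",
    "Implement the plan",
    "Evaluate implementation progress on an ongoing basis",
)

def remaining_steps(completed):
    """Return the OPM-model steps an agency has not yet addressed."""
    return [step for step in OPM_MODEL_STEPS if step not in completed]

# An agency that has only documented its skills gaps (step 2)
todo = remaining_steps({"Document the size and nature of skills gaps"})
print(len(todo))  # 4 steps remain
```

Applying such a checklist agency by agency is essentially how we assessed the relative maturity of workforce planning at the four agencies reviewed.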
The Army has limited its efforts to developing a plan partially outlining a strategic direction and identifying its available supply and demand for staff with foreign language skills (addressing only steps 1 and 2 of the OPM model). The State Department has not yet set a strategic direction for its language program; however, the department has addressed step 2 in the workforce planning model through its annual survey of ambassadors regarding foreign language needs at their posts on a position-by-position basis. State has yet to develop an action plan and the related implementation and monitoring steps described in OPM’s model. Finally, the status of the Foreign Commercial Service’s language program closely mirrored the situation we found at the State Department. One difference, however, is that the agency surveys senior officers regarding a post’s foreign language needs every 3 years instead of annually. Another difference is that FCS officials indicated that they have begun a workforce planning initiative that is designed to address the key components outlined in the OPM model. In closing, I would like to note that foreign language shortages have developed over a number of years. It will take time, perhaps years, to overcome this problem. Effective human capital management and workforce planning, however, offer a reasonable approach to resolving such long-standing problems. Mr. Chairman and members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions the Subcommittee members may have. | Federal agencies' foreign language needs have increased during the past decade because of increasing globalization and the changing security environment. At the same time, agencies have seen significant reductions-in-force and no-growth or limited-growth environments during the last decade. 
As a result, some agencies now confront an aging core of language-capable staff while recruiting and retaining qualified new staff in an increasingly competitive job market. The four agencies GAO reviewed reported shortages of translators and interpreters and other staff, such as diplomats and intelligence specialists, with foreign language skills. These shortfalls varied depending on the agency, job position, language, and skill level. The agencies reported using a range of strategies to address their staffing shortfalls, such as providing employees with language training and pay incentives, recruiting employees with foreign language skills, hiring contractors, or taking advantage of information technology. One of the four agencies has adopted a strategic approach to its workforce planning efforts. In contrast, the other three agencies have yet to pursue overall strategic planning in this area. |
The VA loan program is an entitlement program for eligible veterans, service members, reservists, and surviving spouses. The program provides single-family, residential mortgage loan guarantees for purchasing, constructing, repairing, or refinancing homes. The loan guaranty provides private-sector mortgage lenders, such as banks, thrifts, or mortgage companies, with a partial guaranty on mortgage loans when these loans go into foreclosure. In exchange for the guaranty, VA encourages lenders to offer loans to veterans on terms more favorable than those available with conventional financing—for instance, requiring a small down payment, or none at all. The VA loan guaranty program was initially established in 1944 as an adjustment benefit for veterans who had served in the Armed Forces during World War II. Its objectives have evolved over time. The main objective of the current program is to provide a long-range housing benefit to veterans that will help them finance the purchase of homes on favorable loan terms and retain ownership of their homes. Over the years, the VA loan guaranty program has been amended in an effort to increase home ownership among veterans. These amendments have extended eligibility to all parties on active duty or honorably discharged from military service, increased the maximum loan term and guaranty amount, and allowed borrowers and lenders to negotiate loan interest rates. The basic features of the VA loan guaranty program are set by law. Currently, the maximum amount of a guaranty or entitlement is $50,750. VA places no limits on the size of loans, but lenders generally limit the loan amount to $203,000, owing to secondary mortgage market requirements. In exchange for protection against financial losses when VA-guaranteed loans end in foreclosure, lenders are encouraged to provide eligible borrowers with loans that do not require a down payment. Lenders originating VA guaranteed mortgages are subject to VA’s underwriting standards. 
The standards are meant to ensure that borrowers have the ability to pay and are creditworthy. The interest rate on VA-guaranteed loans can be negotiated based on prevailing mortgage rates. Borrowers also have obligations to VA. They must meet VA's eligibility requirements and pay VA funding fees of 1.25 to 2.75 percent of the loan amount, depending on the size of the down payment and the type of military service completed. Veterans disabled while in service are exempt from payment of the funding fee. Appendix II provides more detailed background information on VA's loan guaranty program. In addition to helping borrowers finance the purchase of homes, the VA loan program helps them retain ownership of their homes by providing assistance to those in default through its supplemental servicing program. The supplemental servicing performed by VA's loan servicing representatives is a unique feature of VA's loan guaranty program. Other federally insured loan programs do not provide such servicing. For example, HUD delegates all servicing responsibilities to the lenders in its program for FHA-insured loans. FHA lenders are required by law to engage in loss mitigation action to provide alternatives to foreclosure. See appendix III for a comparison of the VA and HUD servicing programs. VA provides supplemental loan servicing through its nine regional loan centers in Atlanta, Cleveland, Denver, Houston, Manchester, Roanoke, Phoenix, St. Paul, and St. Petersburg. Prior to 1996, VA's 45 regional offices administered loans, provided full-scale loan servicing, processed claims, and handled property management. However, according to VA officials, the agency decided to consolidate loan processing, servicing, and claims functions into the nine regional loan centers after a comprehensive review of its loan guaranty program. 
The consolidation, which began in 1997 and was completed in June 2000, was intended to improve services to veterans and reduce costs by increasing efficiency and economies of scale. The 45 regional offices provide services related to property appraisal and foreclosed properties, as well as services related to other veterans' programs. VA's loan servicing representatives conduct work using the Loan Service and Claims (LS&C) computer system. The LS&C system was implemented throughout the regional loan centers during August and September 1999. The LS&C system is an on-line, production-oriented system intended to help loan servicing representatives provide better supplemental servicing. The LS&C system also was intended to help VA reduce costs by allowing servicing personnel to service loans rather than spend time entering basic data and status updates into their old batch-oriented system. While lenders have primary responsibility for servicing delinquent loans, VA performs its own supplemental servicing of defaulted loans to ensure that each veteran-borrower is afforded the maximum opportunity to continue as a home owner during periods of temporary financial distress. VA's supplemental servicing is intended to protect the interests of the veteran and the government when the lender has not been able to arrange for the reinstatement of a delinquent loan. VA's loan servicing representatives are to work with veterans and sometimes lenders to arrange or assist in arranging a number of possible alternatives to foreclosure. 
These alternatives include encouraging lenders to extend reasonable forbearance, encouraging lenders to modify the terms of the original loan agreement, purchasing the defaulted loan from the lender and then reamortizing the loan to eliminate a delinquency, encouraging the private sale of a property, arranging a compromise claim payment to the lender if an offer to purchase the property is received but the proceeds will not be enough to pay off the loan, and accepting a deed in lieu of foreclosure. Private lenders that hold loans guaranteed by VA are responsible for servicing them. Their loan servicing responsibilities generally include collecting monthly mortgage payments, maintaining loan records, and making collection efforts for delinquent loans. According to VA’s Servicing Guide, a lender’s delinquent loan servicing system must include (1) an accounting system that promptly alerts servicing personnel when a loan becomes delinquent, (2) staff trained in servicing loans and counseling delinquent borrowers, (3) procedural guidelines for analyzing each delinquency, and (4) a quality-control system for managing and reporting collection efforts. When a borrower’s loan payments are delinquent, the lender is responsible for contacting the borrower, determining the reason for the delinquency, and making arrangements for repayment of the delinquency, if possible. VA requires lenders to take several steps to resolve the problem. First, a lender must provide written notice to borrowers requesting immediate payment if a loan installment has not been received within 17 days of the due date. This notice must be mailed within 3 days and must include the amount of any late charges due. Second, the lender must try to contact the borrower by telephone to determine why the borrower has not made the payment and to make arrangements for resolving the delinquency. 
Third, if the borrower has not made a payment within 30 days after the payment was due and cannot be contacted by telephone, VA requires the lender to send a personal letter to the borrower. Fourth, if the lender cannot work out arrangements for repayment by the time that three installments are due, the default is to be reported to VA. The lender must send a Notice of Default (NOD) to VA within 45 days of the third missed payment. This notice must explain why the loan has gone into default and provide a summary of the lender's servicing efforts. If the lender does not notify VA within 45 days of the borrower's third missed payment, VA may adjust any claim under the guaranty. Figure 1 provides an example of a time line showing a lender's servicing responsibilities for a delinquent loan. For example, if a borrower misses a payment on January 1, the lender must send a delinquency notice to the borrower by January 20. If the lender has not received a payment by March 1—the third missed loan payment—the lender must send an NOD to VA by April 15. VA's policies require that its loan servicing representatives begin supplemental servicing immediately after receiving an NOD from the lender. VA loan servicing representatives are to closely review the lender's servicing of the account and follow up by contacting the borrower. Based on the information provided by the borrower regarding present and future income, employment status, and other relevant case-specific facts, VA loan servicing representatives may attempt to arrange or assist in arranging one of the following alternatives for borrowers: Forbearance: VA's policy is to encourage lenders to extend reasonable forbearance when a borrower is unable to begin making payments immediately. VA loan servicing representatives may intercede with the lender on behalf of the veteran to work out a plan for forbearance and repayment that is acceptable to both parties. 
Payments are allowed to remain delinquent for a reasonable amount of time—usually not more than 12 months. After that time, the borrower reinstates the loan either by making a lump-sum payment or by increasing monthly payments. Modification: In some cases, VA also encourages lenders to modify the terms of the original loan agreement—e.g., by extending the loan period. Modifications can succeed when the borrower cannot maintain the original monthly payments or pay off delinquencies, but can keep the loan current on less stringent terms. VA loan servicing representatives may also intercede with the lender on behalf of the borrower to help arrange modification agreements. Refunding: When a lender is not willing to extend further forbearance or modify the terms of the loan, but the borrower has the ability—or will have the ability in the near future—to make payments, VA may refund the loan. In these cases, VA purchases the defaulted loan from the lender. When VA refunds a loan, the loan becomes a part of VA's direct portfolio and is serviced by VA's loan portfolio service contractor. VA may reamortize the loan to eliminate a delinquency and reduce the interest rate. The law giving VA this authority does not vest borrowers with any right to have their loans refunded or to apply for refunding. Nevertheless, VA's policy is to consider in every case before foreclosure whether refunding is in the best interests of the veteran and the government. Private sale of property: When a borrower has no realistic prospects for maintaining even reduced mortgage payments, VA encourages the private sale of property to avoid foreclosure. Counseling by VA loan servicing representatives about the benefits of a private sale may allow a borrower to salvage any equity in the home and reduce or eliminate losses to all interested parties. When the borrower has equity in the home, VA's policy is to encourage lenders to grant the borrower reasonable forbearance to permit a sale. 
Compromise claim: In some cases, a borrower in default may not be able to arrange a private sale because the value of the property is less than the total amount owed on the loan. This might be the case, for example, in areas with depressed housing markets. In such a situation, VA may consider providing a “compromise claim” payment to the lender if an offer to purchase the property is received, but the proceeds will not be sufficient to pay off the loan. For example, if a veteran finds a buyer who will purchase the property for its fair market value and the proceeds of the sale are applied to the existing indebtedness, a compromise agreement would enable VA to pay a claim to the lender to cover the difference between the sale price and the amount remaining on the loan. VA is to consider this if the difference between the loan payoff amount and the purchase price is less than the amount of VA’s maximum guaranty. Deed in lieu of foreclosure: When a borrower is unable to resolve a default, refunding is not appropriate, and a private sale cannot be arranged, VA may consider accepting a deed in lieu of foreclosure. VA will accept a deed if it is in the best interests of both the borrower and VA. Accepting a voluntary deed saves on foreclosure costs, cuts down on possible decreases in the value of the security, avoids having a foreclosure on the borrower’s credit record, and reduces or eliminates the amount of the borrower’s indebtedness. However, obtaining a deed must be legally feasible, and the borrower must be willing to cooperate. A deed in lieu will usually not be accepted if there are any junior liens on the property or if the claim amount under the deed in lieu is more than under foreclosure. Figure 2 provides a simplified example of the decisionmaking process VA loan servicing representatives use when considering alternatives to foreclosure. 
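The compromise-claim test described above can be expressed in a few lines of code. The sketch below is an illustrative simplification, not VA's actual claim calculation: the dollar figures are hypothetical, and real determinations involve additional factors (such as sale costs and the specific guaranty on the loan) that are omitted here.

```python
def compromise_claim(payoff_amount, sale_price, max_guaranty=50_750):
    """Return the shortfall VA would pay the lender under a compromise
    claim, or None if a compromise would not be considered.

    Per the rule described above, VA considers a compromise claim only
    when the sale proceeds fall short of the loan payoff by less than
    VA's maximum guaranty (currently $50,750).
    """
    shortfall = payoff_amount - sale_price
    if shortfall <= 0:
        return None  # proceeds cover the loan; no claim is needed
    if shortfall >= max_guaranty:
        return None  # shortfall too large for a compromise claim
    return shortfall

# Hypothetical case: $95,000 still owed; buyer offers $88,000
# (assumed fair market value in a depressed market).
print(compromise_claim(95_000, 88_000))   # 7000: VA pays the lender the difference
print(compromise_claim(150_000, 90_000))  # None: $60,000 exceeds the maximum guaranty
```

In the first case a compromise agreement would let the veteran sell at fair market value while VA pays the $7,000 gap to the lender, avoiding a foreclosure on the borrower's record.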
VA or the lender may implement any of the alternatives to foreclosure discussed above, except that only VA may implement refunding. Additionally, VA must approve in advance lender-initiated compromise claims and deeds in lieu of foreclosure, unless a lender participates in VA's Servicer Loss Mitigation Program (SLMP). Participation in SLMP allows lenders not only to initiate but also to perform most of the analyses involved in approving compromise claims or deeds in lieu of foreclosure. VA pays lenders a fee for processing such alternatives to foreclosure. The purpose of SLMP is to (1) reduce the cost of the loan guaranty program to the taxpayer by decreasing the length of time required to implement these alternatives, (2) reduce the workload of VA's regional offices by paying authorized servicers to perform the analysis and approval functions usually completed by VA, and (3) increase the number of these alternatives used by providing servicers with an incentive to consider a compromise sale or deed in lieu of foreclosure at earlier stages of default, when these alternatives are more often feasible. Lenders must apply to VA to obtain approval to participate in this program. VA officials told us that approximately 130 lending institutions are currently participating in SLMP and that these institutions process most of the compromise claims and deeds in lieu of foreclosure. If it is not feasible for VA or the lender to process any of the alternatives to foreclosure discussed above, the lender will generally proceed with foreclosure. The practices of the three regional loan centers we visited generally conform to VA policies and procedures. VA's 351 loan servicing representatives worked with veterans and lenders to complete more than 10,500 alternatives to foreclosure in fiscal year 2000. However, the operations of VA's regional loan centers were temporarily affected by consolidations in certain regions. 
The practices of the three regional loan centers we visited generally conform to VA policies and procedures. The loan servicing representatives follow standard VA policies and procedural manuals for supplemental loan servicing and conduct work using VA's LS&C computer system, which is also standard across the regional offices. These standard policies, manuals, and computer system serve to create uniformity among the nine regional loan centers. The regional loan centers we visited also had procedures in place to ensure that loan servicing representatives comply with VA's policies and procedures. These quality control procedures are also standard across all of VA's regional loan centers. The primary objective of VA's quality control is to promote and maintain a high level of quality and consistency in services and end products. VA's Statistical Quality Control (SQC) reviews are to be conducted on a monthly basis. Cases are to be selected randomly and reviewed for compliance with VA's quality criteria. VA's procedures contain specific guidance and criteria for reviewing each case. VA uses the SQC index to measure the number of appropriate actions found during SQC reviews, calculated as a percentage of total actions reviewed. This index is provided in VA's performance report and is intended to reflect the accuracy of VA processing, which can affect both customer satisfaction and VA's efficiency. Over the past 5 years, VA has received an average of nearly 122,000 NODs per year. A large majority—on average, nearly 70 percent—of defaulted loans are reinstated without VA intervention. VA's loan servicing representatives have been able to implement VA's alternatives to foreclosure in an average of about 10 percent of the cases in which loans default annually. On average, another 20 percent of defaulted VA loans have gone to foreclosure each year. (See fig. 3.) Over the past 5 years, VA has completed an average of about 12,400 alternatives to foreclosure each year. 
The most common alternative VA's loan servicing representatives implement is what VA calls a successful intervention, in which VA involvement leads the lender either to grant forbearance or to modify the delinquent loan. These successful interventions account for an average of 42 percent of all alternatives to foreclosure implemented by VA over the past 5 years. The next most frequent alternative implemented was the compromise claim, followed by refunding, and then deed in lieu of foreclosure. (See fig. 4.) Appendix V provides additional data on supplemental loan servicing by regional loan centers. At the end of fiscal year 2000, VA's nine regional loan centers had a total of 351 full-time employees working specifically on the loan service and claims functions. The Cleveland Regional Loan Center was the largest, with a total of 52 employees, and the Manchester center was the smallest, with 12 employees. Employees at the nine regional loan centers handled an average of 294 NODs each during fiscal year 2000. The Atlanta Regional Loan Center had the highest number—361 NODs per employee. The Cleveland center had the lowest number—237 NODs per employee. Figure 5 shows the number of loan servicing employees and the number of NODs per employee in fiscal year 2000 at each regional loan center. It also shows the state jurisdictions serviced by each regional loan center, as well as the year of each center's consolidation. Both the number of employees and the average number of NODs they handle have varied over the years, mostly because of the consolidation of the regional loan centers. In fiscal year 1996, before the consolidation of the regional loan centers, VA had a total of 430 employees performing loan service and claims functions. In fiscal year 2000, after the consolidation was completed, that number fell to 351—a decrease of approximately 18 percent from 1996. The average number of NODs per employee rose from 292 in fiscal year 1996 to 397 in fiscal year 1998. 
However, in fiscal year 2000, the number of NODs per employee dropped to 294, about the level prior to consolidation in 1996. (See fig. 6.) As figure 6 indicates, consolidation ultimately reduced the number of employees handling loan servicing and claims, although the average number of NODs per employee remained about the same. However, the consolidation left some offices short-staffed for a period of time, and this temporarily affected service. For example, officials at the St. Petersburg Regional Loan Center told us that it took their center nearly a year to catch up with the backlog of loan cases, some of which were transferred from other regional loan offices. VA officials in St. Petersburg also told us that they expected their office to be permanently closed, and they completely stopped servicing loans for a period of time as they prepared for the move. In addition, officials at the Phoenix Regional Loan Center told us that at one point during consolidation, two of its loan servicing representatives were responsible for servicing approximately 2,300 defaulted loans—six times the workload considered reasonable—because few employees of the closed offices were willing to relocate to Phoenix after the consolidation. However, we have since learned that the Phoenix Regional Loan Center, which was the last to complete its consolidation in July 2000, is now almost fully staffed and has achieved a reasonable number of NODs per employee. The Phoenix center, however, continues to have a large number of relatively new loan servicing representatives, and it will take time for them to be fully trained. While the consolidation helped to centralize the loan servicing function, each regional loan center we visited still had a high degree of administrative autonomy from the VA headquarters in Washington, D.C. As a result, administrative practices vary somewhat among centers. For example, the St. 
Petersburg center management told us that they follow a “case management” approach to supplemental loan servicing. This center has teams that are responsible for all aspects of loan administration—from servicing to processing foreclosures. Teams at the Phoenix center, however, are organized more along the lines of a functional structure, where each team is responsible for a particular loan administration function. Additionally, teams in various regional loan centers may have different internal management structures. For example, managers in the St. Petersburg center told us that their five loan servicing teams operate autonomously. Each team has “empowered” loan servicing representatives who rotate within the team and serve as the team leader. These team leaders serve as a focal point for the team and review the work of other team members. They are also empowered to approve all alternatives to foreclosure without further supervisory approval. This was not the case, for example, at the Cleveland center. Additionally, the management of the St. Petersburg center told us that the teams, rather than individuals, are responsible for meeting internal performance goals. They said the teams have become competitive among themselves and that this has improved performance. VA headquarters managers told us that they plan to complete a comprehensive review of their loan servicing program in the near future that will include a review of such administrative practices at the regional loan centers. VA's ability to effectively manage its supplemental servicing program is hindered by a lack of meaningful performance measures and useful and timely management reports. VA's foreclosure avoidance through servicing (FATS) ratio has not been a meaningful measure of VA's supplemental servicing performance. 
The shortcomings of this measure include its (1) insensitivity to the quality of loan servicing, (2) inability to account for regional differences in economic conditions, and (3) inability to reflect the ultimate disposition of a particular loan. In addition, VA does not have a meaningful performance measure to account for the costs associated with alternatives to foreclosure compared with foreclosure. Moreover, VA's computer system has not been able to generate useful and timely management reports that regional loan center managers and VA's headquarters staff can use in managing their supplemental servicing program. During our review, we also found that VA could not efficiently generate reliable aggregate data on its supplemental servicing program. The FATS ratio is equal to the number of cases resolved through direct VA intervention, divided by this number plus foreclosures. The total number of cases resolved through direct VA intervention is the sum of all cases involving any of the alternatives to foreclosure. According to VA's fiscal year 2001 Performance Plan, VA has set a goal of raising the FATS ratio to 40 percent. This would mean that VA's interventions helped 40 percent of veterans facing foreclosure resolve their defaulted loans using one of VA's alternatives to foreclosure. In fiscal year 2000, the FATS ratio was 30 percent. Before fiscal year 1999, VA calculated the FATS ratio by a different method, weighting the various alternatives to account for the difficulty of implementing them and the benefits they offered. After a review of the FATS ratio in September 1999, VA officials said they decided to drop the weighting system because it encouraged the use of alternatives that may not have been the best choice and distorted the number of actual interventions taken. To present comparable data over time, we calculated the unweighted FATS ratio—the measure VA currently uses—based on the aggregate data VA provided to us. 
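The unweighted FATS calculation is simple to express in code. In the sketch below, the roughly 10,500-alternative total and the resulting 30 percent ratio match the fiscal year 2000 figures cited in this report, but the breakdown by alternative and the foreclosure count are illustrative assumptions, not published VA data.

```python
def fats_ratio(alternatives_completed, foreclosures):
    """Unweighted FATS ratio: cases resolved through direct VA
    intervention, divided by that number plus foreclosures."""
    resolved = sum(alternatives_completed.values())
    return resolved / (resolved + foreclosures)

# Illustrative fiscal year 2000 counts. The breakdown is assumed;
# only the 10,500 total and the 30 percent result match figures
# cited in this report.
fy2000 = {
    "successful interventions": 5_200,
    "compromise claims": 3_100,
    "refundings": 1_300,
    "deeds in lieu of foreclosure": 900,
}
print(f"FATS ratio: {fats_ratio(fy2000, foreclosures=24_500):.0%}")  # FATS ratio: 30%
```

Note that the numerator counts servicing events, not final outcomes, which is the source of the third shortcoming discussed above.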
Figure 7 shows the nationwide FATS ratio for fiscal years 1996 through 2000. Figure 8 shows the FATS ratio at each of VA's regional loan centers in fiscal year 2000. While the FATS ratio is intended to reflect the level of activity performed by VA on behalf of veterans, it presents a number of problems. First, it is not sufficiently sensitive to changes in servicing levels, and thus it has not varied much over time. In some cases, loan servicing difficulties associated with the regional office consolidation produced no observable change in the FATS ratio at the time the servicing was affected. For example, when the St. Petersburg Regional Loan Center stopped servicing loans, the impact on the FATS ratio appeared to be minimal. However, the ratio is actually lower for fiscal year 2000 than for the period that includes the consolidation. Representatives from the Phoenix center told us that they had similar problems during the regional office consolidation. (Appendix V provides specific information on the FATS ratio at each regional loan center over the past 5 years.) A VA headquarters official said that processing alternatives to foreclosure requires a long time, and such a time lag could allow VA to take credit for loan servicing provided much earlier, evening out the FATS ratio over time. Additionally, other factors that are unrelated to the actual performance of VA loan servicing representatives may also affect the FATS ratio. For example, when lenders participate in the SLMP program, VA loan servicing representatives must provide the lenders with a determination of insolubility, and because of this involvement, these alternatives are still counted in the FATS ratio. Second, the FATS ratio does not account for regional differences in economic conditions, although regional economic conditions may affect the ability of loan servicing representatives to implement various alternatives to foreclosure. 
For example, according to VA regional loan center officials, recent economic conditions in southern California resulted in lower home prices, making it nearly impossible for loan servicing representatives to arrange compromise claims. This occurs because decreases in home prices increase the amount needed to pay a claim, and VA will not offer a compromise claim if the amount of the payout under the compromise claim is greater than the claim under a foreclosure. According to VA documents, during VA's September 1999 review of the FATS ratio, two regional loan center directors expressed concern about using the FATS ratio as a performance measure, primarily because they believed that economic factors could severely affect it. For instance, if there is a substantial increase in the number of foreclosures, the directors maintained that even a loan center with experienced, productive loan servicing representatives might not be able to significantly raise the number of alternatives to foreclosure counted in the FATS numerator. In this case, the FATS ratio would decline. VA officials said that when VA managers look at the FATS ratio for individual regional loan centers, local teams, or individual loan servicing representatives, they must take into account the local economic conditions that affect performance, as well as other factors such as staffing and training levels. We note, however, that VA does not have a systematic method to account for the impact of regional economic conditions or other such factors. Third, the FATS ratio does not take into account the ultimate disposition of a particular loan. It accounts only for individual servicing events, so a loan that had a successful intervention at one point could ultimately end in foreclosure. VA provided data on previous interventions on loans that eventually ended in foreclosure, and the percentage appeared to be small—an average of 1.6 percent over the past 5 years. 
Nevertheless, if this number were to increase in the future for any reason, this consideration may be important when reviewing the performance of regional loan centers. In other words, the benefit to the veteran from an intervention depends on how long the intervention keeps the veteran in the home. In addition, the FATS ratio is intended to measure the benefits of VA's loan servicing program but omits another important component: cost reduction. In fact, VA officials told us that they have not tracked the costs associated with the various alternatives to foreclosure. To provide a very broad estimate of the cost-effectiveness of VA's supplemental servicing, VA officials told us that they multiply the average claim paid by the number of cases in which VA intervention prevents foreclosure. In the last few fiscal years, VA officials said they have made claim payments averaging around $19,000 and arranged some 6,000 successful interventions. They concluded, based on these rough calculations, that the government had saved more than $100 million by avoiding the payment of claims in these cases, even after personnel and overhead costs were factored in. VA officials said that their previous computer system did not have reports designed for tracking average claims paid on deeds in lieu of foreclosure, and the amount paid for compromise claims was not captured within the system. Officials said that the LS&C computer system tracks these amounts, but it is still undergoing development, and reports are still being developed. VA's FATS ratio reflects the level of activity performed by VA on behalf of veterans. However, VA does not have an effective way to measure the cost savings its supplemental servicing program generates. Other agencies, such as FHA, do have such a measure. FHA, for example, calculates a lender performance score based, in part, on the lender's success in holding down costs to FHA while reinstating or terminating defaulted mortgages. 
FHA effectively creates a benchmark by comparing the performance of each FHA lender with the performance of other lenders in the same jurisdiction. Although VA cannot use FHA's benchmark, because the unit of observation for VA is the regional loan center, VA could create benchmarks that account for variations in economic conditions; legal requirements, such as different state foreclosure laws; and other factors that vary among its nine regions. Once it is fully implemented, the LS&C system appears to give VA the potential to significantly improve its ability to assess the costs and benefits of its supplemental servicing program and to improve the program's management. Over time, as VA's LS&C computer system accumulates extensive data on defaulted loans, the system could be used to create measures for data items such as the average cost of the various alternatives to foreclosure. The system could also be used to create benchmarks. For example, VA could use its database to analyze how trends in alternatives to foreclosure and foreclosures over time and across regions are related to economic conditions in those regions. Economic conditions in a region at each point in time can be measured by variables such as the unemployment rate. In addition, we have identified another potentially useful variable to establish benchmarks. The Office of Federal Housing Enterprise Oversight, the safety and soundness regulator of the two government-sponsored housing enterprises, Fannie Mae and Freddie Mac, has created a quarterly housing price index for regions, states, and metropolitan areas. With such resources, VA could take into account, for example, how a decline in regional housing prices contributes to higher VA costs, rather than necessarily attributing higher costs strictly to the performance of the regional loan center's supplemental servicing activity. 
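One way to operationalize such a benchmark is to fit each region's outcome (for example, its foreclosure rate) against a regional economic variable and then judge a center by its residual from the fitted line rather than by its raw rate. The sketch below is a minimal illustration of that idea, using entirely made-up regional data and a single predictor (annual change in a regional house price index); a real benchmark would draw on VA's own data and likely several covariates.

```python
# Hypothetical data: (annual house-price change in percent,
# share of defaulted loans ending in foreclosure in percent).
regions = {
    "Atlanta":        (4.0, 18.0),
    "Phoenix":        (6.0, 16.0),
    "Denver":         (5.0, 17.5),
    "Houston":        (1.0, 22.0),
    "St. Petersburg": (3.0, 19.0),
}

def fit_line(points):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = fit_line(list(regions.values()))

# A positive residual means a center's foreclosure rate is higher than
# its region's economic conditions alone would predict.
for name, (hpi_change, rate) in regions.items():
    benchmark = a + b * hpi_change
    print(f"{name:14s} actual {rate:4.1f}%  benchmark {benchmark:4.1f}%  "
          f"residual {rate - benchmark:+.1f}")
```

Under this design, a center in a region with falling home prices is compared with what the model predicts for such a region, so higher costs driven by local conditions are not attributed strictly to the center's servicing performance.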
To date, regional loan center managers and headquarters staff have not had useful and timely reports that would help in managing the supplemental servicing program. Managers at each of the three regional loan centers we visited told us that since VA implemented the LS&C system, such management reports have not been available. They said, however, that VA headquarters staff had been working with the regions to reach a consensus on the types of management reports that would be most useful. Regional loan center managers also described problems with the quality of the data generated by the LS&C system. They said that the LS&C system had been undercounting the number of alternatives to foreclosure completed. For example, a Phoenix manager said that the regional loan center was not credited for about 30 compromise claims processed by one service representative. VA headquarters asked regional office managers to collect information on loan servicing manually from November 2000 through February 2001 for comparison with data generated from the LS&C system. We also found that VA's computer system could not efficiently generate timely and reliable aggregate data. During this engagement, we requested that VA provide us with basic data on its supplemental servicing program. VA took more than 4 months to provide the data, and some data could not be provided within our time frame. We identified numerous inconsistencies in the data VA initially provided to us and had to request revisions even to basic data on the numbers of alternatives to foreclosure processed and the FATS ratio. VA headquarters management said that the lack of a reporting capability has been the largest single issue that it has had to address in the LS&C system. VA headquarters management told us that the decision to implement the system in September 1999 was made with assurances from VA's Office of Information Technology that a reporting mechanism would be in place within 3 months of implementation. 
This deadline passed with no reporting system. Six months after implementation, a short-term reporting mechanism was developed that extracted data from the production database, reformatted it as a legacy Liquidation and Claims System master record, and then used legacy report programs to generate reports. VA officials said that this effort resulted in some inaccurate reports, which caused regional loan center managers to be skeptical about the results of all of the reports. By the fall of 2000, about 1 year after implementing the LS&C system, VA officials told us the LS&C reporting mechanism became available. However, VA officials said they are still in the process of feeding data into the data warehouse. Officials said they are also working on getting business language data definitions and calculations defined, written, published, and concurred upon. These data definitions and calculations, once agreed upon and implemented, would help ensure consistency in the way regional loan centers account for their work. VA officials told us they expect to have some reports in place by the end of April 2001. VA’s supplemental servicing program seeks to help veterans when they cannot pay their mortgages. The program offers a range of alternatives to foreclosure that are intended to protect the interests of the veteran and the government. VA recently completed the consolidation of 45 regional offices into 9 regional loan centers that provide supplemental servicing. This consolidation resulted in some temporary disruptions in service, but the centers are now fully operational. VA’s ability to effectively manage its supplemental servicing program has been affected by two issues. First, VA does not have meaningful performance measures that allow it to accurately assess the effectiveness of its program. 
The full implementation of the LS&C computer system appears to provide the potential for VA to significantly improve its ability to assess both the benefits and costs of its supplemental servicing program. This system could be used to create benchmarks that would help mitigate some of the shortcomings of the FATS ratio. It could also be used to create a measure of cost savings. While not the primary goal of the program, costs savings should be a consideration in the program’s management. Other agencies, such as HUD’s FHA loan program, have such a measure. Second, VA’s computer system has not been able to generate useful and timely management reports that regional loan center managers and headquarters staff can use in managing their supplemental loan servicing program. VA managers have acknowledged that this has been the largest, single issue that they have had to address with the LS&C system and said they are working to correct this problem. However, there have been numerous delays in the development of management reports that have affected their ability to effectively manage their supplemental servicing program. We recommend that the Secretary of the Veterans Affairs direct VA’s Under Secretary for Benefits to develop meaningful performance measures for the nine regional loan centers. The overall framework could include creating performance benchmarks that take into account the impact of economic conditions and legal requirements on VA’s ability to reduce the number of foreclosures while holding down costs. The overall framework could also take into account the benefits of alternatives to foreclosure for veteran-borrowers, perhaps using a FATS ratio in conjunction with performance benchmarks. We also recommend that the Secretary of the Veterans Affairs direct VA’s Under Secretary for Benefits to take action to ensure that improvements are made in a timely fashion to its computer system so that it can generate accurate and useful management reports. 
These actions would include current initiatives to provide consistent business definitions of the alternatives to foreclosure. In addition, to implement the first recommendation, the actions would include compilation of data—such as average costs of alternatives to foreclosure and house price movements in the region—that could be used to assess benefits from supplemental servicing and to create benchmarks for regional loan center performance. The Secretary of Veterans Affairs (the VA Secretary) provided written comments on a draft of this report, and these comments are reprinted in appendix VI. VA and HUD also provided technical comments, which we incorporated into this report where appropriate. In particular, we clarified the report and our recommendation to reflect that VA’s LS&C system itself does not produce management reports, but that data from the LS&C system are entered into a data warehouse from which reports are produced. The VA Secretary agreed with our recommendation that VA improve its computer system so that it can generate accurate and useful management reports. He stated that VA is strongly committed to this effort and discussed a number of steps VA is taking to improve the system that should lead to improved management reports. The VA Secretary disagreed with our recommendation that VA develop meaningful performance measures, including considerations of cost savings, for the nine regional loan centers. He said that the FATS ratio is a meaningful national measure of supplemental servicing performance and that VA did a recent study confirming this. He also said VA has concluded that it is not wise to include cost savings in a performance measure that is intended to reflect assistance to veterans. As demonstrated in our report, while the FATS ratio is intended to reflect the level of activity performed by VA on behalf of veterans, it has a number of shortcomings. 
The full implementation of the LS&C computer system, however, appears to provide the potential for VA to improve upon this measure of benefits. An improved performance measure with appropriate benchmarks could provide a systematic way for regional managers to assess and improve the outcome of their work in providing benefits to veterans. Additionally, VA’s policy states that supplemental servicing is intended to protect the interests of the veteran and the government. While not the primary goal of the program, costs and cost savings—or protecting the interests of the government—should be a consideration in the program management. We have clarified the language in the report to reflect that the FATS ratio should not necessarily be adjusted to account for costs or cost savings, but rather that some more accurate measure of costs and cost savings should be developed and considered. The full implementation of the LS&C system also appears to provide the potential for VA to develop such a measure. We will send copies of this report to the Chairmen and Ranking Minority Members, House Committee on Veterans’ Affairs and Subcommittee on Benefits, House Committee on Veterans’ Affairs; Chairman and Ranking Member, Senate Veterans’ Affairs Committee; Secretary, Department of Veterans Affairs; and other interested parties. We will also make copies available to others upon request. Please contact me or William B. Shear at (202) 512-8678 if you or your staff have any questions. Major contributors to this report are listed in appendix VII. The objectives of this report are to (1) describe the Department of Veterans Affair’s (VA) policies and procedures for servicing troubled home loans, (2) assess VA’s implementation of its policies and procedures for servicing troubled home loans, and (3) analyze VA’s measures for assessing the effectiveness of its program for servicing troubled loans and ability to generate meaningful data for overseeing and improving loan servicing. 
To describe VA’s policies for servicing troubled home loans, we reviewed VA manuals and documents and interviewed officials from the VA, the Mortgage Bankers Association, three veteran’s service organizations, and the Reni Mae Corporation. We reviewed materials provided to us by the Reni Mae Corporation related to a proposal to assist VA in helping veterans who faced possible foreclosure. For purposes of comparison, we interviewed officials from the Department of Housing and Urban Development (HUD) about the agency’s policies for servicing Federal Housing Administration (FHA) insured, single-family residential mortgage loans. We did not assess these policies. To assess VA’s implementation of policies for servicing troubled home loans, we visited regional loan centers in Cleveland, OH; St. Petersburg, FL; and Phoenix, AZ. We obtained information on their supplemental servicing activities and interviewed VA officials, including loan servicing representatives. We also reviewed VA’s quality-control procedures. To analyze both VA’s measures for assessing the effectiveness of its supplemental servicing program and the agency’s ability to generate meaningful data that can be used in its overseeing and improving its loan servicing program, we reviewed VA’s performance measures and requested data on defaults, foreclosures, and alternatives to foreclosure for the nine VA regional loan centers. In addition to analyzing this data, we interviewed VA regional loan center and Washington headquarters officials about data collection and performance measures for the supplemental servicing program. While we identified inconsistencies in VA data during our review, we did not assess the accuracy of the data. For the purposes of comparison, we reviewed the performance measures HUD uses to assess the effectiveness of its FHA program for servicing troubled loans. We did not analyze HUD’s performance measures. We conducted our work in Washington, D.C.; Cleveland, OH; St. 
Petersburg, FL; and Phoenix, AZ between July 2000 and March 2001, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Veterans Affairs. His written comments are presented in appendix VI. We also obtained technical comments from VA, which we incorporated in this report as appropriate. In addition, we obtained technical comments from HUD officials on our description of HUD policies for servicing FHA- insured single-family residential mortgage loans. We incorporated HUD’s technical comments in this report where appropriate. The first section of this appendix provides general background information on the VA single-family mortgage guaranty program. The second section provides information on the number and average amount of loans VA has guaranteed since 1996. The VA loan program is an entitlement program that provides single- family, residential mortgage loan guarantees for eligible veterans, service members, reservists, and surviving spouses. VA loans cover the purchase, construction, repair, and refinancing of homes. The loan guaranty provides private sector mortgage lenders, such as banks, thrifts, and mortgage companies with a partial guarantee on mortgage loans when loans go into foreclosure. In exchange for the protection that the VA guaranty provides lenders, VA encourages lenders to provide small or no-down-payment loans to veterans. Currently, the maximum guaranty on a VA loan is $50,750. While VA places no limits on the maximum loan amount that a veteran may obtain, lenders generally limit the amount to $203,000 because of secondary market requirements. To obtain the loan, veterans must meet VA’s eligibility and underwriting requirements. To help support the program, VA requires that veterans pay a funding fee to VA. The subsidy cost of VA loan guarantees and direct loans are financed by credit subsidy appropriations to the Veterans Housing Benefit Program Fund. 
The details of the basic features of the VA loan program are described below. Although VA generally encourages lenders to provide no-down-payment loans to veterans, in certain cases a down payment is still required. According to the VA Lender’s Handbook, lenders usually require that a veteran make a down payment when the purchase price of the property exceeds a “reasonable value” or the loan is a graduated payment mortgage in which the monthly mortgage payments gradually increase. In addition, lenders usually require a down payment if the amount of the guaranty is less than 25 percent of the loan amount. In such cases, the down payment will equal the difference between the amount of the guarantee and 25 percent of the loan—a requirement imposed by the secondary mortgage market, in which VA loans and other types of mortgage loans and mortgage-backed securities (MBS) are bought and sold. Most VA-guaranteed loans are pooled to support MBS guaranteed by the Government National Mortgage Association (Ginnie Mae), a government corporation within HUD. Ginnie Mae guarantees the timely payment of interest and principal on MBS backed by cash flows from pools of federally guaranteed mortgage loans, such as VA-guaranteed and FHA- insured loans. The MBS are sold to private investors, including pension funds, life insurance companies, and individuals. VA currently allows veterans and lenders to negotiate the interest rate on VA-guaranteed loans based on prevailing mortgage rates. The maximum loan term is 30 years and 32 days. The amount of the guaranty depends on the original loan amount and whether the veteran has previously used the entitlement to housing loan benefits. (See table 1.) Currently, the maximum guaranty is $50,750. In addition, the law allows a veteran who has previously obtained a VA- guaranteed loan, but has not used the maximum entitlement, to obtain another loan using the amount remaining under the entitlement. 
While VA places no limits on the size of loans veterans obtain, lenders generally limit VA-guaranteed loans to $203,000, or four times the VA guaranty, the limit used by the secondary mortgage market. Eligibility for a VA-guaranteed loan is based on active duty service after September 15, 1940. At least 90 days of active duty service is required for wartime veterans; 181 days for peacetime veterans; and 2 years for veterans who enlisted after September 7, 1980, or entered as an officer after October 16, 1981. Members of the Reserves and National Guard are also eligible if they have completed at least 6 years of service. In addition, the unmarried surviving spouse of a veteran who has died or is missing owing to service-connected causes is considered eligible. However, because there are numerous exceptions to the service requirements, VA requires that veterans apply to VA to determine their eligibility. Veterans are also responsible for selecting a lender that will honor the certificate of eligibility. Lenders of VA-guaranteed loans are required to follow VA’s general underwriting guidelines for evaluating and verifying an applicant’s financial status. Lenders must calculate an applicant’s residual income and debt-to-income ratio when making a loan decision. According to the VA Lender’s Handbook, the residual income is the amount of net income remaining after deducting debts, obligations, and monthly living expenses such as food, health care, and clothing. The debt-to-income ratio is the ratio of total monthly debt payments (i.e., housing expenses and debts) to gross monthly income. To qualify for a VA-guaranteed loan, VA requires that an applicant’s residual income be equal to or greater than a required minimum for the applicant’s loan size, family size, and region of the country, and that the applicant’s debt-to-income ratio be generally less than 41 percent. 
According to the VA Lender’s Handbook, VA advises that lenders exercise flexibility and sound judgment in making loan decisions. To help support the program, veterans are required to pay a funding fee to obtain a VA-guaranteed loan. Currently, veterans who have served on active duty are required to pay 2 percent of the loan amount, while those who have served in the Reserves or National Guard pay 2.75 percent of the loan amount. (See table 2.) Congress periodically changes the funding fee rates to reflect changes in the cost of administering the program or to assist a certain class of veterans. The funding fee rates also vary by loan type and down payment amount. In addition, veterans who have previously used the entitlement pay higher funding rates than those using it for the first time. Veterans with service-connected disabilities or their surviving spouses are exempt from paying funding fees. Under the Federal Credit Reform Act of 1990, loans guaranteed after September 30, 1991 are financed by credit subsidy appropriations to the Veterans Housing Benefit Program Fund (VHBPF) Program Account. This account also receives an appropriation for administrative expenses. Funding fees paid by veteran borrowers are deposited in the VHBPF Guaranteed Loan Financing Account, a nonbudget account that records all nonsubsidized cash flows of credit transactions. In fiscal year 2000, VA guaranteed approximately 199,000 loans, a significant drop from the previous year’s figure of approximately 486,000 loans. (See fig. 9.) Meanwhile, the average amount of a VA-guaranteed loan has steadily increased from approximately $102,000 in 1996 to $117,000 in 2000. (See fig. 10.) The first section of this appendix provides general background information on the FHA mortgage insurance program. The FHA mortgage insurance program, administered by HUD, shares some characteristics with VA’s loan guarantee program. 
Appendix IV provides a comparison of VA and FHA policies for servicing troubled loans. The second section compares the loan performance of VA-guaranteed loans with that of FHA- insured loans. This comparison is intended to provide further perspective on VA and FHA loan programs, and not to define any linkages. The second section also briefly discusses a number of factors that affect the probability that a borrower will default on a mortgage. Established by the National Housing Act of 1934, FHA insures mortgages made by qualified lenders for the purchase or refinancing of homes. A primary goal of the FHA mortgage insurance program is to assist households that may be underserved by the private market, many of them low-income and first-time homebuyers. Like the VA guarantee, FHA mortgage insurance helps reduce financing costs for borrowers by protecting lenders against the risk of loan default. FHA-insured loans generally sell on the secondary mortgage market in the form of MBS guaranteed by Ginnie Mae. FHA loans are protected by FHA’s Mutual Mortgage Insurance Fund, which is funded by borrower premiums. As with a VA mortgage guaranty, the main advantage of FHA mortgage insurance is that the criteria for qualifying for credit are not as strict as they are for conventional financing. FHA generally allows potential home owners to finance approximately 97 percent of the value of their home purchase through their mortgage. Thus, borrowers can make a minimum down payment of 3 percent of the value of their home. FHA insurance also allows borrowers to finance many closing costs, so that actual loan amounts can exceed 97 percent of home value. Like the VA program, FHA insurance also limits some of the fees lenders may charge borrowers for making loans. The origination fee, charged by the lender for the administrative cost of processing the loan, may not exceed 1 percent of the mortgage amount. FHA sets limits on the dollar value of the mortgage loan. 
Borrowers seeking mortgages that exceed FHA loan limits can increase their down payment or obtain financing under a conventional mortgage. Borrowers pay an up-front insurance premium at the time of purchase that is generally added to the mortgage and regular mortgage payment. While the VA program guarantees fixed-rate residential mortgage loans, up to 30 percent of the mortgages FHA insures annually can be adjustable- rate mortgages (ARM). ARMs insured by FHA have had higher delinquency and foreclosure rates than fixed-rate mortgages. To cover the costs of FHA loans, HUD imposes up-front and annual mortgage insurance premiums on home buyers. The up-front premium, which is charged when borrowers close on the loan and can be included in the mortgage payment, is 1.5 percent. The annual mortgage insurance premium, which is 0.25 to 0.50 percent, depending on the loan term, is automatically canceled when the loan amount is reduced to 78 percent of the sales price or appraised value at time of loan origination, whichever is less. The VA-guaranteed loans and FHA insured loans tend to perform similarly. (See figs. 11 and 12.) We did not compare VA-guaranteed loans with conventional private loans because VA-guaranteed loans generally require no down payment. Our comparisons of VA and FHA loan performance are based on data we collected from the Mortgage Bankers Association (MBA). The MBA reports the percentage of loans outstanding during each quarter of a calendar year. We used data for VA and fixed-rate FHA residential mortgage loans, because VA currently guarantees only fixed- rate mortgages. According to VA officials, VA had authorization to guarantee ARMs during fiscal years 1993, 1994, and 1995. However, MBA does not report separate data for VA ARMs. VA officials told us that in fiscal year 1993 about 2 percent of their guaranteed loans were ARMS. This number increased to 11 percent in 1994 and 20 percent in 1995. 
Several factors affect both the probability that a borrower will default on a mortgage and the severity of the loss when foreclosure occurs. These factors include the following: Negative borrower equity—a condition that occurs when the current loan balance is greater than the current value of the mortgaged property. Negative borrower equity can occur if home prices decline in a particular geographic area. The age of the mortgage—the age of the mortgage affects the current loan balance, due to amortization of outstanding loan principal. Mortgage defaults and foreclosures tend to peak between the fourth and seventh years after mortgage origination. Original loan to value (LTV)—loans with a higher LTV at origination are more likely to experience negative equity when house values decline. Adverse conditions that affect a borrower’s ability to repay—the loss of a job, divorce, or the death of spouse can trigger borrower’s failure to make scheduled mortgage payments. These conditions, combined with severe negative borrower equity, increase the likelihood of foreclosure and large loss severity. To provide a general perspective on VA policies for servicing troubled loans, we compared VA policies with HUD’s. This appendix highlights major differences and similarities between VA and HUD policies for servicing troubled loans. HUD administers the Loss Mitigation Program for servicing FHA-insured loans. A general description of the FHA single- family mortgage insurance program may be found in appendix III. The loan servicing programs of VA and HUD have similar objectives: (1) to help their borrowers avoid foreclosure and (2) to minimize financial losses. However, the agencies use different means to achieve these objectives. They differ in the level of servicing responsibilities that are placed on their lenders and in the types of alternatives to foreclosure they offer. 
While VA performs its own supplemental servicing, FHA lenders are required to engage in loss mitigation for the purpose of providing alternatives to foreclosure. FHA lenders have full authority to offer any of HUD’s alternatives to foreclosure without prior HUD approval. In contrast, VA lenders are free to discuss all alternatives with borrowers, but they must obtain prior VA approval before processing some of VA’s alternatives to foreclosure. The VA’s Servicer Loss Mitigation Program (SLMP), introduced in 1993, gave participating lenders authority to offer both the deed in lieu of foreclosure option and compromise claims. SLMP thus provides lenders with much the same level of authority HUD lenders enjoy. However, participation in the SLMP is optional, but participation in HUD’s Loss Mitigation Program is mandatory. Additionally, VA must provide a “determination of insolubility” before SLMP lenders can proceed with either a deed in lieu of foreclosure or compromise claim. Both VA and HUD encourage their lenders to utilize alternatives to foreclosure, which are less costly and time consuming than foreclosure proceedings. These alternatives include forbearance, loan modification, and private sale of property. In addition, each agency offers alternatives to foreclosure that the other does not. One alternative that HUD offers for its loans that VA does not is called the partial claim. (See table 3.) Using this alternative, HUD essentially provides the borrower with an interest-free second loan on the property in the amount necessary to reinstate the delinquent loan. The borrower is not required to repay this loan until the first mortgage is paid in full or the property is sold. Refunding is one VA alternatives to foreclosure that HUD does not use. Under this alternative, VA may purchase a defaulted loan from a lender and then reamortize the loan to eliminate a delinquency. 
Reflecting the different roles lenders play in servicing troubled loans, cash incentives lenders receive from VA and HUD for offering alternatives to foreclosure also differ. (See table 3.) VA pays cash incentives only to SLMP lenders that process compromise claims and deeds in lieu of foreclosure. HUD pays lenders cash incentives for offering any of its alternatives to foreclosure. This appendix provides details of the data we presented in the report. Table 4 provides details of VA’s supplemental servicing activities from fiscal years 1996 to 2000; table 5 lists such details by each regional loan center. Table 6 provides details of changes in loan servicing staff and the Notice of Defaults (NOD) per employee at each regional loan center, from fiscal years 1996 to 2000. Finally, table 7 provides details of the Foreclosure Avoidance Through Servicing (FATS) ratio at each regional loan center, from fiscal years 1996 to 2000. The VA Secretary said VA conducted a study in 1999 and 2000 (VA’s 1999 study) that concluded that the national FATS ratio was the best measure of VA’s supplemental servicing activities and that such a measure accounted for local economic conditions and legal requirements. In reaching our conclusions and making our recommendation, we reviewed and considered information VA supplied on its 1999 study. According to the information provided, in September 1999 the Loan Guaranty Service convened a group of headquarters and regional VA personnel to review FATS and various alternatives. In all, six alternatives were considered based on eight criteria. The unweighted FATS ratio met all criteria. In contrast, a measure similar to FATS, but adjusted in some manner to account for local economic factors beyond VA control, did not meet three criteria: proposal must be supportable, reliable and easily validated, proposal must have a clear and simple approach, and the group must reach a consensus on the recommendations. 
Based on the information provided, it did not appear that the group then considered the future potential of the LS&C computer system to provide improved performance measures for benefits to veterans or cost savings to the government. With full implementation of the LS&C computer system, we reached the conclusion that VA should develop new performance measures for benefits to veterans and cost savings to the government that can be compared across the nine regional loan centers. In addition to those named above, Kyong Lee and Kristi Peterson made key contributions to this report. | The Department of Veterans' Affairs (VA) Loan Guaranty Program, which guarantees mortgage loans for qualified lenders, provides additional assistance to those who face financial hardship and possible foreclosure. This report discusses VA's supplemental loan servicing program. GAO (1) assesses VA's implementation of its policies and procedures for servicing troubled loans and (2) analyzes VA's measures for assessing the effectiveness of its supplemental servicing program and ability to generate meaningful data for overseeing and improving loan servicing. GAO found that the three regional loan centers it visited generally conformed with VA policies and procedures and had procedures in place to ensure that VA's loan servicing representatives complied with VA policies and procedures. Two issues affect VA's ability to effectively manage its supplemental servicing program. First, VA lacks meaningful performance measures that would allow it to accurately assess the effectiveness of its program. Second, VA's computer system has been unable to generate useful and timely management reports that regional loan center managers and headquarters staff could use to manage their supplemental loan servicing program. |
CBSX provides worldwide asset visibility over the Army’s reportable equipment items, including the Army’s most critical war fighting equipment. The objective of CBSX is to provide accurate, timely, and auditable equipment balances for major items necessary for the direct support of troops, such as armored personnel carriers, battle tanks, helicopters, rifles, and gas masks. Operated and maintained by the Army’s Logistics Support Activity (LOGSA), CBSX furnishes the Army with an official inventory figure used to assess the overall preparedness of the force, determine the validity of unit equipment requisitions, distribute/redistribute equipment throughout the Army, and maintain worldwide asset visibility of deployed assets. As a result, if CBSX equipment balances are overstated, the Army may procure too few items, possibly resulting in reduced readiness. Conversely, if CBSX equipment balances are understated, the Army may procure too many items, potentially creating excess and wasting financial resources that could have been otherwise used to maintain and improve readiness. Moreover, Army planners and logisticians use equipment balances originating from CBSX to redistribute equipment to deploying units and estimate secondary item and other requirements to sustain this equipment. Therefore, if unit equipment balances are misstated, mobilization and deployment planning could be more difficult and inefficient. CBSX covers over 9,300 National Stock Numbers, which are primarily major items but also include other selected items, such as medical equipment, for which the Army requires worldwide visibility. CBSX seeks to mirror the official accountable records of equipment balances, such as property book records, held by various types of Army activities, including divisions subject to deployment, depots that repair or upgrade equipment, and storage sites. As of September 30, 1996, CBSX contained information on 13.5 million items whose reported value was over $116 billion. 
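The effect of misstated balances on procurement described above can be illustrated with a simple sketch. The computation below is a hypothetical simplification for illustration only; the function name, figures, and formula are assumptions, not the Army's actual requirements-determination logic.

```python
# Illustrative sketch (not actual Army logic): how a misstated CBSX
# on-hand balance distorts a simple buy-requirement computation.
# All names and quantities are hypothetical.

def buy_requirement(authorized: int, reported_on_hand: int) -> int:
    """Items to procure = shortfall between authorization and reported balance."""
    return max(authorized - reported_on_hand, 0)

authorized = 1_000   # hypothetical authorization for one equipment item
true_on_hand = 700   # what units actually hold

accurate = buy_requirement(authorized, true_on_hand)      # correct balance
overstated = buy_requirement(authorized, 800)             # balance inflated by 100
understated = buy_requirement(authorized, 600)            # balance deflated by 100

# An overstated balance understates the buy (reduced readiness);
# an understated balance overstates the buy (potential excess).
print(accurate, overstated, understated)  # 300 200 400
```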
While some of this property is held at wholesale distribution centers, such as depots, the vast majority of these items, valued at about $94 billion, are maintained at the retail level. Of this retail equipment, about 80 percent, valued at a reported $75 billion, was accounted for by units that use the Standard Property Book System-Redesign (SPBS-R), an automated property book system, which is maintained by the U.S. Army Information Systems Software Development Center, Fort Lee, Virginia (see figure 1). Since CBSX is the Army’s centralized equipment asset visibility system, the Army plans to use it as a primary source for supplementary stewardship information, as prescribed by the Statement of Federal Financial Accounting Standards No. 8, Supplementary Stewardship Reporting. Beginning in fiscal year 1998, this standard requires agencies with federal mission property, plant, and equipment to disclose the value and condition of these assets as supplemental stewardship information. The standard specifically includes military weapons as federal mission property, plant, and equipment. In the past, military equipment has been misstated on the Army’s financial statements. For example, according to AAA, the Army’s fiscal year 1996 financial statements misstated its property, plant, and equipment by a material but unknown amount, and major problems with the processes used to report and value military equipment precluded AAA from attesting to the reported value of military equipment. Army regulations require all activities to maintain accurate property books and ensure that they agree with CBSX. However, past audits by AAA and GAO found that CBSX balances for equipment items fluctuated for reasons that responsible officials could not explain, differed from records maintained by the units possessing equipment, and were substantially inaccurate for equipment in transit between units (see footnote 1).
Moreover, a January 1992 Army Materiel Command Lessons Learned report on Operation Desert Storm demonstrated that inaccurate or unreliable CBSX data (1) hampered equipment distribution decisions, resulting in some deployed units receiving equipment in excess of their authorizations while others were short critical equipment, (2) delayed the distribution of major items to units that did not deploy to Southwest Asia, thus diminishing the readiness of those units, and (3) significantly affected efforts to identify major items that required accelerated procurement. SPBS-R is a stand-alone personal computer system operated independently at over 2,000 Army units. Figure 2 illustrates the three methods units can use to provide SPBS-R data to CBSX: (1) downloading data to diskettes that are hand-carried to another computer, which transmits the data to CBSX, (2) transmitting via modem from the property book computer directly to CBSX, and (3) downloading data to diskettes that are mailed to LOGSA where the data is loaded into CBSX. For submissions provided electronically to CBSX (methods 1 and 2 above), the system transmits a confirmation of receipt that contains the total number of transactions received by CBSX. In addition, listings of transactions that affect unit balances are printed by CBSX and mailed to the units by LOGSA monthly. Figure 3 shows the three types of SPBS-R data units sent to CBSX during the time the adjustments in our review were made: catalog data, transaction data, and validation data. Catalog Data: The Army provides units with an updated automated catalog semiannually that designates which supply items in SPBS-R are reportable to CBSX. When a unit runs the catalog update process in SPBS-R, the system generates a listing of the unit’s equipment balances for items that have become reportable due to catalog changes. Units are supposed to transmit these balances to CBSX, which, in turn, records the new balances. 
Transaction Data: Units are required to transmit their SPBS-R equipment transactions (such as additions and transfers) to LOGSA at least monthly to update CBSX. If these transactions pass various edits to detect common types of errors, CBSX updates unit asset balances. Validation Data: Units transmit SPBS-R balances to CBSX (called validation data) twice a year. As part of the validation process, CBSX compares these SPBS-R balances to CBSX balances, identifies discrepancies, and adjusts the CBSX balances to agree with SPBS-R. CBSX is adjusted to agree with SPBS-R because SPBS-R is the Army’s official accountable record. As also shown in figure 3, in September 1996, after the time frame of the adjustments that we reviewed, SPBS-R was changed to allow units to begin providing SPBS-R unit identifier data to CBSX. Both CBSX and SPBS-R contain unit identifier data, which are used to ensure that unit transactions are posted to the proper accounts. If SPBS-R and CBSX unit identifier data are inconsistent, property book transactions will be either rejected by CBSX or posted to the wrong accounts. In the past, CBSX and SPBS-R unit identifier data have been inconsistent, which has led to differences between the two systems. Consequently, in the new process, CBSX compares the data from the SPBS-R unit file to the unit identifier data in CBSX and provides LOGSA analysts with a report of differences for review. After the validation process, LOGSA calculates a compatibility rate, which measures the extent to which CBSX and the unit records agree. According to the CBSX user manual, Army headquarters adopted the compatibility rate as the yardstick to measure the degree of property book officer compliance with CBSX asset reporting requirements. To determine the primary causes for adjustments to correct discrepancies between CBSX and SPBS-R, we analyzed a statistically projectable sample of 150 adjustments from our sample universe of 32,649 adjustments. 
The sample was selected to identify common, recurring problems that caused adjustments to CBSX between January 1996 and August 1996 as a result of the validation process. We chose this time period in order to cover a complete validation period, from the time units submitted validation balances until the semiannual CBSX validation process was completed. We excluded adjustments that were made during the conversion of manual property books to SPBS-R because we considered these adjustments as nonrecurring. We also excluded adjustments to non-equipment National Stock Numbers, such as clothing. Our analysis consisted of reviewing applicable SPBS-R reports, such as the CBSX Transaction Listing, and CBSX reports, such as the Proof of Shipment report. We also provided documentation to, and discussed the results of our analysis with, applicable property book officers, LOGSA officials, and/or Software Development Center, Fort Lee, officials and reached consensus with these officials about the causes of adjustments. Appendix I identifies the various Army activities that were part of our sample. We also interviewed officials from the Office of the Deputy Chief of Staff for Logistics, LOGSA, and the Army Quartermaster Center and School. To determine if the Army’s improvement efforts adequately address the causes of CBSX errors, we reviewed and analyzed the Army’s plans and related documentation. We also interviewed LOGSA, Software Development Center at Fort Lee, and contractor officials. To determine whether the CBSX compatibility rate is an adequate measure of performance, we reviewed LOGSA compatibility reports, which quantify the extent to which CBSX agrees with unit records, analyzed LOGSA’s methodology for calculating the rate, and interviewed LOGSA officials. We conducted our review between July 1996 and October 1997 in accordance with generally accepted government auditing standards. 
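The sampling approach described above can be illustrated with a minimal sketch. The universe size (32,649 adjustments) and sample size (150) come from the report; the adjustment identifiers, the fixed seed, and the simple projection step are illustrative assumptions, and GAO's actual selection and estimation methodology may have differed (for example, it may have used stratification).

```python
# Minimal sketch of a statistically projectable simple random sample
# drawn from the review universe, with a sample proportion projected
# back to the universe. Sizes are from the report; everything else is
# a hypothetical illustration.
import random

UNIVERSE_SIZE = 32_649  # adjustments made January through August 1996
SAMPLE_SIZE = 150

rng = random.Random(1)  # fixed seed so the draw is reproducible
sample = rng.sample(range(UNIVERSE_SIZE), SAMPLE_SIZE)

# A category's share of the sample projects to the universe; for
# instance, 64 of 150 adjustments traced to transactions not received
# by CBSX, about 43 percent.
not_received = 64
projected = round(not_received / SAMPLE_SIZE * UNIVERSE_SIZE)
```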
We requested comments on a draft of this report from the Secretary of Defense or his designee. On December 18, 1997, the Army’s Deputy Chief of Staff for Logistics provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix II. From January 1996 through August 1996, LOGSA made more than 32,000 adjustments to CBSX to bring it into agreement with SPBS-R balances. We reviewed a representative sample of 150 adjustments and identified the causes of 124 of them. The adjustments in our sample covered differences between CBSX and SPBS-R for items such as M-16 rifles, night vision goggles, howitzers, and cargo trucks. As shown in figure 4, the principal causes of adjustments were transactions not received by CBSX, software problems, and erroneous transactions posted by LOGSA or units. We could not determine the causes for 26 of the adjustments primarily because units had not retained all of the records to establish the audit trail needed to perform the analysis. In most cases, Army regulations did not require units to retain the records needed to determine the underlying causes for these adjustments or the Army’s record retention period had expired. For example, the CBSX confirmations of receipt are not required to be retained and the SPBS-R manual requires that one critical SPBS-R report, the listing of transactions reportable to CBSX, be retained for only 60 days after the validation process. Also, in some cases, units did not retain records in accordance with Army regulations. Specifically, units are required to retain the inactive document register (a listing of all archived transactions that were posted to the property book) for 2 years but units could not find these documents in 12 cases. 
If units do not determine the causes of their adjustments before discarding the records needed to assess those causes, the Army is left with little of the information on specific causes that it would need to take corrective actions to prevent their recurrence. The following sections provide additional detail for each of the causes of adjustments identified. Army regulations require that LOGSA and Army Major Commands ensure that units submit complete and accurate data to CBSX. However, our sample of 150 adjustments found that 64 (43 percent) occurred because SPBS-R transactions were posted to SPBS-R but never received by CBSX, causing discrepancies between the two systems' balances. We could not determine with certainty whether property book officers failed to send their transactions to CBSX or whether some other event in the process prevented CBSX from receiving the transactions because neither LOGSA nor the units had effective monitoring processes to ensure that transactions were sent and received in a timely manner. Examples include the following. Army Regulation 710-3, Asset and Transaction Reporting System, requires LOGSA to (1) ensure that activities submit CBSX input data by the date scheduled and that the data are correct and (2) take appropriate follow-up action if data are not accurate or submitted by the scheduled date. However, LOGSA neither scheduled dates for units to report their CBSX transactions nor kept a log or schedule of expected transmissions. Therefore, without such reporting schedules, LOGSA could not detect when units failed to submit transactions, and the major commands lacked appropriate data to measure unit compliance. Moreover, LOGSA's analysts told us that they have not routinely followed up on and corrected rejected transactions.
In addition, while LOGSA requested that units submit transactions weekly (exactly when transactions are submitted is left to the discretion of the unit), in practice, we found that reporting frequency varied greatly. Some units reported several times a week while others did not report to CBSX for months. Although LOGSA transmits to units confirmations of receipt that contain the total number of transactions received, neither Army regulations nor their implementing guidance requires units to verify the total, investigate discrepancies, or retain the receipt confirmations. Property book officers for several units in our sample told us that while they review and retain for various periods the LOGSA receipt confirmations, they do not always compare these confirmations with their lists of CBSX-reportable transactions to ensure that all data transmitted were received by CBSX. Moreover, in cases where units provided their data to a central collection point that transmitted the data to CBSX (see figure 2), the central collection point often did not provide the units with the receipt confirmation. Unless units verify and retain their receipt confirmations, they are unlikely to discover that transactions were not received by CBSX and therefore will not be in a position to take corrective action such as retransmitting the transactions. Also, the SPBS-R report that lists the CBSX-reportable transactions—which units could use to verify the total number of transactions transmitted—does not display a total. Therefore, when large numbers of transactions are sent, verifying the CBSX confirmation totals can be onerous because property book officers must manually count the SPBS-R transactions reportable to CBSX, which can involve hundreds of transactions. LOGSA mails to each property book officer monthly reports showing transactions posted to CBSX.
LOGSA expects property book officers to review these reports to ensure that their property book transactions were posted to CBSX and to review transactions rejected or questioned by CBSX edit checks. By performing this review, a property book officer could detect when reportable transactions were not received by CBSX, as well as other problems that prevented transactions from posting. However, many property book officers we interviewed told us that they did not perform this review. Further, the process can be burdensome: (1) these reports contain all transactions submitted throughout the month (which can involve hundreds of transactions) rather than reporting exceptions separately and (2) transactions with errors may not be readily identifiable. In addition, the error codes in these reports are difficult to interpret; for example, the report does not define the error-type codes it contains. Thirty-five adjustments (23 percent) in our sample were caused by problems in CBSX or SPBS-R software. Because of these software errors, CBSX (1) posted incorrect adjustments, (2) posted invalid transactions, or (3) did not post valid transactions. In some of these cases, errors in the validation process itself created inaccurate balances in CBSX that were not corrected until LOGSA conducted its next 6-month validation process. The following were the specific software problems found. Twenty-one adjustments were caused by a software error in the CBSX validation process that resulted in invalid adjustments being posted to unit balances. The CBSX validation process compares the unit's SPBS-R equipment balances to the unit's adjusted equipment balances in CBSX and changes the CBSX balances to mirror the SPBS-R balances.
For validation purposes, the CBSX equipment balance is adjusted to remove transactions received after the unit’s validation cutoff date, which is the date when each unit runs the SPBS-R validation process for submission of balance data to LOGSA. This process is necessary to account for timing differences between the dates the units and CBSX ran their respective validation processes. However, when a unit’s asset balances were reduced to zero in CBSX by transactions subsequent to the unit validation date, the CBSX validation process did not adjust the unit’s equipment balances for these transactions. Therefore, in these cases, the unit’s unadjusted CBSX balances were compared to the semi-annual SPBS-R balance data, which caused CBSX to post erroneous adjustments, record inaccurate Army unit equipment balances, and report inaccurate unit compatibility rates. For example, in one sample case, a unit submitted to CBSX a validation balance of seven for a particular equipment item as of April 25, 1996, and CBSX ran its validation process on June 4, 1996. That unit also submitted a transaction for that item on May 17, 1996, which reduced the balance for the equipment item in CBSX to zero. A software problem caused CBSX not to add back this transaction in order to calculate the adjusted CBSX balance (which is compared to the SPBS-R validation data). As a result, the CBSX validation process adjusted CBSX to agree with the April 25, 1996, SPBS-R validation balance, thereby overstating the unit’s balance for that particular equipment item by seven. After we brought this problem to LOGSA’s attention, they completed a software change to fix the problem. Five adjustments were due to CBSX rejecting valid property book transactions because edit processes incorrectly identified them as duplicate transactions. 
For example, if the unit corrected an error in a transaction (e.g., a wrong serial number) and the original and corrected transactions were sent in the same submission to CBSX, CBSX would reject one of the transactions as a duplicate. The Army was aware of this software problem. To fix it, in 1997, the Software Development Center, Fort Lee, modified SPBS-R to add new data fields to the SPBS-R input to CBSX that will include the date and time of transactions. These SPBS-R software modifications, along with planned modifications to CBSX to use these data, are expected to correct this problem. Four adjustments were caused by software errors in SPBS-R that resulted in invalid transactions being posted to CBSX. Three adjustments occurred when a software error in SPBS-R caused it to report a wrong activity address code to CBSX. When a unit is reorganized and transfers assets to a different unit identification code, the unit inputs the gaining and losing units' unit identification codes in SPBS-R, which uses them to automatically record both units' Department of Defense Activity Address Codes. However, a software problem in SPBS-R caused the system to assign the gaining unit's activity address code to the losing unit. As a result, the wrong activity address code was reported to CBSX, which caused CBSX to mistakenly post the loss transaction to the gaining rather than the losing unit. Neither LOGSA nor the Software Development Center, Fort Lee, was aware of this SPBS-R software problem. The CBSX Project Manager told us that this error is a significant problem, particularly during times of frequent deployments when these types of transactions are common. The fourth adjustment caused by a software error in SPBS-R occurred when a unit incorrectly posted a transaction to reverse a prior transaction.
While SPBS-R edits prevented the transaction from updating the SPBS-R asset balance, SPBS-R did not reject this transaction, instead passing along the incorrect reversal transaction as a valid CBSX-reportable transaction. While researching another adjustment (which was caused by a transaction not received by CBSX), we found a second unit that performed an incorrect reversal transaction. Neither LOGSA nor the Software Development Center, Fort Lee, was aware of this SPBS-R software problem. An SPBS-R analyst stated that a software change would correct this problem. Because the Software Development Center, Fort Lee, plans to replace SPBS-R, additional software improvements are not being made to SPBS-R except for changes related to the Year 2000 problem. However, the errors in SPBS-R discussed in this section, which caused CBSX to have incorrect asset balance data, could be fixed in conjunction with the planned modification to SPBS-R to correct the Year 2000 problem. Three adjustments caused by the catalog update process resulted in invalid transactions being posted to CBSX. As previously discussed, when a unit runs the catalog update process in SPBS-R, the system generates a listing of the unit's equipment balances for newly reportable items, which the unit is supposed to transmit to CBSX. CBSX then records these new balances as part of the unit's asset balance. In three cases, units transmitted catalog balances for items with existing CBSX balances. As a result, CBSX added the newly reported and existing balances together, thus overstating the units' equipment balances in CBSX. We brought this problem to the attention of the CBSX Project Manager, who stated that LOGSA would fix it by adding an edit to the CBSX catalog update process to check whether the unit had a preexisting balance for the catalog item. Two adjustments were caused when CBSX erroneously posted transactions.
In these cases, transactions were processed in SPBS-R prior to the unit running the validation process (therefore the transactions were included in the unit’s validation balances) but were not received and processed by CBSX until after LOGSA ran the CBSX validation process. According to a LOGSA programmer, this error occurred because CBSX was reading the incorrect validation date. The programmer further stated that LOGSA had discovered and, in early 1996, fixed this error. Fourteen adjustments (9 percent) were caused by LOGSA actions. For 13 of these adjustments, LOGSA analysts posted erroneous transactions to unit asset balances in CBSX. LOGSA analysts can manually enter transactions in CBSX to adjust unit asset balances. Analysts input these transactions when units notify LOGSA of changes to their unit identification or activity address codes or when analysts identify cases where unit property book transactions were not posted correctly in CBSX. For example, when a unit requested that LOGSA change its unit identifier codes, LOGSA analysts often also transferred the asset balances for the affected units without investigating whether the units had already submitted the appropriate SPBS-R unit transfer transactions. Therefore, if the unit had performed the transfer transactions in SPBS-R and submitted this data to CBSX, the LOGSA-generated transaction doubled the unit’s asset balances in CBSX. The final adjustment in this category occurred because LOGSA did not update the unit identifier data in CBSX in a timely manner. The CBSX Project Manager agreed that LOGSA-generated transactions can cause adjustments and said that LOGSA should only make these transactions when units do not submit SPBS-R unit transfer transactions. The Project Manager said that LOGSA plans to institute an internal review process to approve LOGSA-generated changes to unit balances. This process will include determining whether units have submitted the appropriate unit transfer transactions. 
However, unless this process includes coordinating with the applicable unit prior to making changes to unit asset balances, units could still submit duplicate unit transfer transactions at a later time. Fourteen adjustments (9 percent) we reviewed were caused when units incorrectly entered property book transactions. Some of these transactions related to unit reorganizations that caused a lack of synchronization between unit identifier data in CBSX and SPBS-R. Other incorrectly entered transactions were due to various other errors such as the unit entering an invalid unit identification code. Errors such as these can be reduced by placing additional emphasis on training, which we discuss in the next section. Deployment situations often cause unit reorganizations. As a result of these reorganizations, new unit identifier codes are created and existing unit assets are moved to these new unit identifier codes. These changes are made in order to maintain asset accountability when units are deployed. Therefore, if CBSX and SPBS-R do not contain the same unit identifier data, visibility over these assets is lost. For example, one unit’s reorganization adjustments occurred as a result of its deployment to Haiti. During the deployment, the unit transferred a large number of its assets to another property book in Haiti. However, CBSX did not recognize this transfer because the unit did not follow the designated procedure for posting to a new unit identification code established for the deployment. As a result, when the unit submitted its validation balances, which no longer included the assets deployed to Haiti, CBSX deleted those assets in order to match the property book balances submitted. These deleted assets, which included 6 trucks, 12 ambulances, 13 pistols, and 62 M-16 rifles, remained unreported in CBSX until the next validation was performed 4 months later—after the deployed equipment had been transferred back to the unit’s original property book. 
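The validation behavior behind the Haiti example can be sketched as follows. Because CBSX adjusts its balances to mirror whatever balances the unit submits, an unreported transfer simply vanishes from CBSX visibility. The function and item labels below are hypothetical; the report does not describe CBSX's internal implementation, and only the 62-rifle quantity is taken from the example above.

```python
def validate(cbsx_balances, spbsr_balances):
    """Return the adjustments needed to make CBSX mirror SPBS-R,
    applying them to the CBSX balances as the validation process does."""
    adjustments = {}
    for item in set(cbsx_balances) | set(spbsr_balances):
        cbsx_qty = cbsx_balances.get(item, 0)
        spbsr_qty = spbsr_balances.get(item, 0)
        if cbsx_qty != spbsr_qty:
            adjustments[item] = spbsr_qty - cbsx_qty
            cbsx_balances[item] = spbsr_qty  # CBSX adopts the SPBS-R figure
    return adjustments

# A deploying unit transfers 62 M-16 rifles off its property book but
# does not report the transfer to CBSX; validation then deletes the
# rifles from CBSX rather than flagging the unreported transfer.
cbsx = {"rifle_m16": 62, "truck_cargo": 6}
spbsr = {"rifle_m16": 0, "truck_cargo": 6}  # post-transfer property book
adjustments = validate(cbsx, spbsr)
# adjustments == {"rifle_m16": -62}; CBSX loses visibility of the rifles
```

Because the process simply mirrors SPBS-R, it cannot distinguish a genuine loss from an unreported transfer; only a reconciliation that researches each difference before posting the adjustment could.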
In another example of an adjustment caused by a unit reorganization, a unit property book officer attempted to transfer assets between two Army companies. However, the property book officer performed this transfer incorrectly, which resulted in both companies’ assets being incorrectly combined in CBSX in one company’s account. Other incorrectly entered transactions were due to a variety of circumstances such as the unit entering an invalid unit identification code. For example, in one case, the unit incorrectly used a unit identification code assigned to another unit, which resulted in transactions being incorrectly posted to the other unit’s account. In addition to the specific cause of each adjustment, we believe other underlying factors contributed to CBSX not being compatible with SPBS-R. These factors primarily related to the lack of reconciliations, outdated and unclear regulations, and the lack of training. First, although Army Regulation 710-3, Asset and Transaction Reporting System, requires all activities to maintain accurate asset balances in CBSX, the regulation is unclear about the respective roles of LOGSA and the property book officer for reconciling automated property books with CBSX and refers to a reconciliation process that LOGSA no longer conducts. Instead, LOGSA’s practice is to adjust the CBSX database to agree with SPBS-R without a detailed analysis of the causes for these adjustments. In addition, as previously discussed, many units do not retain the documents, such as the inactive transaction register and the receipt confirmations, that would be necessary to perform such reconciliations. As a result, the Army has little of the information it would need on specific causes of adjustments to take corrective actions to prevent their recurrence. Moreover, reconciliations could detect instances where CBSX balances were incorrectly changed. 
For example, as previously discussed, 21 adjustments in our sample (14 percent) resulted from software problems that led to erroneous adjustments, causing CBSX balances to be incorrect. In addition, Army record retention periods with respect to CBSX contain time frames associated with the validation process that may not be sufficient to support a reconciliation process: procedures require only that units retain applicable records for a limited period after adjustments are processed. Second, Army Regulation 710-3, which contains requirements for reporting to CBSX, has not been updated since May 1992 and does not reflect current CBSX reporting processes. For example, the regulation requires property book officers to report to CBSX once a month, whereas LOGSA requests weekly reporting. In addition, the regulation does not require confirming LOGSA's receipt of the unit's property book transactions (i.e., using the confirmation of receipt report). According to Department of the Army officials, Army Regulation 710-3, Asset and Transaction Reporting System, is under revision and is scheduled to be completed shortly. We reviewed the draft regulation, and it includes new requirements, such as requiring unit data to be submitted to LOGSA weekly during peacetime and daily during wartime. However, the draft regulation does not require units to (1) verify their receipt confirmations and research any differences, (2) perform reconciliations of the differences discovered during the validation process, (3) review the monthly reports they receive from LOGSA, or (4) follow up on transactions that were rejected by CBSX. Finally, we found that several issues related to training contributed to the problems we identified. For example, property book officers in about 12 percent of the units in our sample, primarily those in the Army Reserves and Army medical activities, had received no formal training on how to operate SPBS-R.
For example, one property book officer did not know he was required to send transactions to CBSX. In addition, the Army’s SPBS-R training does not cover analyzing CBSX reports such as confirmations of receipt and monthly transaction listings. According to Army training officials at the Army Quartermaster Center and School, these subjects are not covered because receiving these reports from LOGSA cannot be simulated and other topics would not be covered if CBSX reporting was emphasized. The CBSX Project Manager stated that the Office of the Deputy Chief of Staff for Logistics was responsible for working with the Quartermaster Center and School to obtain additional emphasis on CBSX. Officials from the Office of the Deputy Chief of Staff for Logistics stated that they had not started this effort. In its comments on the draft report, the Army stated that the Office of the Deputy Chief of Staff for Logistics would request that SPBS-R training be revised to incorporate how to analyze CBSX reports and confirmations of receipt. Another training issue related to LOGSA’s annual conference. Property book officers at 16 units in our sample did not attend the conference, and other methods to disseminate the training provided at this conference, such as videotape, were not employed. LOGSA’s annual conferences are an important mechanism to obtain information on current CBSX processes and problems. Accordingly, it may be beneficial to videotape and distribute the tapes of these conferences to the property book officers not in attendance. In its response to the draft report, the Army stated that LOGSA is developing a training video based on its annual conference that will be used as a training aid for property book officers. Also, in 1994, LOGSA discontinued site visits to property book officers to provide technical assistance and training. 
LOGSA’s CBSX Project Manager stated that, in 1997, CBSX analysts began performing site visits and that they would continue these visits as funding permits. However, LOGSA has not established a formal site visit program to visit sites with low CBSX compatibility rates. LOGSA and the Software Development Center, Fort Lee, have initiated a CBSX improvement effort to correct problems in keeping the CBSX asset balances current and compatible with SPBS-R. This improvement effort contains worthwhile initiatives. At the same time, the modifications being made under this improvement effort will not correct many of the causes of adjustments to CBSX that we identified. In particular, adjustments caused by transactions not received by CBSX, the largest problem, will not be corrected unless additional efforts are made. In 1995, the Army established an Improvement Team (which included representatives from LOGSA and the Software Development Center, Fort Lee) to develop initiatives to improve data accuracy in CBSX. According to the team, among the most significant contributing factors to CBSX inaccuracies were (1) the lack of synchronization of unit identifier data between CBSX and SPBS-R, (2) data lag time in reporting and update processes, and (3) nonsubmission or incomplete reporting. These contributing factors were consistent with some of the previously discussed causes of sample adjustments, such as incorrectly posted transactions that resulted from unit reorganizations that caused a lack of synchronization between unit identifier data in CBSX and SPBS-R. In September 1996, the Army awarded a contract to address the problems the Improvement Team had identified that could be corrected at LOGSA. To fix the CBSX problems identified by the Improvement Team, the Software Development Center, Fort Lee, and LOGSA (and its contractor) initiated several improvement efforts. 
For example, to fix the lack of synchronization between CBSX and SPBS-R unit identifier information, the Software Development Center, Fort Lee, and a LOGSA contractor modified SPBS-R and CBSX, respectively. In the case of SPBS-R, a July 1997 change allowed units to download the CBSX Customer Identification Control File (which includes the unit identification and activity address codes) that SPBS-R uses to edit transactions. This edit causes units to receive an automated notice when they enter an unrecognized unit identifier code. However, units can override this edit, and how often units update the CBSX unit file in SPBS-R is left to their discretion. As part of a September 1996 SPBS-R change, units now transmit unit identification and activity address codes to CBSX. In May 1997, the LOGSA contractor completed a CBSX modification that compares the CBSX and SPBS-R unit identification and activity address codes and provides reports of differences to LOGSA analysts. According to the CBSX Project Manager, desk procedures will be written requiring LOGSA analysts to resolve these differences. LOGSA has recognized that CBSX had problems maintaining current information because updates were too infrequent. To address this data lag problem, in October 1997, LOGSA's CBSX contractor completed a CBSX modification to allow more frequent SPBS-R batch updates to the CBSX asset balances. In addition, the contractor, in conjunction with LOGSA programmers, is implementing an automated error correction process. Currently, units receive information on rejected transactions in hard-copy reports that are mailed to the units monthly, and there is no automated mechanism for units to resubmit corrected transactions to CBSX. Under the automated error correction process being developed, CBSX would electronically transmit rejected transactions to applicable SPBS-R users, who would be expected to correct and retransmit these transactions to CBSX, where applicable.
To be effective, this unit error correction process should be combined with LOGSA follow-up to ensure that rejected transactions are corrected and resubmitted. The CBSX modification, which will encompass reporting CBSX transaction errors electronically to the units, is expected to be completed shortly. However, in order for units to correct these errors electronically, LOGSA will have to modify another system—LOGSA’s Distribution Execution System—that it uses to obtain SPBS-R data. The CBSX Project Manager stated that a time frame for modifying the Distribution Execution System has not been set. These improvements are worthwhile and will improve the accuracy of CBSX when combined with changes LOGSA and the Software Development Center, Fort Lee, have made or agreed to make to fix software problems and erroneous LOGSA transactions previously discussed. However, we remain concerned that the main cause of CBSX adjustments, transactions not received by CBSX, will continue to be a problem. As illustrated in figure 2, CBSX and SPBS-R are not integrated and, therefore, CBSX will continue to rely on units to submit data in a timely manner. As previously discussed, our review of adjustments caused by transactions not received by CBSX indicated that the Army’s processes were neither adequately controlled nor documented to ensure that all transactions were transmitted to CBSX. This is consistent with the CBSX Improvement Team’s finding that nonsubmission or incomplete reporting was a significant contributing factor to CBSX inaccuracies. LOGSA’s CBSX contractor is developing a report identifying units that have not submitted data in a given period which will be provided to CBSX analysts for follow up action. However, this report would not identify transactions that were not received by CBSX if the unit had other CBSX transmissions received during the period covered. 
Therefore, to eliminate the adjustments caused by transactions not received by CBSX, this LOGSA report would need to be coupled with other control mechanisms, such as unit review and reconciliation of confirmations of receipt and reconciliations of differences between CBSX and SPBS-R data. The Army’s initiatives to improve CBSX discussed in the previous section are intended to help the Army achieve its management goal of a 98-percent compatibility rate, which the Army uses to measure the extent that CBSX and property book records agree. However, the Army’s current method of calculating this rate is flawed and until this method is changed, the Army will not know whether its improvement efforts will achieve its 98-percent goal. Moreover, the compatibility rate is an incomplete indicator of CBSX performance because it does not address other types of measurements, such as the frequency of unit submissions. LOGSA plans to implement other types of performance measures. As of July 1997, the Army reported an Army-wide CBSX compatibility rate of about 92 percent. However, that rate is overstated because LOGSA assigns a 100-percent compatibility rate to those units where (1) LOGSA believes the validation adjustments were not the fault of the local property book officer (such as cases where a unit incorrectly posted a transaction to another unit’s account) or (2) the validation adjustments occurred when the unit converted from a manual system to SPBS-R. If these units were factored into the compatibility rate, the Army-wide rate would fall to about 87 percent. In addition, if a unit does not submit current balances for validation, the Army continues to report the unit’s prior compatibility rate, which can be several years out of date, thus distorting the Army-wide compatibility rate. In the April 1997 validation process, 191 reporting entities in the Active Army and Army Reserve did not provide validation data to LOGSA. 
As of March 1997, there were 1,096 Active Army and Army Reserve entities reporting to CBSX, meaning that current performance data were unavailable for over 17 percent of these entities. In addition, even if the compatibility rate measured all current differences between CBSX and unit property books, it does not serve as a complete indicator of CBSX accuracy. The compatibility rate does not measure (1) the degree to which CBSX agrees with non-property book systems, such as those that account for wholesale-level assets and (2) errors associated with equipment in-transit between locations. The in-transit exclusion is significant, since in June 1996, the Army Audit Agency reported a 69-percent error rate in CBSX balances of in-transit assets resulting from problems with system interfaces, duplicate unit identification codes, redirected shipments, shipment performance notification procedures, and document number changes (see footnote 1). LOGSA also does not measure other indicators of performance, such as the timeliness of unit transaction submissions. LOGSA has drafted proposed additional CBSX performance measures, such as timeliness of unit submissions and frequency of errors, which it plans to implement shortly. We believe that these additional performance measures are more indicative of compliance with CBSX reporting requirements than the compatibility rate alone. Further, if implemented, these measures could be used to help evaluate property book officers’ and their commanders’ performance. However, the proposed performance measures do not include a measure of LOGSA and Army units’ abilities to successfully close in-transit transactions, which is needed to measure the Army’s progress in reducing its 69-percent in-transit error rate. Moreover, the proposed measures do not include a measurement of planned new processes resulting from the Army’s CBSX improvement effort, such as the planned error correction process. 
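The effect of assigning 100-percent compatibility to some units can be illustrated with a small sketch. The unit figures below are hypothetical; the report's actual Army-wide rates were about 92 percent as reported versus about 87 percent with all adjustments counted.

```python
# Illustrative sketch of how assigning 100-percent compatibility to some
# units overstates an aggregate compatibility rate. All figures are
# hypothetical and do not reproduce the Army-wide numbers in the report.

# (records_matching, records_total, assigned_full_credit) per reporting unit
units = [
    (950, 1000, False),  # unit validated normally
    (800, 1000, True),   # adjustments deemed "not the unit's fault" -> 100%
    (700, 1000, True),   # adjustments from manual-to-SPBS-R conversion -> 100%
]

def reported_rate(units):
    """Army-style rate: units flagged for full credit count as 100 percent."""
    matched = sum(t if full else m for m, t, full in units)
    total = sum(t for _, t, _ in units)
    return matched / total

def actual_rate(units):
    """Rate counting every difference, regardless of assigned credit."""
    matched = sum(m for m, _, _ in units)
    total = sum(t for _, t, _ in units)
    return matched / total

print(f"reported: {reported_rate(units):.1%}")  # prints reported: 98.3%
print(f"actual:   {actual_rate(units):.1%}")    # prints actual:   81.7%
```

The gap between the two functions is the overstatement; stale rates carried forward for non-reporting units would widen it further.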
Such performance indicators could include measuring the timeliness of units in correcting transaction errors. Until the Army addresses the major causes for CBSX adjustments, its system for providing worldwide asset visibility for major equipment assets will continue to contain inaccurate, untimely, and incomplete data, which may cause erroneous monitoring of equipment status and improper equipment acquisition or redistribution decisions. Financial statements will also continue to be misstated. To its credit, LOGSA, both at its own initiative and as a result of our bringing previously unknown problems to its attention, plans to make software and process changes to address many of the causes of CBSX adjustments. However, these changes do not address the primary cause of CBSX adjustments—transactions not received by CBSX. The responsibility for ensuring that CBSX contains accurate, timely, and complete data rests jointly with property book officers under the Army’s major commands and LOGSA. However, neither the major commands nor LOGSA have established adequate processes to ensure that property book officers correctly report all transactions. Accordingly, the Army’s property book officers do not ensure that all reportable transactions are received by CBSX or identify specific causes of validation adjustments so that corrective actions can be taken to prevent their recurrence. 
To ensure that CBSX receives applicable SPBS-R transactions, we recommend that the Secretary of the Army ensure that LOGSA establish a standard SPBS-R reporting schedule and follow up on missing submissions; the major commands require property book officers to, following each data transmission to CBSX, (1) compare the total number of SPBS-R transactions transmitted to the LOGSA confirmation of receipt, (2) investigate and resolve discrepancies, and (3) retain the confirmations; the Software Development Center, Fort Lee, add a total line to the SPBS-R CBSX reportable transaction report to readily permit it to be matched to the CBSX receipt confirmation; and LOGSA redesign CBSX reports to unit property book officers to make them more user friendly, such as by providing exception reports with easily understood error codes. To correct software problems in CBSX and SPBS-R causing incompatibilities between the two systems, we recommend that the Secretary of the Army ensure that LOGSA proceed with its planned modification to CBSX to correct the adjustments that were caused by valid transactions being incorrectly rejected as duplicate transactions; the Software Development Center, Fort Lee, add edits to SPBS-R software to prevent (1) SPBS-R from reporting incorrect activity address codes for unit transfer transactions and (2) incorrect reversal transactions; and LOGSA add edits to CBSX software to identify instances where units submit catalog beginning balances for items that have an existing balance in CBSX. To prevent inaccurate transactions from being posted to unit accounts in CBSX by LOGSA, we recommend that the Secretary of the Army ensure that, prior to LOGSA modifying unit data in CBSX, LOGSA proceed with its planned implementation of an approval and documentation process which should include coordinating with applicable units before making changes to unit balances. 
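The first recommended control, comparing the count of transmitted SPBS-R transactions with LOGSA's confirmation of receipt, might look like this in outline. The record layout and document numbers are hypothetical.

```python
# A minimal sketch of the recommended post-transmission check: after each
# data transmission, the property book officer compares the number of
# SPBS-R reportable transactions sent with the count on LOGSA's
# confirmation of receipt and investigates any difference. The document
# number format shown is hypothetical.

def verify_receipt(transmitted_docs, confirmed_count):
    """transmitted_docs: document numbers sent to CBSX in one transmission.
    confirmed_count: transaction count on LOGSA's confirmation of receipt.
    Returns None if the counts agree, else a discrepancy message."""
    sent = len(transmitted_docs)
    if sent == confirmed_count:
        return None  # retain the confirmation as part of the audit trail
    return (f"discrepancy: {sent} transactions transmitted, "
            f"{confirmed_count} confirmed received; investigate and resolve")

print(verify_receipt(["W45G010001", "W45G010002"], 2))  # prints None
print(verify_receipt(["W45G010001", "W45G010002"], 1))
```

Retaining the confirmations, as the recommendation specifies, is what makes the comparison auditable after the fact.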
To improve the transaction audit trail and enhance unit understanding of CBSX reporting, we recommend that the Secretary of the Army ensure that LOGSA update Army Regulation 710-3, Asset and Transaction Reporting System, to require units to (1) verify their confirmations of receipt and research and resolve any differences, (2) reconcile differences between property books and CBSX and investigate reasons for adjustments, (3) retain property book transaction records (including receipt confirmations) relating to CBSX to determine the causes of adjustments and support the reconciliation, (4) review the monthly reports they receive from LOGSA, and (5) follow up on transactions that were rejected by CBSX; the major commands require that all property book officers using SPBS-R, including those assigned to medical and Reserve units, successfully complete SPBS-R training; the Army Quartermaster Center and School revise the SPBS-R training to include how to analyze CBSX confirmations and monthly reports; the major commands enhance property book officer training by requiring ongoing and up-to-date CBSX training such as that provided by LOGSA’s annual CBSX conference or, alternatively, LOGSA videotape its annual CBSX conference and provide on-site training using this tape at Army units to train those unable to attend the CBSX conference; and LOGSA establish a formal site visit program to conduct periodic assistance/training for property management personnel. To improve the effectiveness of LOGSA’s plans to improve CBSX, we recommend that the Secretary of the Army ensure that LOGSA (1) proceed with the planned development of desk procedures to require LOGSA analysts to resolve differences between CBSX and SPBS-R unit identification and activity address codes, (2) require its analysts to follow up on rejected transactions to ensure that they are corrected, and (3) modify the Distribution Execution System to allow units to correct and resubmit rejected CBSX transactions. 
To improve the effectiveness of CBSX performance measurement, we recommend that the Secretary of the Army ensure that LOGSA calculate the Army-wide CBSX compatibility rate based on all differences between property books and CBSX; LOGSA proceed with the planned implementation of additional CBSX performance measures and (1) develop and implement CBSX performance indicators that measure LOGSA and Army unit abilities to successfully close in-transit transactions and the timeliness of corrections of unit transaction errors and (2) provide results to Army major commands for their use in evaluating the property book function; and the major commands include performance measurement data related to CBSX, such as the timeliness and accuracy of transaction submissions, in overall commander and property book officer performance criteria. In commenting on a draft of this report, the Army stated that it concurred with the intent of all of the recommendations and that it will do all that it can in as timely a manner as possible to satisfy them. In particular, the Army stated that some of the recommendations will be implemented when the CBSX Improvement Plan, Phase I, is completed in April 1998. To address several other recommendations, the Army plans to produce, and seek funding for, a CBSX Improvement Plan, Phase II. In addition, the Army stated that it plans to meet in February 1998 to determine what can be done to satisfy our recommendations with current resources while funding is being sought to implement the CBSX Improvement Plan, Phase II. The Army partially concurred with two of our recommendations related to modifying SPBS-R. These modifications are necessary to correct software errors that caused incorrect data to be reported to CBSX. The Army plans to replace SPBS-R with the Integrated Combat Service Support System, which Army stated will be a seamless, integrated retail supply system that will combine the functions of several existing systems. 
Because it plans to replace SPBS-R, the Army has decided not to modify SPBS-R, except for changes pertaining to the Year 2000 problem. However, the Army said that software change requests will be submitted to incorporate our recommendations into the Integrated Combat Service Support System. This system is currently scheduled to be fielded by the end of fiscal year 2003, although the Army stated that if funding is accelerated, it will be fielded by the end of fiscal year 2001. In addition, the Army stated that LOGSA, the Office of the Deputy Chief of Staff for Logistics, and property book officers will meet before April 1998 to determine if there are any workarounds that can be implemented to accomplish these recommendations. Because the errors in the SPBS-R software cause inaccuracies in CBSX, which the Army uses to monitor the equipment readiness of its warfighting units and fill equipment shortages, these errors must be corrected expeditiously. We are particularly concerned with the software error that caused SPBS-R to report the wrong activity address code to CBSX during unit reorganizations, which in turn caused incorrect unit balances in CBSX. The CBSX Project Manager told us that this error is a significant problem, particularly during times of frequent deployments when such transactions are common. Army planners and logisticians use equipment balances originating from CBSX to redistribute equipment to deploying units; therefore, inaccurate unit equipment balances in CBSX could hinder the Army’s assessment of the equipment needs of the deployed unit. We support the Army’s plans to try to develop workarounds to accomplish the goals of our recommendations. If the Army can develop effective workarounds to use until the Integrated Combat Service Support System is fielded, then it can avoid modifying SPBS-R. 
However, if the Army determines that such workarounds cannot be developed, it must modify SPBS-R software promptly because the Integrated Combat Service Support System may not be fielded until 2003. While we did not independently estimate the effort required to correct the SPBS-R errors, an October 1997 Software Development Center, Fort Lee, proposal to modify SPBS-R to fix the most significant problem—the software error that caused SPBS-R to report the wrong activity address code to CBSX during unit reorganizations—included a recommended solution which indicated that only a minor software modification was needed. Therefore, while modifying SPBS-R to fix the Year 2000 problem, the Software Development Center, Fort Lee, could also correct the errors found in our review without significantly impacting Army’s plans to ensure that SPBS-R is Year 2000 compliant. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the Senate and House Committees on Appropriations, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the Secretary of Defense; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days of the date of this report. You must also send a written statement to the House and Senate Committees on Appropriations with the agency’s first request for appropriations more than 60 days after the date of this report. Please contact me at (202) 512-9095 if you or your staff have any questions concerning this letter. Major contributors to this report are listed in appendix III. James D. 
Berry, Jr., Senior Evaluator; Ronald M. Haun, Senior Evaluator; Cary B. Russell, Senior Auditor; and Elaine C. Coleman, Evaluator. | Pursuant to a legislative requirement, GAO reviewed the Army's Continuing Balancing System-Expanded (CBSX) logistics system, focusing on: (1) the primary causes for the numerous adjustments to correct discrepancies between the Army's CBSX system and its primary property book system; (2) whether the Army's ongoing improvement efforts will correct the causes of these discrepancies; and (3) whether the Army's current method of assessing CBSX accuracy, referred to as the compatibility rate, is adequate. 
GAO noted that: (1) while the Army ensures that CBSX equipment balances accurately reflect the balances in its primary property book system twice a year, the Army does not identify the causes of adjustments made to CBSX balances to correct discrepancies; (2) GAO's analysis of the causes of CBSX adjustments has identified opportunities to correct process weaknesses and computer software problems that would reduce the number of adjustments and, consequently, increase the accuracy of CBSX throughout the year; (3) the Army does not have an effective process to ensure that equipment transactions from Army units are received by CBSX; (4) GAO's statistically projectable sample of adjustments made to CBSX to bring it into agreement with the Army's primary property book system showed that over 40 percent of the adjustments were due to transactions not received by CBSX; (5) other reasons for adjustments included software errors and incorrectly posted transactions; (6) the lack of reconciliations performed between CBSX and unit property books, an outdated regulation, and incomplete training also were underlying factors that contributed to differences between CBSX and the primary property book system; (7) the Army's ongoing efforts to improve CBSX address some of the causes of adjustments such as those related to certain software errors and incorrectly posted transactions; (8) however, these efforts do not fully address property book transactions that were not received by CBSX, the largest cause of adjustments; (9) the Army also does not have an effective mechanism to measure CBSX performance; and (10) the Army-wide CBSX compatibility rate, the factor used to measure the extent to which CBSX and property book records agree, is overstated because it does not count all adjustments made to CBSX balances to correct discrepancies. 
The terrorist attacks of September 11, 2001, drastically changed the way the insurance industry viewed the risk of terrorism. Before September 11, 2001, insurers generally did not exclude or separately charge for coverage of terrorism risks for commercial property and casualty policies. After September 11, 2001, however, insurers and reinsurers started excluding the coverage because they determined that the risk of loss from a catastrophic terrorist event was unacceptably high. Insurers charge policyholders premiums to cover the expected losses an insurer may pay on claims, as well as expenses for providing insurance, such as administrative costs, and for the cost of capital to cover unexpectedly high losses and otherwise support the solvency of the insurance company. Insurance companies need to be able to predict with some reliability the frequency and severity of insured losses to establish their exposure—that is, level of risk—and price their premiums accordingly. As we have reported previously, measuring and predicting losses associated with terrorism risks can be particularly challenging for reasons including lack of experience with similar attacks, difficulty in predicting terrorists’ intentions, and the potentially catastrophic losses that could result. Reinsurers follow an approach similar to that of insurers for pricing risk exposures and charging premiums based on that risk and, therefore, face similar challenges in pricing terrorism risks. One way national terrorism risk insurance programs address these challenges and encourage insurers to continue offering terrorism risk insurance is by sharing some of the risks through an insurance pool or reinsurance arrangements. An insurance pool is an organization of insurers or reinsurers through which members underwrite particular types of risk, such as terrorism risk, with premiums, losses, and expenses shared in an agreed manner. Pools also may purchase reinsurance to further offset their risk. 
Reinsurers typically assume part of the risk and part of the premiums originally taken by the insurer or pool. In some programs, which are market-based, the insurance pool, rather than directly offering reinsurance to its members, allows members to jointly purchase reinsurance. This approach allows the members to collectively purchase reinsurance at more favorable terms than if the individual insurers were purchasing reinsurance directly from the reinsurer. Through insurance pools and reinsurance arrangements, the potential risks and costs of claims related to terrorism are spread across multiple participants. Other ways these programs address the challenges of covering potential terrorism losses are through cost-sharing features and funding methods, some of which are generally used in the insurance industry, including the following: Deductibles. In general, a deductible is the amount of losses paid by the policyholder (whether the insured of a direct insurer, or an insurer that is covered under a policy of reinsurance) before the insurer or reinsurer begins to pay any of the remaining loss. It may be a specified dollar amount or a percentage of claim amounts. Coshares. A coshare is a set percentage of loss that the policyholder must continue to cover even after the deductible has been met. In the case of national terrorism risk insurance programs, it may be the insurers participating in the programs that pay the deductibles and coshares before coverage from other participants, such as reinsurers or the government, is triggered. Government backstop. Many foreign governments offer financial support, sometimes referred to as a “government backstop,” to their national terrorism risk insurance programs or individual insurers to cover a certain threshold of claims. Combination funding. Programs also may use a combination of funding methods. For example, programs may charge a premium to participating insurers in exchange for some of the risk for paying potential losses. 
Pre-event funding. Charging premiums for coverage is a form of pre-event funding because payment is received before an event occurs. Post-event funding. Another approach to financing insurance coverage is through post-event funding, which involves collecting reimbursement for actual losses and associated expenses after an event has occurred. The events covered by terrorism risk insurance programs vary depending on perceived risks in different countries. Some countries that have experienced domestic turmoil have terrorism risk insurance programs that include coverage for public disorder and riots along with terrorism, whereas other programs cover only large-scale terrorist events. In addition, terrorism risk insurance programs differ on coverage for the types of weapons used in terrorist attacks. Because the losses from a terrorist attack using an unconventional weapon are particularly difficult to predict and price, some terrorism risk insurance programs only provide coverage for terrorist attacks using conventional weapons. Unconventional weapons generally include nuclear, biological, chemical, or radiological (NBCR) weapons, as well as cyberterrorism. Cyberterrorism is a growing area of concern in terms of terrorism risk insurance programs because our infrastructure, such as the control systems for public utilities, is increasingly interconnected, and a cyberterrorism event could cause minor to severe business disruption and physical damage to property. Many countries do not have national terrorism risk insurance programs for various reasons. In 2005, OECD reviewed national terrorism risk insurance programs and reported several examples of countries that have not implemented such programs. For example, OECD found that in Greek and Scandinavian markets, insurers had not excluded terrorism coverage, making government involvement to support terrorism risk insurance unnecessary. 
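The deductible and coshare features described earlier determine how a loss is split between the policyholder and the insurer or reinsurer. A minimal sketch, with all dollar amounts hypothetical:

```python
def split_loss(loss, deductible, coshare):
    """Split an insured loss between policyholder and insurer, given a
    dollar deductible and a coshare (the fraction of post-deductible loss
    the policyholder retains). Hypothetical illustration of the features
    described in the text; real policies add caps and other terms."""
    retained = min(loss, deductible)               # deductible paid first
    excess = max(loss - deductible, 0)             # amount above deductible
    policyholder = retained + coshare * excess     # deductible plus coshare
    insurer = (1 - coshare) * excess               # insurer pays the rest
    return policyholder, insurer

# e.g., a $10 million loss, $1 million deductible, 10 percent coshare:
ph, ins = split_loss(10_000_000, 1_000_000, 0.10)
# policyholder retains $1M + 10% of $9M = $1.9M; insurer pays $8.1M
```

In the national programs, the same arithmetic applies one level up, with participating insurers paying deductibles and coshares before reinsurers or the government pay.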
OECD also reported in 2005 that the insurance industries in Italy and Japan had proposed national terrorism risk insurance programs after 2001. However, according to the report, these proposals were not implemented due to lack of political will in Italy and only limited support from corporate customers as well as no strong public demand for terrorism risk insurance in Japan. In addition, in several countries where national terrorism risk insurance programs were developed and administered by the local insurance industry, such as Switzerland, governments did not want to interfere in local markets. In countries where no national terrorism risk insurance program exists, such as Canada and Mexico, individual organizations may still obtain coverage for terrorism risk from global insurers and reinsurers. Congress enacted TRIA in 2002 in response to widespread uncertainty in the terrorism risk insurance market. TRIA requires that insurers make terrorism coverage available to commercial property and casualty insurance policyholders or those seeking to obtain such a policy. TRIA was enacted as a temporary program that would terminate at the end of 2005 to allow for a transitional period for the private markets to stabilize and build capacity to absorb any future losses related to terrorism. However, the program has been extended and modified three times. Through its most recent reauthorization in 2015, Congress extended the program to 2020. TRIA is implemented by the U.S. Department of the Treasury (Treasury) as the Terrorism Risk Insurance Program. Under this program, losses are shared between the U.S. government and insurance companies, with Treasury reimbursing insurers for a share of losses associated with certain certified acts of foreign or domestic terrorism (see fig. 1). 
The 2015 reauthorization of TRIA includes changes to the program that gradually shift a greater share of losses from the federal government to insurance companies; this share is to increase annually for 5 years. In order for government coverage to be available, aggregate industry insured losses from certified acts must exceed a certain amount, known as a “program trigger.” For calendar year 2016, this amount was $120 million. If insured losses for a certified act exceed the program trigger, an individual insurer that experiences losses in excess of a deductible (20 percent of its previous year’s direct earned premiums in TRIA-eligible lines) may be eligible for reimbursement under the program. After the insurer has satisfied its deductible, the federal government would reimburse the insurer for a certain percentage of its losses (84 percent for calendar year 2016) above the deductible, and the insurer would be responsible for the remaining portion (16 percent for calendar year 2016), with a gradual decrease in the federal government share until it reaches 80 percent, in 2020. Annual coverage for losses is limited, and aggregate industry insured losses in excess of $100 billion are not covered by private insurers or the federal government. The program also includes a provision for mandatory recoupment of the federal share of losses in some instances. Under this provision, when insurers’ uncompensated insured losses are less than a certain amount (up to $31.5 billion for 2016), Treasury must impose policyholder premium surcharges on commercial property and casualty insurance policies until total industry payments reach 140 percent of any mandatory recoupment amount. When the amount of federal assistance exceeds any mandatory recoupment amount, TRIA allows for discretionary recoupment. 
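Using the calendar-year 2016 parameters just described (a $120 million program trigger, a 20 percent deductible, and an 84 percent federal share above the deductible), the federal reimbursement for a single insurer can be sketched as follows. This is a simplified illustration: it ignores the $100 billion aggregate cap and recoupment, and the insurer's figures are hypothetical.

```python
# Simplified sketch of the TRIA loss-sharing mechanics for calendar year
# 2016. Ignores the $100 billion aggregate cap and recoupment; insurer
# figures below are hypothetical.

TRIGGER = 120_000_000   # aggregate industry losses must exceed this amount
DEDUCTIBLE_RATE = 0.20  # of the insurer's prior-year direct earned premiums
FEDERAL_SHARE = 0.84    # federal share of losses above the deductible

def federal_reimbursement(insurer_loss, prior_year_premiums, industry_loss):
    """Federal payment to one insurer for a certified act (2016 terms)."""
    if industry_loss <= TRIGGER:
        return 0.0  # program trigger not met: no federal coverage
    deductible = DEDUCTIBLE_RATE * prior_year_premiums
    excess = max(insurer_loss - deductible, 0.0)
    return FEDERAL_SHARE * excess

# Hypothetical insurer: $500M in losses, $1B prior-year premiums, and
# industry losses well above the trigger.
print(f"${federal_reimbursement(500e6, 1e9, 2e9):,.0f}")  # prints $252,000,000
```

The insurer in this sketch retains its $200 million deductible plus 16 percent of the $300 million excess; under the 2015 reauthorization, the 84/16 split shifts gradually until the federal share reaches 80 percent in 2020.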
Specifically, Treasury may recoup additional amounts based on the ultimate cost to taxpayers after mandatory recoupment, the economic conditions in the marketplace, the affordability of commercial insurance for small and medium-sized businesses, and any other factors Treasury considered appropriate. The structures of the 16 terrorism risk insurance programs we reviewed involve governments and private insurers in providing terrorism risk coverage and program administration in different ways. Seven of the 16 programs we reviewed have a multilayered structure in which insurers and governments provide coverage for terrorism risk, and in some cases the government administers the programs. In 2 other programs, government entities provide all coverage of terrorism risk and administer the programs. For another 6 programs, the governments do not have a primary financial role. The U.S. program shares both similarities and differences with the 7 multilayered programs. Of the 16 terrorism insurance programs we reviewed, 7 are structured with multiple layers of coverage and a government backstop: the programs in Australia, Belgium, Denmark, France, the Netherlands, Germany, and the United Kingdom. With the exception of the program in the United Kingdom, which was established in 1993, these programs were established after the September 11, 2001, terrorist attacks to encourage private sector involvement in the market for terrorism risk insurance. They generally also provide coverage for catastrophic terrorist events, such as NBCR attacks (see app. II for more information). For example, the national terrorism risk insurance program in Denmark, which was introduced in 2010, focuses only on NBCR attacks in response to a market failure for reinsurance of NBCR risks in Denmark. 
Although certain program features may differ, the multilayered program structures in the countries in this category were composed of two to six layers of coverage provided by insurance companies, program reserves, reinsurers, and governments (fig. 2): Insurance industry deductible. Insurance companies cover the first layer of losses up to a specified amount. Program reserves. Programs collect premiums from insurers and may use some of the pooled funds, or reserves, to pay for a layer of losses. Reinsurance. Programs often use collected premiums to purchase reinsurance from the international market to cover another layer of losses. Government backstop. Governments are liable for a final layer of losses, possibly subject to a maximum coverage cap, if other participants have met their payment requirements and excess claims remain. Each participant (insurance industry, program reserves, private reinsurance, and the government) generally pays for a certain share of claims throughout the multilayered process. If claim amounts exceed a layer's designated payment limit, the next layer of participants becomes liable for excess costs. As we discuss in greater detail later in this report, multiple layers of coverage spread risk among program participants. Under a multilayered structure, insurance companies are generally responsible for the first layer of insurance protection (the insurance industry deductible) and may be required to cover a certain amount of losses before other participants provide the additional layers of insurance protection. Depending on the program, the insurance industry deductible amounts may be calculated per individual insurer or as an industry aggregate amount. For example, individual insurance companies that participate in the Australian terrorism risk insurance program are required to pay deductibles ranging from $65,000 (A$100,000) to a maximum of $6.5 million (A$10 million). 
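The sequential layer-by-layer allocation described above, in which each layer pays up to its limit and any remainder spills to the next, can be sketched as a simple waterfall. Layer capacities here are hypothetical and do not represent any particular national program.

```python
# A sketch of the sequential ("waterfall") loss allocation described in
# the text: each layer pays up to its capacity, and any remainder spills
# to the next layer. Capacities below are hypothetical.

def allocate(loss, layers):
    """layers: list of (name, capacity); capacity=None means unlimited.
    Returns each layer's payment plus any uncovered remainder."""
    payments = []
    remaining = loss
    for name, capacity in layers:
        paid = remaining if capacity is None else min(remaining, capacity)
        payments.append((name, paid))
        remaining -= paid
    return payments, remaining  # remainder > 0 only if every layer is capped

layers = [
    ("industry deductible", 100e6),
    ("program reserves",    400e6),
    ("private reinsurance", 1.5e9),
    ("government backstop", None),  # e.g., France's uncapped backstop
]
payments, uncovered = allocate(3e9, layers)
# the first three layers pay $100M, $400M, and $1.5B; the backstop
# absorbs the remaining $1.0B, leaving nothing uncovered
```

Replacing the final capacity with a dollar cap models programs such as Denmark's, where a large enough event could leave losses uncovered.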
The individual insurance industry deductibles are charged up to a maximum industry aggregate of $65.1 million (A$100 million) per event. In the United Kingdom's program, the industry-wide deductible limits are $141 million (£100 million) per event and $282 million (£200 million) for the annual aggregate. The middle layers of coverage are typically composed of program reserves, private reinsurance, or a combination of the two. Generally, insurance companies that participate in these programs pay a premium to a pool, which may be used to purchase reinsurance. Programs may pay for losses above the industry deductible using some of these program reserves, private sector reinsurance, or both. For example, Australia's program has middle layers that include a combination of program reserves and private reinsurance. The program reserves are used to pay a deductible before the private sector reinsurance, then reinsurance covers a layer of losses, and any remaining program reserves cover an additional layer of losses before the government backstop would be triggered. Similarly, the middle layers of the United Kingdom's program pay for losses using reserve funds and private reinsurance. All seven of the multilayered programs have purchased private reinsurance. In programs in Belgium, France, Germany, and the Netherlands, the middle layers primarily rely on privately purchased reinsurance. Reinsurance in these instances shifts risk from the program to the private reinsurance market. According to a few program officials and industry representatives, reinsurance also adds a layer of coverage before the government backstop is reached. The final layer of coverage in the generalized multilayered structure is the government backstop, which serves as financial assistance when other layers have been exhausted. The amount of the government backstop for national terrorism risk insurance programs varies among these seven programs.
For example, Denmark's government backstop is about $2 billion (15 billion Danish Krone), whereas the Netherlands' government backstop is about $61 million (€50 million). According to an OECD report, a maximum cap on government support for terrorism risk insurance programs limits the government's financial responsibility if a catastrophic terrorist event occurs. However, the report indicated that it may also leave policyholders with uncompensated losses in the case of a catastrophic terrorist event, and it could potentially lead to the government paying for coverage through other means, such as disaster assistance grants or loans. Of the seven multilayered programs we reviewed, only France has an unlimited backstop that is not repaid. Although the United Kingdom's program has an unlimited line of credit from the government, the program is required to pay back the government for any funds that it borrows through future premiums from participating insurers. In the event the government backstop is used in Australia and Denmark, the programs can recoup government losses after an event occurs by raising program premiums or levying taxes on policyholders, if needed. Among programs with a multilayered structure, governments have varying levels of administrative involvement. For example, a government agency administers Denmark's program. In Australia, a government corporation administers the country's program and reports to a Minister, currently the Assistant Treasurer. The program receives direction from the Minister for setting premiums, and the Minister appoints board members. In contrast, private organizations administer the programs in Belgium, Germany, France, the Netherlands, and the United Kingdom. These organizations collect premiums from and provide coverage to member insurers with generally limited administrative involvement from their governments.
However, in most of these countries, the government established legislation or coordinated with private industry to establish the program. For example, Belgium's program was established by law and is administered by a nonprofit association of insurance companies. The French program was started as a public-private partnership between the French insurance industry and the government. The program consists of two organizations: a private association of insurers that administers the pool and a government-owned insurance company that provides the unlimited government backstop. Two terrorism risk insurance programs we reviewed, those in Spain and Israel, are structured so that government entities provide all the coverage for terrorism risk and also provide program administration. In these cases, government entities, not private insurance companies, establish premiums, manage claims, and provide all of the coverage for terrorism risk. The programs in both countries were established prior to the September 11, 2001, terrorist attacks in response to civil strife or a history of political violence. While the multilayered programs previously discussed are typically structured to cover catastrophic losses, the programs in Spain and Israel are structured to cover events that could result in both larger and smaller amounts of losses, including NBCR events. In Spain, the organization that administers the program is a government entity. This organization collects funds from insurers that issue base policies for ordinary risks, maintains a reserve, and manages claims from policyholders. Insurers issue policies that include two kinds of coverage: ordinary risks and extraordinary risks, which include terrorism and natural catastrophes. The insurer collects the premiums for both types of coverage and transfers the extraordinary risk premiums to the program on a monthly basis. Insurers manage premiums and claims related to ordinary risks.
The program holds the extraordinary risk premiums in a reserve account from which it pays claims. The program maintains its own reserve funds and assets, which are held as dedicated funds to pay insurance claims and are independent from the Spanish government's public budget. Insurers do not pay a deductible in the event of a terrorist attack, but covered policyholders may have to pay a deductible. Reinsurers do not provide a layer of coverage in the program. The government provides the only layers of coverage for terrorism risk through the program reserve and an unlimited government backstop, if needed. In Israel, a government agency administers payment of claims for terrorist events. Israel established a compensation fund that is administered by a government agency, the Ministry of Finance, through the Israel Tax Authority. After a terrorist event, property owners file claims and government-provided appraisers assess the property damage to establish compensation amounts. The government is responsible for all of the financial risk of this program and provides unlimited financial assistance. Financial compensation for direct damages to commercial property is unlimited. Unlike other programs we reviewed, the program is funded through property and other taxes rather than premiums. In some programs we reviewed, the insurance industry provides all the coverage for terrorism risk, and the government provides no financial backstop to the program. In most of these programs, the government also has little administrative role. Programs in this category include those in Austria, Bahrain, India, Russia, South Africa, and Switzerland. Most of these programs were established following the September 11, 2001, terrorist attacks and cover property damage but exclude coverage of NBCR events.
However, the programs in Bahrain and South Africa were established earlier and, like the programs in Spain and Israel, which were also established prior to September 11, 2001, may include coverage of damage from events such as war or riots. Some of these programs have layers of coverage from insurers, program reserves, and reinsurance, while others are primarily reinsurance arrangements where program members collectively purchase private reinsurance for sharing losses from a terrorist attack. For example, in India insurers pay premiums to be members of the program and are responsible for a deductible amount to cover the first layer of losses. The program collects premiums, manages program reserves, and purchases reinsurance. Similarly, the program in South Africa also includes layers of coverage from insurers, program reserves, and reinsurance. In these programs, the government provides no financial backstop, but government-owned organizations administer both programs, and in India the government approves the premium rates proposed by the program administrator. In other programs, such as those in Austria, Bahrain, Russia, and Switzerland, insurance companies have established reinsurance arrangements with each other to cover losses, and their governments have no financial or administrative roles. For example, the Austrian program is run by an insurance association. Member insurance companies maintain individual reserves to cover their program deductibles, and they collectively purchase reinsurance above the deductible amount. Similarly, the Russian program is run by an association of insurers and uses a reinsurance arrangement in which member insurance companies each pay a share to purchase reinsurance coverage. The governments provide neither financial nor administrative support to these programs. Like many of the foreign programs we reviewed, the U.S.
terrorism risk insurance program includes multiple layers of coverage, with insurers and the government each providing a layer of coverage. If a sufficiently costly attack were to occur, insurers would pay losses to policyholders, and a portion of the losses paid above the insurer's deductible would be reimbursable by the government. Similar to other programs, the final layer of the U.S. program is a government backstop, although as noted later, backstop payments are made in conjunction with the coshares of participating insurers. Like the programs in Australia, Belgium, Denmark, Germany, and the Netherlands, the U.S. program also has a cap on the government backstop. The program caps the combined liability for insurers and the government at $100 billion per year. However, the structure of the U.S. program differs from those of other multilayered programs—for example, those in Australia and the United Kingdom—because it does not have middle layers of program reserves from which to pay claims or purchase reinsurance. Instead, the next layer of coverage after insurer deductibles in the U.S. program consists of coshared payments between the government and insurers. Although reinsurance is not a formal layer of coverage in the U.S. program, individual insurers may purchase reinsurance to help cover the cost of their deductibles or coshares. Another difference is that the U.S. program does not collect up-front premiums from insurers. Instead, the government layer of coverage is funded through a recoupment process that levies surcharges on all commercially insured policyholders after a terrorist event. Unlike the programs in Australia, Denmark, and the United Kingdom, which have funding mechanisms both before and after an event has occurred, recoupment after an event occurs is the primary funding mechanism for the government layer in the U.S. program.
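The layered allocation of losses common to these programs can be illustrated with a short calculation. This is a minimal sketch of a generic waterfall; the layer names and dollar limits below are illustrative assumptions, not figures from any specific program, and it omits program-specific features such as the U.S. coshare arrangement or recoupment.

```python
def allocate_loss(total_loss, layers):
    """Allocate an event's losses across ordered coverage layers.

    layers: list of (participant, limit) pairs, ordered from the first
    payer (the insurer deductible) to the last (the government backstop).
    Each layer pays up to its limit; the remainder passes to the next
    layer. Losses above the sum of all limits are left uncovered.
    """
    remaining = total_loss
    shares = {}
    for participant, limit in layers:
        paid = min(remaining, limit)
        shares[participant] = paid
        remaining -= paid
    shares["uncovered"] = remaining
    return shares

# Hypothetical four-layer program (amounts in millions, illustrative only)
layers = [
    ("insurer deductible", 100),    # first layer, paid by primary insurers
    ("program reserves", 400),      # pooled premiums from member insurers
    ("reinsurance", 500),           # privately purchased cover
    ("government backstop", 2000),  # capped final layer
]

# A mid-sized event: insurers pay 100, reserves pay 150, later layers pay 0
print(allocate_loss(250, layers))
```

In this sketch a moderate loss never reaches the reinsurance or government layers, consistent with the report's observation that private participants would likely cover most conventional events; only a loss exceeding the first three layers' combined limits would draw on the backstop.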
For the six programs we reviewed in depth, the private sector and program reserves would cover initial losses, and governments would often be responsible for a potentially large share of losses in more extreme events. In addition, most of these programs fund losses and costs up front, although the United States, in contrast, uses a post-event funding mechanism for the government layer. Table 1 shows the six programs and some of their attributes. Terrorism risk insurance programs with government participation and in countries with larger economies generally provided more coverage for losses. Specifically, among the programs we selected, those with a government backstop (the programs in Australia, the United Kingdom, the United States, and Spain) provided more coverage for losses than programs without government backstops, as shown in figure 3. For example, the Indian government does not financially back its terrorism risk insurance program, and the coverage provided for losses is small compared to programs with government backstops. Additionally, private sector coverage is generally larger for those programs in countries with larger economies, as measured by GDP. For example, the GDP of the United States is the largest of the six countries we reviewed, and the coverage provided by the private sector in the United States is also larger. In the event of a terrorist attack, insurers in the United States could pay more than the total coverage provided by the other five countries' national programs. Programs in India and Austria that have no government backstop provide less coverage for losses than the other four programs. For example, India's program limits the maximum claim payout per location to $584,000 (INR 10,000 million), and Austria limits the total annual payout to $240 million (€200 million).
A program representative from Austria told us that market demand also affected the program’s coverage amount and that although more reinsurance may be available to increase the coverage, the current program size meets the demand in the market. Other secondary factors could be related to the amount of program coverage but to a lesser extent than government participation and economy size. Some of these factors include the range of losses covered and whether the program includes mandatory or voluntary participation. For example, the program in Spain covers all catastrophic events, including flooding and earthquakes, and as a result the amount of coverage provided under Spain’s program is relatively large for its GDP. In contrast, all the other selected programs cover only terrorism risk. In addition, insurers participating in programs where coverage is mandatory have a wider base of policyholders from which to collect premiums compared to programs where some policyholders may not purchase the insurance, which can affect the amount of program coverage. For example, in Australia, terrorism risk coverage is mandatory, and premiums are collected from all policyholders. Terrorism coverage is voluntary for policyholders in the United Kingdom, so premiums collected from member insurers by the program originate from a smaller, less diversified group of policyholders. However, we found that government participation and economy size may have a greater relationship to the amount of program coverage than these secondary factors because at least one voluntary program had larger coverage than a mandatory program. For example, the United Kingdom’s program is voluntary, and its program coverage is much larger than that of Australia’s mandatory program. 
The United Kingdom's GDP is about twice that of Australia, and the United Kingdom's program before its government-backed line of credit takes effect provides more coverage than Australia's entire terrorism risk insurance program, including its government backstop. In the six programs we selected for in-depth review, the risk of losses is spread across different types of participants (see fig. 4). The participants involved and their share of losses vary across programs, but according to program officials, private sector participants and program reserves likely would cover commercial and property losses from most conventional terrorist events. As we discuss later, four foreign terrorism risk programs were able to cover the losses resulting from recent terrorist attacks without using government funds. We found that, depending on the type of terrorism risk insurance program, losses were shared among different numbers and types of participants.

Multilayer programs with government role. The programs with multiple layers and government backstops, such as those in the United Kingdom and Australia, spread the risk of losses among the most participants. In these programs, primary insurers, private reinsurers, program reserves, and government funding could be used to pay for losses. Under the U.S. program, the government, insurers, and policyholders share in the risk of losses.

Programs with no financial government role. Although the governments would not share the losses under the programs in Austria and India, these programs spread losses among different private sector participants, with both primary insurers that are members of the programs and private reinsurers sharing in the losses.

Program in which government retains all risk. Spain's program does not spread the risk among private sector and government participants, but the risk is diversified among policyholders.
The risk is diversified in that it covers individual and business policyholders against both natural catastrophe and terrorism risks. The catastrophe coverage protects the same property or persons to the same level as risks covered in the underlying base policy. The program does not purchase private reinsurance, although it has the authority to do so, and all losses would be paid from the program's reserves or the government backstop, if needed. Program reserves and reinsurance play an important role in loss-sharing in most of the programs we reviewed. Of the six programs we reviewed, four (Australia, India, Spain, and the United Kingdom) have reserves. The program reserves in India and the United Kingdom (which have private program reserves) are larger and thus have a more significant role in loss-sharing than the reserves of the Australian program (which has a government program reserve). In addition, four programs (Australia, Austria, India, and the United Kingdom) purchase private reinsurance. Reinsurance also is a significant part of some of the programs, as shown previously in table 1. For example, the reinsurance in India's program represents 36 percent of its total coverage. According to program officials and insurance industry stakeholders, reinsurance increases program capacity, spreads risk among other participants, and helps protect the government from losses. For example, officials of the United Kingdom's program said they purchased reinsurance to privatize some of their insurance risk. Insurance brokers said that programs purchase reinsurance to increase the total pool capacity. Program officials also said that purchasing reinsurance can protect the governments involved by adding another layer of coverage before the government backstop. For example, Australian program officials said purchasing reinsurance expanded the capacity of the program and provided the government additional protection from exposure to terrorism risk.
An Australian Treasury report also noted that purchasing reinsurance through the pool was a cost-effective way to access international reinsurance markets for terrorism risk. In contrast, as previously mentioned, reinsurance is not part of the structure of the U.S. program, but insurers may individually purchase reinsurance to help cover their deductibles and coshares. When the government has a financial role, it is generally responsible for a potentially large proportion of losses and would be involved only in the event of a catastrophic terrorist attack. In the event of a large terrorist attack that exhausted coverage from the other participants, the governments of Australia, Spain, the United Kingdom, and the United States would be responsible for a potentially significant proportion of losses. For example, an NBCR terrorist attack could result in losses that exhausted program resources. Programs that explicitly cover NBCR terrorist attacks—which include, to different extents, the programs in the United Kingdom, Spain, and Australia—could have a greater chance of losses reaching the level where the government backstop begins paying losses than programs that do not explicitly cover this risk. Although both Spain and the United Kingdom have unlimited government backstops, in Spain the government would be responsible for a potentially greater proportion of losses because the government retains all the financial risk for terrorism. The financial losses would be paid from the government program reserves, which are independent from the public budget, and an unlimited government backstop if the program were ever to exhaust its reserves. In the United Kingdom, although the government backstop is also unlimited, the program has private reserves and has a line of credit that is expected to be repaid. In Australia and the United States, the government also is responsible for a potentially large share of losses, but the government's exposure is capped under these programs.
Although the U.S. government's exposure is large, it is not unlimited as in the programs in Spain and the United Kingdom. The U.S. government's potential share of property and casualty losses from a catastrophic terrorist attack is nearly 10 times the Australian government's exposure, including both the government program reserve and the government backstop. The government's exposure in the Australian program is smaller than in the other programs because the government backstop is limited. Generally, changes to the U.S. program and three other selected programs over time have decreased governments' potential share of losses and increased the private sector's potential share. For example, the programs in Australia and the United Kingdom did not purchase reinsurance when they were first established in 2003 and 1993, respectively. Both programs have since increased capacity by purchasing private reinsurance, which increases the event size at which a government backstop—or in the case of the United Kingdom program, a line of credit—and a portion of reserves (private or government) would be necessary. Since 2009, the Australian program has purchased private reinsurance through an international insurance broker with the annual premium income it receives from its members. Around 60 reinsurers participate in providing the reinsurance layer of coverage for the program. Australian program officials said that purchasing reinsurance not only provides further protection to the government but also helps bring reinsurers back into the market, as many stopped offering coverage for terrorism risk following September 11, 2001. In 2015, the United Kingdom's program purchased reinsurance for the first time to increase the capacity of its pool and entered into an agreement with various reinsurers for $2.5 billion (£1.8 billion) in additional reinsurance capacity.
Representatives of global reinsurers told us that they preferred providing reinsurance to national programs rather than covering terrorism risk in individual insurers' portfolios or reinsuring individual properties. They said that providing reinsurance to national programs helps reinsurers better diversify their risk geographically and limits their potential losses. In addition, one reinsurer said that program officials often have expertise in terrorism risk insurance and high-quality data on the program's exposure. Changes to the Austrian and U.S. programs also have involved shifting more of the risk of losses to primary insurers by increasing parameters such as deductibles or coshare amounts. For example, the Austrian program increased its insurers' deductible from $60 million to $90 million (€50 million to €75 million) in January 2013, according to an insurance company representative of the program. The representative told us this action decreased the amount of reinsurance the pool needed to purchase, which decreased the cost to the pool members but required the insurers to hold larger reserve accounts at their institutions. The members have until 2018 to increase their reserve accounts. In the United States, the 2015 reauthorization of the program included changes to several provisions—including increasing insurers' coshare amount, the trigger for government involvement, and the mandatory recoupment amount—all of which decrease the government's share of losses and increase the private sector's share. While four of the six programs we reviewed have made changes to increase the private sector's share of losses, none of the six have made changes to expand their coverage to include cyberterrorism. Cyberterrorism is an increasing concern for businesses and governments, according to recent United Kingdom and U.S. government reports.
For example, according to a United Kingdom government report, physical damage due to cyberattacks is a growing concern, both in terms of severity and frequency, due to the increasing interconnectedness between cyberspace and the physical world. In addition, we have previously found that pervasive and sustained cyberattacks against the United States could have potentially devastating impacts. Although insurance coverage for cyber risk is an emerging market, our review did not identify any terrorism risk insurance programs that explicitly covered cyberattacks. However, officials we spoke with from programs in Australia, the United Kingdom, and Spain suggested that covering losses from cyberattacks should be studied. In the United States, TRIA does not specifically address losses from cyberterrorism attacks. In a May 2014 report on terrorism risk insurance, we recommended that Treasury gather information from the insurance industry related to how cyberterrorism is defined and used in policies and clarify whether losses that may result from cyberterrorism are covered under TRIA. In its comments on that report, Treasury stated that TRIA does not preclude federal payments for a cyberterrorism event. Treasury also stated that while the agency planned to continue to monitor this issue as it develops and collects applicable market data as necessary, an advance determination of when a cyber event is an act of terrorism was not needed. In 2015, Treasury officials told us that they would, as appropriate, consider addressing certain issues related to cyberterrorism risk under the terrorism risk insurance program in the context of other studies and rules required in the 2015 reauthorization of the program. All five of the foreign programs we reviewed in depth—those in Australia, Austria, the United Kingdom, India, and Spain—collect funds up front from insurers or policyholders to cover their share of losses.
Primary insurers pay a portion of premiums collected from policyholders to the national programs. These up-front funds may be used to pay for reinsurance, build reserves, or pay government backstop fees, depending on the program. The flow of collected funds for the programs in Australia and the United Kingdom is illustrated in figure 5. As discussed in the next section, the premium income also covers administration costs of the programs. Officials from Australia and the United Kingdom stated that they use modeling of event sizes and scenarios to understand the respective program’s potential losses and to help determine the appropriate level of premiums and reserves. In determining premium amounts, they also consider whether enough funds are collected to cover costs and pay government fees. In addition, both programs receive income from investments of reserve funds, which can be used to pay for administrative costs. In Austria, the program does not collect premiums from insurers, but insurers collectively purchase reinsurance using premiums collected from policyholders, which is considered up-front funding. The programs in the United Kingdom and Australia charge insurance companies premiums that are based on perceived risk for terrorism in different locations. In both programs, insurers pay higher premiums to the programs for reinsurance coverage on properties in higher-risk locations, such as central business districts in large cities where a single event could produce very large losses. In the United Kingdom, the program determines premiums as a percentage of coverage and by geographic location. Specifically, an insurer pays a higher premium to the pool for reinsuring a property located in the central business district of London compared to a property located in the central business districts in other United Kingdom cities or in rural areas. 
The premium charged by the Australian program to member insurers is determined by the Australian Treasury and is set by geographical location based on population density. The Australian program bases member insurer premiums on a percentage of the collected premiums from policyholders. As a result, Australian program officials stated that their annual premium stream fluctuates with terrorism insurance rates, which complicates program management. In contrast to Australia and the United Kingdom, the up-front funding used in the programs in Spain, India, and Austria is not based on perceived risk for terrorism in different locations. In these programs, the up-front funds are collected using flat rates based on coverage amount or market share, which do not vary by location. Spain’s program charges policyholders a surcharge, which is based on the coverage amount in the base insurance policy. The surcharge—which varies for residential, commercial, and personal lines—covers all extraordinary risks, including terrorism and natural catastrophes. Insurers in Spain pay the program the surcharge that they collect from policyholders. In India, the premium is based on the coverage amount. According to program officials, the underwriting committee of the program proposes premiums and reviews them every 3 or 4 years, and the government regulator approves them. Up-front funds related to Austria’s program are based on market share. Participating insurers are responsible for a proportionate share of the reinsurance premium based on each company’s share of the terrorism risk insurance market. In Australia and the United Kingdom, both programs pay a fee to their respective treasuries for the promise of payment or a line of credit if the pools’ funds are depleted. From its inception to 2013, Australia’s program did not make any payments to the government so that its pool could build its reserves. 
In 2012, the Australian Treasury recommended that the pool pay a dividend to the government over a 4-year period because the reserves had grown to an adequate level, according to an Australian Treasury official. In 2014, the pool paid the Australian Treasury $97.7 million (A$150 million) for the government backstop and expects to make annual payments to the Treasury until 2017-2018. Program officials said the fee amount was determined through an actuarial process. When the United Kingdom's program was established in 1993, it was expected that the program would be required to pay a percentage of its premiums to the United Kingdom's Treasury once the program had built its reserves to a certain amount. In 2015, this percentage was increased to 50 percent of annual gross premiums plus 25 percent of any annual surplus, according to the agreement between the program and the Treasury. A United Kingdom official said the fee is for the guaranteed loan agreement. According to a United Kingdom Treasury official, the increase was intended to reflect the potential cost of capital to the government for backing this liability, and the current reserves were at a sufficient level to cover losses. The revenue that the programs in the United Kingdom and Australia provide to their respective national treasuries is placed in general revenue accounts. The other four programs (Austria, India, Spain, and the United States) do not pay fees to their treasuries for their coverage. In contrast, the U.S. program generally uses a post-event funding mechanism. In the United States, primary insurers collect up-front premiums from policyholders, but the policies explicitly state that these premiums do not cover the government's share of the losses. The government would collect premiums from primary insurers through surcharges paid by policyholders after an event occurs to cover its share of losses, subject to the recoupment provisions discussed earlier.
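The two funding approaches described above can be contrasted with a minimal sketch: a risk-based up-front premium that varies by location tier, as in the United Kingdom and Australian programs, versus a post-event surcharge apportioned across policyholders, in the spirit of the U.S. recoupment mechanism. All rates, tiers, and dollar amounts below are hypothetical assumptions for illustration, not the programs' actual rates or formulas.

```python
# Illustrative location risk tiers (hypothetical rates, fraction of coverage)
RATES = {
    "central_business_district": 0.0030,  # highest perceived risk
    "other_urban": 0.0015,
    "rural": 0.0005,
}

def upfront_premium(coverage, location_rate):
    """Risk-based up-front premium: a percentage of the coverage amount
    that varies by the insured property's location risk tier."""
    return coverage * location_rate

def post_event_surcharge(government_outlay, policyholder_premium,
                         total_market_premium):
    """Stylized post-event recoupment: the government's outlay is
    recovered via surcharges apportioned across all commercially
    insured policyholders in proportion to their premium share."""
    return government_outlay * (policyholder_premium / total_market_premium)

# Up-front: $10 million of coverage in a central business district
print(upfront_premium(10_000_000, RATES["central_business_district"]))

# Post-event: a $1 billion government outlay recouped from a policyholder
# paying $50,000 of premium in a $200 billion commercial premium market
print(post_event_surcharge(1_000_000_000, 50_000, 200_000_000_000))
```

The sketch highlights the trade-off the report discusses: up-front funding accumulates resources (and investment income) before any event but requires setting rates under deep uncertainty, while post-event funding avoids holding reserves but collects nothing until after losses occur.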
Academic literature we reviewed and program and industry representatives we interviewed indicated a variety of views on the benefits and challenges of pre-event funding for catastrophic events. One benefit cited for collecting funds before an event occurs is that it helps limit taxpayer exposure: as funds accumulate from the private sector over time, insurers remain involved in sharing some of the risk, and there is a level of certainty about their ability to cover potential losses. One program representative also noted that pre-event funding facilitates a program’s ability to purchase private reinsurance. One study also noted that pre-event funding helps keep premiums affordable. For example, an independent or government organization administering the program has lower capital costs compared to private reinsurers, which would typically incorporate these costs into their premiums. Further, programs can use pre-event funding mechanisms to offer premium discounts, which also can help keep premiums affordable. For example, under the United Kingdom’s program, businesses can receive a 2.5 percent premium discount for instituting risk mitigation procedures, such as adding concrete barriers around buildings. However, an academic study, an insurance industry representative, and our previous reports on terrorism insurance acknowledged some challenges associated with pre-event funding for terrorism risk. For example, because of the difficulty in estimating the frequency and severity of terrorist events, the appropriate amount to collect before an event occurs and the appropriate amount to keep in reserve can be challenging to determine. It could take many years to accumulate sufficient funds to cover potential losses, and once funds are built up, there may be pressure to use them for other purposes.
For example, as previously discussed, the governments in Australia and the United Kingdom recently determined that the programs in their countries had built up sufficient levels of reserves and increased the fee that the respective programs pay to their national treasuries for the coverage of a government backstop. Another challenge cited by an insurance industry representative, in an academic study, and in our 2014 report is developing adequate controls and monitoring over the management of the funds. For example, as discussed in more detail later, an administrative structure is needed to manage investments and accounting. Finally, our 2008 report on terrorism insurance and an industry association representative noted that pre-event funding could divert financial resources away from other purposes, such as insurers purchasing reinsurance. The programs in Australia, Spain, India, and the United Kingdom have staff who carry out their responsibilities, such as collecting the premiums and fees, purchasing private reinsurance, and paying claims to either pool members or policyholders. The costs for carrying out these responsibilities are generally a small percentage of the programs’ overall income but are higher than those of the U.S. program. The Indian and Austrian programs have minimal administrative costs. For example, the Austrian program does not incur any administrative costs because it employs no staff for administration. The program obtains reinsurance through a major reinsurance broker, and primary insurers pay their portion of the reinsurance premium directly to the broker. The U.S. program also has minimal administrative costs. The program is administered by the Secretary of the Treasury, with the assistance of the Federal Insurance Office. With the exception of the programs in Austria and the United States, the administrative expenses of the other programs are generally paid with the premium income they collect. 
Table 2 provides an overview of administrative expenses and activities for the six selected terrorism risk insurance programs. According to Australian and United Kingdom Treasury officials, their respective treasuries incurred minor costs for administering the government backstops. Australian Treasury officials told us that the Treasury manages payroll, information technology, and property maintenance, the total costs of which are equal to a part-time staff member’s working hours, but that the program pays a fee for its services. A Treasury official from the United Kingdom told us that one staff person may spend only part of their time on tasks related to the program and estimated the costs to be about $9,900 (£7,000). Among the countries we selected for review, Spain, the United Kingdom, India, and Australia have experienced terrorist attacks with property and casualty financial losses within the scope of their respective terrorism risk programs since September 11, 2001. According to program officials, the programs in Spain, the United Kingdom, and India paid all the losses incurred without government financial assistance, and the losses from the December 2014 attack in Sydney, Australia, did not exceed the program’s deductibles for the primary insurers. In the United States, no act has been certified as an “act of terrorism” under TRIA. Spain. Spain experienced two terrorist attacks in the 2000s: a bombing on a Madrid commuter train in March 2004 (resulting in personal damages), and a bombing at an airport in December 2006 (resulting in material damages). According to Consorcio documentation, the program paid a total of about $152 million (€98 million) to cover the claims from both attacks. The program’s reserves were sufficient to cover all of the losses. According to Consorcio officials, the program did not request additional funds from the Spanish government to pay the claims.
Officials of Spain’s program told us that no substantive changes were made to the program as a result of the two terrorist bombings. United Kingdom. On July 7, 2005, the city center of London experienced a series of bombings on its transport network. After the 2005 London bombings, according to program officials, the program received notice of claims of $25.2 million (£13.66 million) from the two member insurers. They added that the two insurers paid $9.1 million (£4.9 million) to policyholders, which was considered the deductible amount the two insurers were required to pay as program members. After the insurers’ losses exceeded the deductible, the two insurers requested financial assistance from the program for the remaining $16.2 million (£8.76 million), which the program paid with funds from its reserves, according to a program official. According to program officials, program reserves were sufficient to cover the rest of the members’ losses, and the program did not need government funds to pay the claims. India. The Indian program and the private reinsurers covered all losses from the November 26, 2008, attacks in Mumbai, India. According to an official from India’s program, the total amount of financial losses from the 2008 Mumbai attacks was $321 million (INR 3,769 million), and the program was responsible for paying $128 million (INR 1,500 million). According to an official, the Indian pool and the commercial reinsurers were able to pay the claims without requesting additional funds from the government of India. In addition, the official told us that since the 2008 attacks, the coverage provided to insured participants increased over time because of the high demand for insurance. Australia. On January 15, 2015, the Australian Treasurer declared that the attack on the Lindt Café, Martin Place, Sydney, was a terrorist attack for the purpose of paying claims.
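The deductible-then-pool split reported for the 2005 London bombings follows simple arithmetic: member insurers absorb claims up to their deductible, and the pool pays the remainder from its reserves. A sketch using the reported pound amounts (in millions):

```python
# Sketch of the loss-sharing split described for the UK program: insurers
# pay claims up to the deductible; the pool pays the excess from reserves.
# Amounts are the reported 2005 figures in millions of pounds.

def split_losses(total_claims: float, insurer_deductible: float):
    """Return (insurer share, pool share) for a given level of claims."""
    insurer_share = min(total_claims, insurer_deductible)
    pool_share = max(total_claims - insurer_deductible, 0.0)
    return insurer_share, pool_share

insurer, pool = split_losses(total_claims=13.66, insurer_deductible=4.9)
print(insurer, round(pool, 2))  # 4.9 8.76
```

The same function also captures the Sydney outcome described below, where claims stayed under the deductible and the pool's share was therefore zero.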
According to the Australian program’s 2015 financial statement, the value of the claims submitted did not exceed the individual deductibles of program members. Moreover, claims were not expected to reach beyond the deductible; therefore, the program incurred no claims expenses. The Australian Treasury’s 2015 Triennial Review of the program was completed after the Lindt Café attack, but the Treasury did not recommend any structural changes to the program as a result of the attack. Among its recommendations, the Australian Treasury suggested that the current administrative structure of the pool be retained, that the program continue to have the discretion to purchase reinsurance, and that a fee be paid to the Treasury for the backstop. We provided a draft of this report for review and comment to the Treasury, including the Federal Insurance Office, and to the National Association of Insurance Commissioners (NAIC). In addition, we provided relevant sections to the terrorism risk insurance programs of Australia, Austria, India, Spain, and the United Kingdom for their technical review. Treasury and the terrorism risk insurance programs of Australia, Spain, and the United Kingdom provided technical comments that we incorporated, as appropriate. We will send copies to Treasury, NAIC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.
For terrorism risk insurance programs in the United States and selected foreign countries, the objectives of our report were to (1) compare the organizational structures and the role of government and (2) examine the loss-sharing arrangement between the government and private sector and the methods by which the programs are funded. To address these objectives, we reviewed the Terrorism Risk Insurance Act of 2002 (TRIA), as amended, and the Terrorism Risk Insurance Program Reauthorization Act of 2015, as well as prior GAO reports on this topic. We also reviewed relevant documents from the Organisation for Economic Co-operation and Development (OECD) and its International E-Platform on Terrorism Risk Insurance (OECD E-Platform); relevant industry reports; and documents available on national terrorism risk insurance program websites, such as annual reports and program descriptions. To identify documents and reports, we relied in part on Internet and electronic database searches on national terrorism risk insurance programs. Further, we examined sources cited in those documents to identify additional documents for review. We also relied on recommendations from the officials and representatives we interviewed about national terrorism risk insurance programs. To identify documents and reports on the comparison of pre-event and post-event funded program financing, we relied on Internet and electronic database searches on this topic. In addition, we conducted a number of interviews. To collect information on TRIA, we interviewed officials from the Department of the Treasury (Treasury), the National Association of Insurance Commissioners, the Congressional Budget Office, and the Congressional Research Service.
To collect information on the terrorism risk insurance market in countries with national terrorism risk insurance programs and those without, we interviewed representatives from the OECD, several global reinsurance and insurance firms, and industry participants, such as representatives from insurance trade associations, a rating agency, and insurance brokers. We selected among the largest reinsurance firms that have experience with private reinsurance agreements involving foreign terrorism risk insurance programs. We made the selection based on documentation from Treasury’s Federal Insurance Office and a large international broker that has participated in reinsurance agreements between private reinsurers and foreign terrorism risk insurance programs. To collect information on individual programs, we interviewed government and national terrorism risk insurance program officials from Australia, Austria, Spain, and the United Kingdom. More specifically, we interviewed officials from Australia’s Government Treasury and the Australian Reinsurance Pool Corporation (ARPC); Austria’s Österreichischer Versicherungspool zur Deckung von Terrorrisiken; Spain’s Consorcio De Compensacion De Seguros (Consorcio); and, from the United Kingdom, HM Treasury, the Bank of England’s Prudential Regulation Authority, and the Pool Reinsurance Company Limited (Pool Re). In addition, we corresponded with officials from the Indian Market Terrorism Risk Insurance Pool to collect information on that program, and we corresponded with representatives of other national terrorism risk insurance programs to clarify program information collected from their websites or other sources. We did not conduct an independent legal analysis to verify the information provided about the laws, regulations, or policies of the foreign countries selected for this study.
To address objective one, using information we collected from OECD and program documents and interviews, we identified countries with national terrorism risk insurance programs and reviewed available information on those programs. The U.S. terrorism risk insurance program is limited to commercial property and casualty insurance, and therefore we limited our review to programs that provide similar coverage. In addition to the U.S. program, we identified 15 national terrorism risk insurance programs with sufficient information available to be included in our review. We excluded from our review programs that did not provide commercial property and casualty insurance coverage, that had limited or no available information, or that were undergoing restructuring. To develop generalized descriptions of the features of terrorism risk insurance programs, we analyzed documents on the 16 national terrorism risk insurance programs, including the U.S. program. Specifically, one GAO analyst independently reviewed the documents and categorized certain features of each country’s terrorism risk insurance program, including the extent of government involvement; program funding; and coverage options, such as nuclear, biological, radiological, or chemical weapons coverage. These categorizations were verified by a second analyst, and any discrepancies were resolved by both analysts or a moderator. The analysts used a coding structure to track their findings. Using this information, we developed three categories of national terrorism risk insurance programs: multilayered structures with government backstop, structures in which government provides all terrorism risk coverage, and structures in which insurers and reinsurers provide all terrorism risk coverage. Then we grouped the national programs into one of the three categories and compared these categories to the program in the United States.
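The double-coding step described above (one analyst codes each program's features, a second analyst verifies, and any discrepancies are resolved) can be sketched as a simple comparison of the two analysts' codings. The program names and category codes below are hypothetical:

```python
# Sketch of the double-coding verification described in the methodology:
# flag every item on which the two analysts' codes disagree, so that the
# analysts or a moderator can resolve the discrepancy.
# Program names and category codes are hypothetical.

def find_discrepancies(coder_a: dict, coder_b: dict) -> dict:
    """Return items where the two analysts assigned different codes."""
    return {k: (coder_a.get(k), coder_b.get(k))
            for k in coder_a
            if coder_a.get(k) != coder_b.get(k)}

a = {"Country X": "multilayered", "Country Y": "government-only"}
b = {"Country X": "multilayered", "Country Y": "private-only"}
print(find_discrepancies(a, b))
# {'Country Y': ('government-only', 'private-only')}
```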
To assess the data reliability of the OECD’s E-Platform, specifically the country profiles on the site, we followed up with OECD officials through e-mail correspondence on the timeliness of the profiles. OECD officials advised us of any updates since the E-Platform was launched in 2014 and said that the information in the country profiles was provided by the national terrorism risk insurance programs. As available, we corroborated the information in the E-Platform’s country profiles with national terrorism risk insurance program documentation. We determined the data were sufficiently reliable for the purpose of categorizing and identifying the specific features of the programs. To address our second objective, we selected six programs for further review. We used information collected through the content analysis—including program documentation and interviews with government and program officials—and developed criteria to help ensure that we selected programs representing diverse characteristics. For example, our criteria included selecting countries representing different types of programs, such as at least one country with a national terrorism risk insurance program that did not include any government financial support as part of the insurance coverage, at least one country with a national terrorism risk insurance program that was not a member of OECD, and at least one country with a national terrorism risk insurance program that was developed prior to 2001. Using these characteristics, among others, we judgmentally selected the programs in Australia, Austria, India, Spain, and the United Kingdom. We also included the program in the United States in the review for this objective. We further assessed and compared the national terrorism risk insurance programs in Australia, Austria, India, Spain, the United Kingdom, and the United States.
We reviewed the six programs to identify and assess the layers of insurance coverage and which participants (policyholders, insurers, reinsurers, and government) are responsible for each layer; how the programs are structured; how the financing of the programs was established; how the programs have been affected by actual claims; and the programs’ administrative activities and costs. In determining the insurance coverage and loss-sharing arrangements of the programs, we reviewed documentation from OECD and the individual programs. In addition, we analyzed the 2014 annual reports of ARPC, Consorcio, and Pool Re. The reports included a description of the programs, specifically the programs’ activities, and an audited financial statement of the programs, which provided information on programs’ revenue and costs. A financial statement on the Austrian terrorism risk insurance program was not available, so we relied on other documents describing the program and an interview with a program representative for this information. For India’s terrorism risk insurance program, we relied on an annual report of the organization that administers the program and written responses from a program representative. We shared relevant sections of the draft report with program officials in the five countries to confirm the accuracy of the information. In comparing the coverage of the six selected terrorism risk insurance programs, we considered the economic output of the different countries. In identifying the gross domestic product (GDP) for the six countries, we used GDP data from the World Bank. For all foreign currency amounts we present in the report, we converted them into 2014 U.S. dollars by applying an economic variable known as the purchasing power parity rate that we obtained from the World Bank.
The purchasing power parity rate is the rate at which the currency of one country would have to be converted into that of another country to buy the same amount of goods and services in each country. For values that were from a year other than 2014, we converted the value into U.S. dollars using the purchasing power parity rate for that year, and then used U.S. GDP data to convert the value into constant 2014 U.S. dollars. To estimate coverage in the United States, we simulated costs to the government and private insurers for a $100 billion loss in 2016 with the top 20 insurers experiencing losses. To analyze the administrative costs of the five selected foreign terrorism risk insurance programs and how such costs were incorporated into program fees or premiums, we reviewed the programs’ audited financial statements, where available. Only the terrorism risk insurance programs of Australia and the United Kingdom had financial statements from 2010 to 2014 that we could review. In addition, we reviewed the Spanish program’s financial statement for 2014 for its administrative costs in 2013 and 2014. According to an official representing the Austrian program, the program does not have any administrative costs. India’s program does not maintain a financial statement on its terrorism risk insurance program with specific administrative expenses, so these data were not available to us. To assess the reliability of the available data on administrative costs, we reviewed the documentation on the data and assessed them for consistency and whether the financial statements were audited. The financial statements of the Australian and Spanish programs were reviewed and signed by their respective Supreme Audit Organizations. The financial statements for the United Kingdom’s program were signed by a public accounting firm. We determined the data were sufficiently reliable for the purpose of reporting on these programs’ administrative costs. 
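The two-step conversion described above (convert at the PPP rate for the year of the value, then inflate to constant 2014 dollars) can be sketched as follows. The PPP rate and price-index values are placeholders, not actual World Bank figures:

```python
# Sketch of the currency conversion described in the methodology: foreign
# amounts are converted to U.S. dollars at the purchasing power parity
# (PPP) rate for the year of the value, then inflated to constant 2014
# dollars using a U.S. price index. All rates below are placeholders,
# not actual World Bank data.

def to_constant_2014_usd(amount_foreign: float,
                         ppp_rate: float,
                         price_index_year: float,
                         price_index_2014: float) -> float:
    """ppp_rate: units of foreign currency per U.S. dollar (PPP basis)."""
    usd_in_year = amount_foreign / ppp_rate
    return usd_in_year * (price_index_2014 / price_index_year)

# Hypothetical: 150 units of foreign currency at a PPP rate of 1.50 per
# U.S. dollar, with no price-level adjustment needed (index ratio of 1).
print(to_constant_2014_usd(150.0, 1.50, 100.0, 100.0))  # 100.0
```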
Further, we reviewed annual reports and other program documentation and interviewed officials from the terrorism risk insurance programs of Australia, India, Spain, and the United Kingdom to identify terrorist attacks in their countries since 2001. Using interviews and correspondence with program officials, we identified the amounts paid on claims resulting from the terrorist attacks and the financial effect on the programs from the claim payments and unanticipated outcomes or program changes since 2001, if any, related to the claim payments. We also interviewed a representative of the Austrian terrorism risk insurance program to confirm whether the program had paid out any claims since its creation. We focused on claims and changes since 2001 because the attacks of September 11, 2001, led a number of countries to develop new terrorism risk insurance programs. We conducted this performance audit from January 2015 to April 2016, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The tables in this appendix list the terrorism insurance program features that we identified across 16 reviewed programs. The tables illustrate the variation in program features across the programs we reviewed. For some programs, including those in Bahrain, India, and Switzerland, we did not find information in our review to describe all program features. Multilayered terrorism risk insurance programs with government backstops have some similarities in structure, but features among programs may differ. 
Generally, these programs have layers of insurance coverage that include an insurance industry deductible, program reserves, reinsurance, and a government backstop. Table 3 illustrates some differences and similarities among eight multilayered programs with government backstops that we reviewed. In the national terrorism risk insurance programs in Spain and Israel, government entities provide all the terrorism risk coverage. Even though the programs are similar in this regard, other program features differ, as shown in table 4. In the programs in which insurers and reinsurers provide all the terrorism risk coverage, the government provides no financial backstop to the terrorism risk insurance program and in most cases—with the exception of India and South Africa—has no administrative role, as shown in table 5. In addition to the contact named above, Jill Naamane (Assistant Director); Nancy Eibeck (Analyst-in-Charge); Pamela Davidson; Raheem Hanifa; Mark Ireland; Karen Jarzynka-Hernandez; DuEwa Kamara; John Karikari; Colleen Moffatt Kimer; Patricia Moye; Jennifer Schwartz; Jena Sinkfield; and Frank Todisco made key contributions to this report.
Of the 16 programs identified through a literature review, GAO selected 6 representing a range of structures to examine in depth—programs in Australia, Austria, India, Spain, the United Kingdom, and the United States. For the six programs, GAO reviewed program financial statements, annual reports, and documentation from the Organisation for Economic Co-operation and Development and interviewed officials from terrorism insurance programs, agencies, reinsurance companies, and trade associations. GAO makes no recommendations in this report. GAO provided a draft to Treasury and the five selected programs for their review and received technical comments, which were incorporated as appropriate. The structures of the 16 terrorism insurance programs GAO reviewed generally fell into three broad categories. Programs in the first category have a multilayered structure, with insurers, reinsurers (which offer insurance for insurers), and governments providing coverage. Several programs, including those in Australia and the United Kingdom (UK), use this approach. In the second category, which includes Spain and Israel, government entities provide all the coverage for terrorism risk, and insurers and reinsurers do not take on any risk. The third category includes programs, such as those in Austria and India, in which insurers and reinsurers are entirely responsible for providing coverage and the government has no financial role. In comparison, the U.S. program involves coverage from the government and insurers, but it differs from many other programs in that it does not include the purchase of reinsurance. Among the six programs GAO reviewed in-depth, the loss-sharing arrangements among program participants vary, but program reserves and the private sector likely would be able to cover losses from most conventional terrorist events before public funds are needed, according to program officials.
However, in the event of a very large terrorist attack, governments that have a role would potentially be responsible for a substantial proportion of losses. As shown in the figure, programs in which the government provides a layer of financial support have greater total amounts of coverage compared to those with only private sector participation. Additionally, private sector coverage is larger under programs in countries with larger economies, as measured by gross domestic product. In the event of a large terrorist attack, insurers in the United States could pay more than the total coverage provided by the other five countries' national programs. Most of the selected programs collect premiums up front to cover losses and program costs; the United States, in contrast, collects reimbursement for actual losses and associated expenses after an event occurs. Note: In the UK, the government share is an unlimited line of credit to the private program that is expected to be repaid. Spain's funds include program reserves for terrorism and other catastrophic events and an unlimited government backstop. The unlimited government shares are portrayed as matching the program funds, but the actual size could differ depending on the type and size of terrorist attack. Austria's program is entirely private. |
CMS has undertaken steps to educate beneficiaries about the Part D benefit using written documents, a toll-free help line, and the Medicare Web site. To explain the Part D benefit to beneficiaries, CMS had produced more than 70 written documents as of December 2005. Medicare & You—the beneficiary handbook—is the most widely available and was sent directly to beneficiaries in October 2005. Other written documents were targeted to specific groups of beneficiaries, such as dual-eligible beneficiaries and beneficiaries with Medicare Advantage or Medigap policies. Beneficiaries can obtain answers to questions about the Part D benefit by calling the 1-800-MEDICARE help line. This help line, which is administered by CMS, was established in March 1999 to answer beneficiaries’ questions about the Medicare program. As of December 2005, about 7,500 CSRs were handling calls on the help line, which operates 24 hours a day, 7 days a week, and is run by two CMS contractors. CMS provides CSRs with detailed scripts to use in answering the questions. Call center contractors write the scripts, and CMS checks them for accuracy and completeness. In addition, CMS’s Medicare Web site provides information about various aspects of the Medicare program. The Web site contains basic information about the Part D benefit, suggests factors for beneficiaries to consider when choosing plans, and provides guidance on enrollment and plan selection. It also lists frequently asked questions and allows users to view, print, or order publications. In addition, the site contains information on cost and coverage of individual plans. There is also a tool that allows beneficiaries to enroll directly in the plan they have chosen. Although the six sample documents we reviewed informed readers of enrollment steps and factors affecting coverage, they lacked clarity in two ways.
First, about 40 percent of seniors read at or below the fifth-grade level, but the reading levels of the documents ranged from seventh grade to postcollege. As a result, these documents are challenging for many seniors. Even after adjusting the text for 26 multisyllabic words, such as Medicare, Medicare Advantage, and Social Security Administration, the estimated reading level ranged from seventh to twelfth grade, a reading level that would remain challenging for at least 40 percent of seniors. Second, on average, the six documents we reviewed did not comply with about half of the 60 commonly recognized guidelines for good communications. For example, although the documents included concise and descriptive headings, they used too much technical jargon and often did not define difficult terms such as formulary. The 11 beneficiaries and 5 advisers we tested reported frustration with the documents’ lack of clarity as they encountered difficulties in understanding and attempting to complete 18 specified tasks. For example, none of these beneficiaries and only 2 of the advisers were able to complete the task of computing their projected total out-of-pocket costs for a plan that provided Part D standard coverage. Only one of 18 specified tasks was completed by all beneficiaries and advisers. Even those who were able to complete a given task expressed confusion as they worked to comprehend the relevant text. Of the 500 calls we placed to CMS’s 1-800-MEDICARE help line regarding the Part D benefit, CSRs answered about 67 percent of the calls accurately and completely. Of the remainder, 18 percent of the calls received inaccurate responses, 8 percent of the responses were inappropriate given the question asked, and about 3 percent received incomplete responses. In addition, about 5 percent of our calls were not answered, primarily because of disconnections. The accuracy and completeness of CSR responses varied significantly across our five questions. (See fig. 1.) 
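On the reading-level estimates discussed above: the statement does not name the readability formula GAO applied. As an illustration only, the widely used Flesch-Kincaid grade-level formula estimates a grade from average sentence length and average syllables per word; the passage statistics below are hypothetical:

```python
# Illustrative sketch of a reading-grade estimate using the standard
# Flesch-Kincaid grade-level formula. GAO does not specify its formula;
# this is shown only to convey how such estimates are computed.
# The word, sentence, and syllable counts below are hypothetical.

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    return (0.39 * (words / sentences)
            + 11.8 * (syllables / words)
            - 15.59)

grade = flesch_kincaid_grade(words=100, sentences=6, syllables=160)
print(round(grade, 2))  # 9.79, i.e., roughly a tenth-grade reading level
```

Adjusting for unavoidable multisyllabic terms (as GAO did for words like "Medicare") amounts to reducing the syllable count attributed to those words before applying the formula.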
For example, while CSRs provided accurate and complete responses to calls about beneficiaries’ eligibility for financial assistance 90 percent of the time, the accuracy rate for calls concerning the drug plan that would cost the least for a beneficiary with specified prescription drug needs was 41 percent. CSRs inappropriately responded 35 percent of the time that this question could not be answered without personal identifying information—such as the beneficiary’s Medicare number or date of birth—even though the CSRs could have answered our question using CMS’s Web-based prescription drug plan finder tool. CSRs’ failure to read the correct script also contributed to inaccurate responses. The time GAO callers waited to speak with CSRs also varied, ranging from no wait time to over 55 minutes. For 75 percent of the calls—374 of the 500—the wait was less than 5 minutes. We found that the Part D benefit portion of the Medicare Web site can be difficult to use. In our evaluation of overall usability—the ease of finding needed information and performing various tasks—we found usability scores of 47 percent for seniors and 53 percent for younger adults, out of a possible 100 percent. While there is no widely accepted benchmark for usability, these scores indicate difficulties in using the site. For example, tools such as the drug plan finder were complicated to use, and forms that collect information on-line from users were difficult to correct if the user made an error. We also evaluated the usability of 137 detailed aspects of the Part D benefit portion of the site, including features of Web design and on-line tools, and found that 70 percent of these aspects could be expected to cause users confusion. For example, key functions of the prescription drug plan finder tool, such as the “continue” and “choose a drug plan” buttons, were often not visible on the page without scrolling down. 
In addition, the drug plan finder tool defaults—or is automatically reset—to generic drugs, which may complicate users’ search for drug plans covering brand-name drugs. The material in this portion of the Web site is written at the 11th grade level, which can also present challenges to some users. Finally, in our evaluation of the ability of seven participants to collectively complete 34 user tests, we found that on average, participants were only able to proceed slightly more than halfway through each test. When asked about their experiences with using the Web site, the seven participants, on average, indicated high levels of frustration and low levels of satisfaction. Within the past 6 months, millions of Medicare beneficiaries have been making important decisions about their prescription drug coverage and have needed access to information about the new Part D benefit to make appropriate choices. CMS faced a tremendous challenge in responding to this need and, within short time frames, developed a range of outreach and educational materials to inform beneficiaries and their advisers about the Part D benefit. To disseminate these materials, CMS largely added information to existing resources, including written documents, such as Medicare & You; the 1-800-MEDICARE help line; and the Medicare Web site. However, CMS has not ensured that its communications to beneficiaries and their advisers are provided in a manner that is consistently clear, complete, accurate, and usable. Although the initial enrollment period for the Part D benefit will end on May 15, 2006, CMS will continue to play a pivotal role in providing beneficiaries with information about the drug benefit in the future. The recommendations we have made would help CMS to ensure that beneficiaries and their advisers are prepared when deciding whether to enroll in the benefit, and if enrolling, which drug plan to choose. Mr. Chairman, this concludes my prepared remarks.
I would be happy to respond to any questions that you or other Members of the subcommittee may have at this time. For further information regarding this statement, please contact Leslie G. Aronovitz at (312) 220-7600. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Susan T. Anthony and Geraldine Redican-Bigott, Assistant Directors; Shaunessye D. Curry; Helen T. Desaulniers; Margaret J. Weber; and Craig H. Winslow made key contributions to this statement. To assess the clarity, completeness, and accuracy of written documents, we compiled a list of all available CMS-issued Part D benefit publications intended to inform beneficiaries and their advisers and selected a sample of 6 from the 70 CMS documents available, as of December 7, 2005, for in-depth review, as shown in Table 1. The sample documents were chosen to represent a variety of publication types, such as frequently asked questions and fact sheets available to beneficiaries about the Part D benefit. We selected documents that targeted all beneficiaries or those with unique drug coverage concerns, such as dual-eligibles and beneficiaries with Medigap plans. To determine the accuracy and completeness of information provided regarding the Part D benefit, we placed a total of 500 calls to the 1-800-MEDICARE help line. We posed one of five questions about the Part D benefit in each call, so that each question was asked 100 times. Table 2 summarizes the questions we asked and the criteria we used to evaluate the accuracy of responses. We received written comments on a draft of our report from CMS (see app. III). CMS said that it did not believe our findings presented a complete and accurate picture of its Part D communications activities. CMS discussed several concerns regarding our findings on its written documents and the 1-800-MEDICARE help line.
However, CMS did not disagree with our findings regarding the Medicare Web site or the role of SHIPs. CMS also said that it supports the goals of our recommendations and is already taking steps to implement them, such as continually enhancing and refining its Web-based tools. CMS discussed concerns regarding the completeness and accuracy of our findings in terms of activities we did not examine, as well as those we did. CMS stated that our findings were not complete because our report did not examine all of the agency’s efforts to educate Medicare beneficiaries and specifically mentioned that we did not examine the broad array of communication tools it has made available, including the development of its network of grassroots partners throughout the country. We recognize that CMS has taken advantage of many vehicles to communicate with beneficiaries and their advisers. However, we focused our work on the four specific mechanisms that we believed would have the greatest impact on beneficiaries—written materials, the 1-800-MEDICARE help line, the Medicare Web site, and the SHIPs. In addition, CMS stated that our report is based on information from January and February 2006, and that it has undertaken a number of activities since then to address the problems we identified. Although we appreciate CMS’s efforts to improve its Part D communications to beneficiaries on an ongoing basis, we believe it is unlikely that the problems we identified in our report could have been corrected yet given their nature and scope. CMS raised two concerns with our examination of a sample of written materials. First, it criticized our use of readability tests to assess the clarity of the six sample documents we reviewed. For example, CMS said that common multisyllabic words would inappropriately inflate the reading level. 
However, we found that reading levels remained high after adjusting for 26 multisyllabic words a Medicare beneficiary would encounter, such as Social Security Administration. CMS also pointed out that some experts find such assessments to be misleading. Because we recognize that there is some controversy surrounding the use of reading levels, we included two additional assessments to supplement this readability analysis—the assessment of design and organization of the sample documents based on 60 commonly recognized communications guidelines and an examination of the usability of six sample documents, involving 11 beneficiaries and 5 advisers. Second, CMS expressed concern about our examination of the usability of the six sample documents. The participating beneficiaries and advisers were called on to perform 18 specified tasks after reading the selected materials, including a section of the Medicare & You handbook. CMS suggested that the task asking beneficiaries and advisers to calculate their out-of-pocket drug costs was inappropriate because there are many other tools that can be used to more effectively compare costs. We do not disagree with CMS that there are a number of ways beneficiaries may complete this calculation; however, we nonetheless believe that it is important that beneficiaries be able to complete this task on the basis of reading Medicare & You, which, as CMS points out, is widely disseminated to beneficiaries, reaching all beneficiary households each year. In addition, CMS noted that it was not able to examine our detailed methodology regarding the clarity of written materials—including assessments performed by one of our contractors concerning readability and document design and organization. We plan to share this information with CMS. Finally, CMS took issue with one aspect of our evaluation of the 1-800-MEDICARE help line.
Specifically, CMS said the 41 percent accuracy rate associated with one of the five questions we asked was misleading, because, according to CMS, we failed to analyze 35 of the 100 responses. However, we disagree. This question addressed which drug plan would cost the least for a beneficiary with certain specified prescription drug needs. We analyzed these 35 responses and found them to be inappropriate. The CSRs would not provide us with the information we were seeking because we did not supply personal identifying information, such as the beneficiary’s Medicare number or date of birth. We considered such responses inappropriate because the CSRs could have answered this question without personal identifying information by using CMS’s Web-based prescription drug plan finder tool. Although CMS said that it has emphasized to CSRs, through training and broadcast messages, that it is permissible to provide the information we requested without requiring information that would personally identify a beneficiary, in these 35 instances, the CSR simply told us that our question could not be answered. CMS also said that the bulk of these inappropriate responses were related to our request that the CSR use only brand-name drugs. This is incorrect—none of these 35 responses were considered incorrect or inappropriate because of a request that the CSR use only brand-name drugs, as that was not part of our question. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Today's hearing focuses on Medicare Part D, the program's new outpatient prescription drug benefit.
On January 1, 2006, Medicare began providing this benefit, and beneficiaries have until May 15, 2006, to enroll without the risk of penalties. The Centers for Medicare & Medicaid Services (CMS), which administers the Part D benefit, has undertaken outreach and education efforts to inform beneficiaries and their advisers. GAO was asked to discuss how CMS can better ensure that Medicare beneficiaries are informed about the Part D benefit. This testimony is based on Medicare: CMS Communications to Beneficiaries on the Prescription Drug Benefit Could Be Improved, GAO-06-654 (May 3, 2006). Information given in the six sample documents that GAO reviewed describing the Part D benefit was largely complete and accurate, although this information lacked clarity. First, about 40 percent of seniors read at or below the fifth-grade level, but the reading levels of these documents ranged from seventh grade to postcollege. Second, on average, the six documents we reviewed did not comply with about half of 60 common guidelines for good communication. For example, the documents used too much technical jargon and often did not define difficult terms. Moreover, 16 beneficiaries and advisers that GAO tested reported frustration with the documents' lack of clarity and had difficulty completing the tasks assigned to them. Customer service representatives (CSRs) answered about two-thirds of the 500 calls GAO placed to CMS's 1-800-MEDICARE help line accurately and completely. Of the remainder, 18 percent of the calls received inaccurate responses, 8 percent of the responses were inappropriate given the question asked, and about 3 percent received incomplete responses. In addition, about 5 percent of GAO's calls were not answered, primarily because of disconnections. The accuracy and completeness of CSRs' responses varied significantly across the five questions. 
For example, while CSRs provided accurate and complete responses to calls about beneficiaries' eligibility for financial assistance 90 percent of the time, the accuracy rate for calls concerning the drug plan that would cost the least for a beneficiary with specified prescription drug needs was 41 percent. For this question, the CSRs responded inappropriately for 35 percent of the calls by explaining that they could not identify the least costly plan without the beneficiary's personal information--even though CSRs had the information needed to answer the question. The time GAO callers waited to speak with CSRs also varied, ranging from no wait time to over 55 minutes. For 75 percent of the calls--374 of the 500--the wait was less than 5 minutes. The Part D benefit portion of the Medicare Web site can be difficult to use. GAO's test of the site's overall usability--the ease of finding needed information and performing various tasks--resulted in scores of 47 percent for seniors and 53 percent for younger adults, out of a possible 100 percent. While there is no widely accepted benchmark for usability, these scores indicate that using the site can be difficult. For example, the prescription drug plan finder was complicated to use and some of its key functions, such as "continue" and "choose a drug plan," were often not visible on the page without scrolling down. |
BEP, a bureau of the Department of the Treasury, buys currency paper from a private company and prints the nation’s currency at production facilities in Washington, D.C., and Fort Worth, Texas. According to BEP data, the currency paper contract amounts to about $115 million per year. Currency paper is a highly specialized product that includes cotton and linen fibers as well as anticounterfeiting features to enhance the quality and security of the paper. Several agencies affect the production of currency paper. The Department of the Treasury oversees BEP’s production of currency, including its procurement of currency paper. The U.S. Secret Service, now within the Department of Homeland Security, is responsible for anticounterfeiting activities and works with BEP in assessing the security of BEP’s money production facilities and currency redesign. The Federal Reserve Board sets monetary policy for the nation, obtains new currency from BEP, and issues the new currency to the public through depository institutions. The Advanced Counterfeit Deterrence Steering Committee, which includes members from BEP, the Department of the Treasury, the U.S. Secret Service, and the Federal Reserve System, recommends to the Secretary of the Treasury the anticounterfeiting features to be placed in U.S. currency. If the Secretary of the Treasury accepts these recommendations, they become part of the specifications or requirements for the currency paper. The procurement of currency paper is subject to an appropriations limitation, called the Conte Amendment, enacted in December 1987. In effect, the Conte Amendment requires that distinctive paper for U.S. currency and passports be manufactured in the United States. The amendment further prohibits the purchase of currency and passport paper from a supplier owned or controlled by a foreign entity unless the Secretary of the Treasury determines that no domestic source exists.
The procurement of currency paper is also subject to another statutory limitation that prohibits the Secretary of the Treasury from entering into a contract in excess of 4 years for manufacturing distinctive currency paper. BEP changed the solicitations for the 1999 and 2003 currency paper contracts and intends to include these changes in the solicitation for the next contract, which will be awarded in 2006. Some of the changes addressed barriers we reported in 1998. These changes included the following: Switching to a 4-year contract. Previously, BEP negotiated a 1-year contract with three 1-year options, which meant that manufacturers were not assured that they would receive the contract from one year to the next. According to BEP officials, a 4-year contract creates less risk for manufacturers: the contractor is almost guaranteed to receive the contract for 4 years because the government no longer has the option to renew the contract each year. Allowing multiple awards. Previously, BEP required any bidder to bid on the entire currency paper contract. BEP divided its total currency paper requirements into several different lots and allowed companies to select the parts of the solicitation they would bid on. For example, a company could choose to bid only on the paper for the $1 and $2 bills. Thus, the contract could be awarded to two companies. Potential suppliers told BEP that, in order to begin production, they would need a long-term commitment for at least 40 percent of the contract. Allowing a 24-month mobilization period. Previously, the mobilization period—the time between the contract award date and the date for starting deliveries to BEP—was no more than 60 days. In 1998 some paper manufacturers told us that the start-up period historically allowed by BEP was not long enough for companies that are not currently manufacturing currency paper. Allowing representative rather than identical samples.
Previously, companies had to produce samples during the bidding process using the same machines they would use to produce currency paper if they received the contract. BEP required these samples, which are called identical samples, so that it could determine whether the companies were capable of manufacturing paper that met its specifications. BEP now allows for representative samples during the bidding process. Representative samples are manufactured on equipment that is similar to what the company would use if it were awarded the contract. Allowing representative samples enables companies that do not currently own the required equipment to produce paper samples on another company’s equipment and avoid purchasing costly equipment until they have been awarded the contract. Domestic paper companies, for example, could use the equipment of European paper companies to produce representative samples and then acquire the appropriate equipment if they were awarded the contract. Agreeing to consider innovative financing and acquisition arrangements. Previously, solicitations did not provide any help to companies that would have had to make a considerable financial investment to purchase the equipment needed to compete for the contract. To facilitate such an investment, the 1999 and 2003 solicitations stated that BEP would “consider innovative financing and acquisition arrangements” proposed by a potential supplier, but the solicitations did not specify what these arrangements might be. BEP officials told us that these arrangements could include having the government pay for some capital equipment if the contractor repaid the government at the end of the contract. However, two of eight paper manufacturers who said they were interested in competing for the contract told us that the lack of financial assistance continues to make it difficult for them to compete for the contract. Furnishing the security thread. 
Previously, BEP expected potential paper manufacturers to obtain the security thread used in currency paper on their own, which some paper manufacturers cited as a barrier because the sole manufacturer of the security thread is a subsidiary of the current supplier. As a result, potential manufacturers would have had to purchase the thread and make royalty payments to that company. BEP modified the solicitations for the 1999 and 2003 contracts to indicate that it would provide the security thread that is inserted into most currency paper to other successful bidders as government-furnished property rather than requiring them to obtain the thread themselves. BEP awarded the first contracts with these changes in fiscal years 1999 and 2003. According to documents in BEP’s contract files, one company in addition to the current supplier submitted a proposal for the 1999 contract, but ultimately withdrew because, according to this company, it was unwilling to continue to expend the resources required to produce fully compliant paper samples without a contract. Four other companies expressed interest in the 1999 contract, but did not submit proposals. One company said it did not submit a proposal because it determined that the estimated capital expenditures exceeded any potential profit that might be realized over the 4-year contract period. Another company that had expressed interest in the contract said it did not submit a proposal because it was unable to obtain a commitment for the large capital investment required. Additionally, the company said the contract’s provision for ordering a wide range of paper quantities made it difficult to calculate a return on investment. A third interested company did not submit a proposal because of durability requirements for the currency paper. A fourth interested company did not give a reason for not submitting a proposal. For the 2003 contract, the current supplier was the only company to submit a proposal.
Three paper companies other than the current supplier asked to receive the solicitation, but these companies took no further action. The next solicitation for the currency paper contract is expected to be issued in the fall of 2005, and the contract is scheduled to be awarded in 2006. This solicitation will include all the changes that BEP previously made, according to BEP officials. Despite the changes BEP made to the contract solicitation, paper manufacturers we surveyed in 2004 told us that significant barriers to competition remain. Specifically, the eight paper manufacturers we surveyed who said they would be interested in providing currency paper to BEP told us that the following barriers, which we reported in 1998, still exist: Security requirements for the manufacturing facility. Three of the eight manufacturers told us that implementing these security requirements—which include ensuring that all waste is accounted for, controlling access to sensitive production areas in the paper mill, and erecting physical barriers around the mill—makes it difficult for them to compete for the currency paper contract because of the high costs to upgrade their facilities. Technology required to incorporate anticounterfeiting features. Three of the eight manufacturers told us that the cost of the equipment and the technical expertise necessary to insert the security thread into currency paper make it difficult for them to compete for the currency paper contract. Requirement for U.S. ownership. Three manufacturers told us that this legislative restriction, known as the Conte Amendment, continues to be a barrier because it mandates that the company that produces U.S. currency paper be domestically owned—that is, at least 90 percent U.S.-owned, according to the Department of the Treasury. Lack of financial assistance for capital investment.
Although BEP has indicated that it will consider innovative financing proposals from a potential supplier, two of the eight manufacturers told us that the lack of financial assistance for capital investment continues to make it difficult for them to compete for the contract. According to BEP, under the FAR, it can make advance payments to manufacturers for capital investment only if the manufacturer pays the money back to BEP, with interest, during the life of the contract. Length of contract. One of the eight manufacturers, who said it plans to submit a proposal for the 2006 contract, told us that the length of the contract, which is restricted by statute to 4 years, makes it difficult to compete for the currency paper contract. This manufacturer said that, to make a profit during this contract, it would need a 5-year contract and at least 40 percent of the contract. According to BEP, these five barriers continue to exist because they either are outside of BEP’s control or are essential components of producing currency paper. For example, the restriction against foreign ownership and the length of the currency paper contract are both legislative provisions that would require congressional action to change. In addition, U.S. Secret Service officials told us that there are tremendous benefits to producing U.S. currency paper inside the United States because, according to the Secret Service, it does not have the authority to oversee the security of personnel or plant facilities in a foreign country. The Secret Service further stated that, although it may be able to make agreements allowing for such oversight, it can be difficult to take quick, decisive action in a foreign country. The Secret Service also pointed out that the logistics of moving currency paper across great distances and borders would pose additional security risks. 
However, Secret Service officials indicated that, in their view, foreign ownership would not pose a security problem as long as the paper was produced in the United States and the employees who produced the paper had undergone background checks. BEP officials also believe that providing financial assistance for capital investment is outside of their control because, as previously mentioned, under the FAR, BEP can make advance payments to manufacturers for capital investment only if the manufacturer pays the money back to BEP, with interest, during the life of the contract. Two of the barriers to competition that paper manufacturers identified are within BEP’s control, but these barriers—the security requirements for the manufacturing facility and the technology required to insert anticounterfeiting features, such as the security thread—remain because they are essential for currency paper. Officials from BEP, the Federal Reserve Board, and the Secret Service noted that currency paper is a valuable asset that must be guarded and protected from counterfeiting. Potential security features for U.S. currency are reviewed by the Advanced Counterfeit Deterrence Steering Committee, which is made up of representatives from BEP, the Department of the Treasury, the Federal Reserve System, and the U.S. Secret Service. This committee recommends which security features should be in U.S. currency, and the Secretary of the Treasury decides which features to incorporate. These security features require that manufacturers of currency paper use advanced technology to insert anticounterfeiting features into paper. Furthermore, to ensure the security of the paper and of the anticounterfeiting features, manufacturing facilities must have greater physical security than paper mills generally do.
We agree with BEP that some of the remaining barriers are outside its control; however, we found that BEP’s outreach to paper manufacturers is limited and is generally done in conjunction with its other procurements. For example, BEP does not conduct industry briefings for its potential suppliers. We found that the Departments of Defense and Homeland Security hold industry briefings as frequently as possible to provide potential contractors with information and an opportunity to comment on future solicitations and procurements. BEP's outreach to potential paper manufacturers generally consists of publishing its draft currency paper solicitation in Federal Business Opportunities and waiting for the paper manufacturers to contact it. One paper manufacturer we surveyed commented that it was unaware of the solicitation for the 2003 contract. In commenting on a draft of this report, BEP stated that, in addition to the outreach efforts we describe, it is pursuing other outreach efforts. For example, BEP stated that it attends fairs and banknote conferences where potential suppliers are consulted to determine if their company has an interest in contracting with BEP for various currency materials, primarily currency paper, inks, and counterfeit deterrent features. The FAR states that an agency’s contracting officer is responsible for evaluating the reasonableness of the offered prices to ensure that the final price is fair and reasonable. The FAR does not define “fair and reasonable,” but establishes various techniques and procedures for a contracting officer to use in evaluating prices. Furthermore, the contract pricing reference guidance available from the Department of Defense (DOD) discusses the application of these requirements.
For a price to be fair to the buyer, it must be in line with either the fair market value of the product or the total allowable cost of providing the product that would be incurred by a well-managed, responsible firm using reasonably efficient and economical methods of performance, plus a reasonable profit. To be fair to the seller, a price must be realistic in terms of the seller's ability to satisfy the terms and conditions of the contract. A reasonable price, according to the DOD guidance, is a price that a prudent and competent buyer would be willing to pay, given available data on market conditions, such as supply and demand, general economic conditions, and competition. For the currency paper contract, there is currently only one domestic buyer and one domestic seller. As a result, pricing is established through negotiation. The FAR further states that the contracting officer may use any of several analysis techniques to ensure that the final price is fair and reasonable. The techniques the officer uses depend on whether adequate price competition exists. For the 1999 contract, BEP determined that adequate price competition existed because of the expectation that at least one additional meaningful proposal would be submitted. Consequently, BEP used price analysis—a comparison of the two proposals—as a basis for determining that the 1999 contract prices, which totaled $207 million, were fair and reasonable. BEP also compared the proposed prices with an independent government cost estimate, which BEP prepared for the contract. For the 2003 contract, BEP determined that adequate price competition did not exist because, although several companies requested copies of the solicitation, only the current supplier submitted a proposal. Under such circumstances, the FAR requires agencies to use one or more of several proposal analysis techniques to ensure that the final price is fair and reasonable.
BEP took the following steps to determine its prenegotiation pricing objective: Obtaining certified cost data from the current supplier, as required by FAR 15.403-4. Requesting that the Defense Contract Audit Agency (DCAA) audit the current supplier’s price proposal. DCAA found that the current supplier’s proposal was acceptable as a basis for negotiating a fair and reasonable price. To perform its audit, DCAA used the applicable requirements contained in the FAR, the Treasury’s Acquisition Procurement Regulations, and the Cost Accounting Standards. BEP officials said they also independently reviewed and assessed the current supplier’s proposed costs and did not rely solely on DCAA’s findings. Establishing a technical analysis team to examine various aspects of the current supplier’s manufacturing process that affect price. The technical analysis concentrated on production yield factors, paper machine speeds and capacity, and labor requirements, among other things. According to BEP, these areas have a major impact on cost and are an essential part of a cost analysis. Performing a price analysis using comparison with previous contract prices for currency paper to verify that the overall price offered was fair and reasonable. In 1998, we recommended that BEP arrange for postaward audits of the current supplier’s costs and ensure that the supplier maintains acceptable cost accounting and estimating systems for future contracts. The purpose of a postaward audit is to determine if the price, including the profit, negotiated for the contract was increased by a significant amount because the contractor furnished cost or pricing data that were not accurate, complete, or current. For the 1999 contract, a postaward audit was not required because the supplier was not required to submit cost or pricing data. Following the award of the 2003 contract, BEP requested that DCAA perform a postaward audit of the current supplier. 
DCAA found that the current supplier’s certified cost or pricing data were accurate, complete, and current. DCAA also performed a postaward audit of the subcontractor that provides the security thread for U.S. currency and found that the subcontractor’s data were accurate, complete, and current. Finally, DCAA reviewed the current supplier’s estimating system and found it to be adequate to provide estimated costs that are reasonable, compliant with applicable laws and regulations, and subject to applicable financial control systems. In 1998 we reported that two BEP procurement practices contributed, or could contribute, to higher-than-necessary currency paper costs. These practices included not obtaining royalty-free data rights for the security thread used in currency paper and ordering inconsistent quantities of paper. We found that BEP continues to make royalty payments for the use of the security thread and will have to do so until December 2006. We also found that BEP continues to have difficulty in accurately estimating the amount of paper it will require, but inconsistent order sizes have not yet adversely affected the prices it pays. We previously reported that a subsidiary of the current supplier holds patents for manufacturing the security thread used to deter counterfeiting. This thread is inserted into all U.S. currency denominations greater than $2. According to a BEP official, the current supplier approached BEP with the idea for the security thread in the mid-1980s, and BEP encouraged this company to develop the thread, but BEP neither entered into a research and development contract to help fund the effort nor attempted to negotiate rights to that technology or technical data, according to another BEP official. Because the government did not obtain royalty-free data rights to, or fund the development of, the security thread, it does not have any rights to the associated technical data and must pay for any use of the thread.
The price BEP currently pays for currency paper includes the cost of royalty payments, which are generally allowable under the FAR. For the 2003 contract, these payments totaled $663,000 over 4 years. According to the current supplier, these royalty payments will end in December 2006. As a result, beginning with the next currency paper contract—which BEP expects to award at the end of 2006—BEP will not have to pay royalties for the use of the current security thread or negotiate a license to provide the thread to a second supplier. In addition, to avoid a recurrence of this situation, BEP plans to purchase, for an undetermined price, royalty-free rights to any new anticounterfeiting features that it obtains in the future from any sources. Properly written, such an agreement could enable BEP to incorporate new technology at its discretion and allow currency paper contractors to use that technology in manufacturing paper to meet the government's requirements. BEP also included a special provision in the 2003 currency paper contract stating that BEP will not incorporate any new anticounterfeiting feature into U.S. currency paper unless it has negotiated an exclusive license to the feature. We also reported in 1998 that BEP actually ordered more paper than it estimated in some years. As a result, BEP paid a higher unit cost for the paper, because the price was based on the estimated amount, and therefore the contractor’s fixed costs were spread over fewer units than BEP purchased. If BEP had accurately estimated the quantity of paper it ordered, the contractor’s fixed costs would have been spread over more units, resulting in a lower per-unit price. We recommended that BEP ensure that its paper estimates more closely reflect the expected amounts needed. BEP responded that its estimates are based on the best available estimate from the Federal Reserve Board.
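The fixed-cost arithmetic behind this point can be sketched in a few lines of Python (a minimal illustration; the cost figures below are assumptions chosen for clarity, not actual BEP or contractor data):

```python
# Illustrative sketch of why an underestimated order quantity raises the
# per-sheet price: the contractor spreads its fixed costs over the
# *estimated* quantity when setting the price. All cost figures are
# hypothetical assumptions, not actual BEP or contractor costs.

def unit_price(fixed_cost: float, variable_cost_per_sheet: float,
               estimated_sheets: int) -> float:
    """Per-sheet price when fixed costs are recovered over the estimate."""
    return fixed_cost / estimated_sheets + variable_cost_per_sheet

FIXED = 10_000_000        # assumed contractor fixed costs, in dollars
VARIABLE = 0.05           # assumed variable cost per sheet, in dollars

# Price set from a low estimate vs. a price set from the quantity
# actually ordered (sheet counts taken from the report's fiscal year
# 2003 figures: a 151-million-sheet minimum vs. about 280 million
# sheets ordered).
price_from_minimum = unit_price(FIXED, VARIABLE, 151_000_000)
price_from_actual = unit_price(FIXED, VARIABLE, 280_000_000)
```

With these assumed numbers, the per-sheet price is about $0.116 when fixed costs are spread over 151 million sheets but about $0.086 when spread over 280 million sheets, which is why basing the price on an underestimate raises the cost of every sheet actually purchased.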
Since 1999, BEP’s currency paper orders have remained inconsistent, but this inconsistency has not yet adversely affected BEP’s prices. Specifically, for 4 of the last 6 years, BEP’s orders were at or below the estimates the contractor used in setting its price, and therefore the orders should not have resulted in a higher price for currency paper. (See fig. 1.) However, in fiscal years 2003 and 2004, BEP’s actual orders were considerably higher than the minimum quantities estimated in the contract. In fiscal year 2003, the minimum quantity was 151 million sheets, and BEP ordered almost 280 million sheets; and in fiscal year 2004, the minimum quantity was 203 million sheets, and BEP ordered 296 million sheets. Although BEP’s order amounts exceeded the minimum quantities, the price BEP paid for currency paper was not adversely affected because of the pricing approach used by the contractor in the current contract. In its August 1996 currency paper report, BEP concluded that competition was not immediately feasible because the current supplier was the only domestic source that could supply currency paper that met BEP’s requirements. In addition, BEP estimated that it would pay $21 million to $37 million more per year for currency paper if it purchased paper from more than one supplier. These increased costs would result from, among other things, high capital equipment costs for a new supplier, according to BEP. BEP also made several recommendations, including that it

- continue to improve its relationship with the current supplier by working to resolve problems before they arise;
- continue to try to identify alternative sources for currency paper, and if a viable source of currency paper is identified, analyze the costs and economic feasibility of having two sources; and
- review the possible catastrophic occurrences that could interrupt currency paper supplies, and if necessary, increase the inventory of currency paper to mitigate the effects of such an occurrence.
Analyzing the advantages and disadvantages of obtaining a second supplier would help BEP determine if a second supplier would be cost effective over the long term, weigh the benefits of obtaining a second supplier against the potential security and quality concerns associated with a second supplier, and ensure that BEP can maintain an adequate supply of currency paper. Obtaining a second supplier could have advantages. Economic literature shows that a key advantage of obtaining a second supplier is that it can generate competition, which helps to ensure that the buyer receives the best price possible. In general, with more competition, each individual firm has less control over the final price in the market. In contrast, a single supplier has the potential to restrict output and set market prices above competitive levels. In addition, some economic studies have found that the entry of additional firms into a market lowers prices. An additional advantage of obtaining a second supplier is that new entrants can stimulate innovation in certain markets, whereas some researchers have found that a single supplier may not be particularly innovative. Another key advantage of obtaining a second supplier could be greater assurance of a steady supply of currency paper. With more than one supplier and more than one production site, the buyer would have greater assurance of a steady supply of goods even if one site were disrupted by, for example, a strike, natural disaster, bankruptcy, or terrorist attack. This would be an important advantage for BEP, because currency paper is essential to U.S. and world commerce, and an adequate supply must be assured. Some actions have already been taken to avoid these potential problems. To mitigate a disruption to the currency paper supply, the current supplier says it could produce currency paper at two separate locations. In addition, BEP keeps about a 3-month supply of currency paper in reserve. 
Obtaining a second supplier could also have disadvantages. First, even though it could create competition, it might not lower prices initially because each new supplier would have expensive start-up costs (such as the capital costs of specialized paper-making equipment) and would therefore need to charge a high price for currency paper. Second, the risk of changes in product quality and design would increase with more than one supplier in more than one location. For instance, according to a physicist who specializes in paper production, two companies, given the same specifications, could produce paper of consistent strength, but would have much more difficulty adjusting for the texture of the paper, and slight differences could exist within the same specifications. Even slight changes can adversely affect a buyer such as BEP, which requires adherence to very specific technical standards. Federal Reserve Board officials told us that they are concerned that minor differences in the quality of currency paper could diminish the reputation of U.S. currency. Secret Service officials, who are responsible for protecting U.S. currency from counterfeiting, said they would need to be assured that a second supplier had proved that it could produce paper of consistent quality over a period of time because even slight variations between the papers produced by the two manufacturers could hamper their anticounterfeiting efforts and lower confidence in U.S. currency. Finally, increasing the number of suppliers, production locations, or both would increase the potential for security breaches because more people would know about the classified anticounterfeiting features incorporated in currency paper, and more sites could be vulnerable to intrusion. Federal Reserve Board officials, who are responsible for issuing U.S. 
currency, maintained that awarding the contract to several different suppliers could compromise the secrecy of the paper’s anticounterfeiting features because more people would have access to and could potentially disclose information about them. Finding itself relying on a single supplier in the early 1990s for the clad metal it uses to make coins, the U.S. Mint weighed the advantages and disadvantages of obtaining a second supplier and decided that the advantages outweighed the disadvantages. To obtain a second supplier, the Mint worked closely with a new company and allowed it to begin producing a small amount of material. Initially, the Mint’s second supplier had some difficulty producing a product of consistent quality, and the unit costs of the material were higher than the original supplier’s unit costs because the second supplier was producing smaller quantities. But as the quality of the material improved, the company began to increase its production for the Mint, and it now produces 55 percent of the metal that the Mint uses to make coins. According to Mint officials, the use of a second supplier enabled the Mint to maintain a steady supply of material when the demand for coins spiked in 1999 and 2000 (because coins were collected for the new millennium) and when each supplier experienced labor strikes. Mint officials also told us that they believe that obtaining a second supplier for clad material initially increased the Mint’s costs, but they were not able to quantify the amount of the increase. Nonetheless, according to Mint officials, the price for clad metal has decreased since the Mint began using a second supplier. In commenting on a draft of this report, the Federal Reserve Board noted that, regardless of price issues, the issues of security and quality are not the same for clad metal and currency paper. 
Obtaining effective competition for the currency paper contract continues to be a challenge for BEP, despite the changes it has made and plans to continue making to its contract solicitations. Barriers to competition remain, and the current supplier continues to be the sole supplier of currency paper. We agree with BEP that some of the remaining barriers are outside its control or are essential for security purposes, and we recognize that the current supplier has generally provided BEP with a steady, timely supply of paper that has met its requirements for the past 125 years. However, we believe the uniqueness of the currency paper procurement and the disadvantages of having a single supplier are sufficient to warrant a regular effort on BEP’s part to reach out to paper manufacturers before issuing solicitations to help BEP determine what additional steps should be taken to encourage competition for the currency paper contract. Although BEP concluded in its August 1996 currency paper study that competition was not immediately feasible because the current supplier was the only domestic source of currency paper that could meet its requirements, BEP has not weighed the advantages and disadvantages of obtaining a second supplier—including the impact on the cost, security, quality, and adequacy of the currency paper supply—since 1996. Consequently, while BEP can demonstrate that it is receiving a fair and reasonable price for currency paper, it is unclear if that price is higher or lower than the price BEP would pay if there were a second supplier. But cost is not the only factor in deciding whether or not to use a second supplier. The security and integrity of the paper, and of U.S. currency, are also important. A second supplier must be able to demonstrate that it can produce paper that contains the same security features and technical specifications as the current paper. 
Slight changes to the quality and makeup of currency paper have the potential to hamper anticounterfeiting efforts and could result in an overall loss of confidence in U.S. currency. Analyzing the advantages and disadvantages of obtaining a second supplier would help BEP assess whether a second supplier of currency paper is needed to ensure an adequate supply of quality currency paper at a fair and reasonable price. To obtain the views of paper manufacturers on barriers to competition and to determine if there is a need for a second supplier of currency paper, we are recommending that the Secretary of the Treasury direct the Director of BEP to take the following two actions:

- Before issuing solicitations for currency paper contracts in the future, increase outreach activities with paper manufacturers to allow them to provide their views on the barriers to competition, identify the steps BEP should take to address these barriers, and comment on the solicitations.
- Determine if there is a need to obtain a second supplier for currency paper by preparing an analysis of the advantages and disadvantages of obtaining a second supplier of currency paper, including the impact on the cost, security, quality, and adequacy of the currency paper supply.

If the analysis determines that there is a need to obtain a second supplier, the Secretary should then determine what steps are necessary to obtain a second supplier for currency paper. We provided the BEP, the Mint, and the Federal Reserve Board with drafts of this report for their review and comment. These agencies generally agreed with our findings and provided technical comments, which we incorporated as appropriate. In written comments, BEP commented that our draft report does not recognize all of its outreach efforts to paper manufacturers and that the royalty payments associated with purchasing currency paper are an allowable expense under FAR. We incorporated this additional information in our report as appropriate.
BEP also agreed with our recommendations and described its plans to implement them. BEP’s comments are provided in appendix III. We are sending copies of this report to the cognizant congressional committees; the Chairman of the Board of Governors of the Federal Reserve System; the Secretary of the Treasury; the Directors of BEP and the Mint; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at siggerudk@gao.gov or Tammy Conquest at conquestt@gao.gov. Alternatively, I can be reached at (202) 512-2834. Major contributors to this report are listed in appendix IV.

As appropriate, the Bureau of Engraving and Printing (BEP) has the Defense Contract Audit Agency (DCAA) perform audits.

Recommendation: Arrange for postaward audits of the current supplier’s costs.
Status: When required, BEP has DCAA conduct postaward audits of the current contractor’s costs.

Recommendation: Include data and analyses in the currency paper procurement record that demonstrate the benefits the government is to receive when it approves profit levels that are aimed at recognizing or providing an incentive for capital investments.
Status: When required, BEP plans to comply with the FAR.

Recommendation: To the extent possible, make more extensive use of price analysis to determine the fairness and reasonableness of prices, including the collection of data from foreign countries on their currency prices and data on similar supplies purchased by other agencies, such as paper for passports and money orders.
Status: BEP stated that a comparison of the price of U.S. currency paper with the price of foreign currency paper or money order and passport paper would not be a valid comparison because of technical differences.
Recommendation: Ensure that all future currency paper procurements reflect the expected amounts of paper needed and that orders against contracts are for consistent amounts.
Status: BEP bases the amount of paper needed on the best available estimate provided by the Federal Reserve System.

Recommendation: Ensure that the government obtains royalty-free data rights to any future security measures incorporated into currency paper.
Status: BEP plans to obtain royalty-free data rights to all future security measures that it incorporates into currency paper.

To determine the steps the Bureau of Engraving and Printing (BEP) took to encourage competition for the 1999 and 2003 currency paper contracts, we interviewed BEP officials and reviewed the changes BEP made to the contract solicitations. To determine the results of these efforts, we reviewed the solicitations for the 1999 and 2003 contracts and sent a questionnaire to 15 domestic and foreign manufacturers of cotton-based security paper to determine the factors that have made it difficult for them to compete for the currency paper contract. We used a questionnaire that was similar to the questionnaire used for our 1998 report, allowing us to compare responses for the two time periods. Our survey universe consisted of manufacturers we had surveyed for our 1998 report, manufacturers identified by the American Forest and Paper Association, manufacturers identified by BEP as having expressed interest in the currency paper contract, and the current supplier. We received responses from 14 of the 15 manufacturers and made several attempts to obtain a response from the one manufacturer who did not respond to our survey. We also performed structured telephone interviews with all 14 manufacturers to clarify their survey responses. Our primary variable for analysis was interest in providing currency paper to BEP.
We considered the eight manufacturers who responded that they were “very interested” or “somewhat interested” in providing currency paper to BEP as our most important group for the purposes of this study because they have a stated interest in supplying paper to BEP. We reviewed economics literature and interviewed several academic experts to determine the relevant barriers to competition. Finally, we analyzed the Conte Amendment, the statute limiting the procurement of distinctive currency paper to a 4-year contract, and other applicable procurement laws and regulations to identify requirements affecting the procurement of currency paper. To determine the steps BEP took to determine that the prices it paid for currency paper under the 1999 and 2003 contracts were fair and reasonable, we reviewed documents in BEP’s contract files for the 1999 and 2003 contracts. We reviewed the process BEP must follow to determine fair and reasonable pricing. We reviewed the prenegotiation memorandums and negotiation summaries from the contract files and interviewed BEP procurement officials to determine what cost and price analysis activities BEP undertook to establish a fair and reasonable price. We then compared these actions with the requirements for cost and price analysis techniques under FAR part 15.404-1. We also obtained and reviewed audits of the current supplier that BEP requested from the Defense Contract Audit Agency and that have been issued since 1998. To determine the extent to which BEP has analyzed the advantages and disadvantages of obtaining a second supplier for currency paper, we reviewed BEP’s most recent currency paper study, which was issued in 1996. We also interviewed several industry analysts and academic experts, and reviewed relevant economics literature. Although economic research on competition in government contracting is abundant, it has never been applied to the currency paper market. 
Therefore, we reviewed economic studies of other markets to determine the advantages and disadvantages of obtaining a second supplier. We also interviewed officials from BEP, the U.S. Secret Service, and the Federal Reserve System to obtain their views on the implications of obtaining multiple suppliers for currency paper. To gain additional perspective on the potential effects of obtaining a second supplier for currency paper, we interviewed former and current officials from the U.S. Mint about their experiences with a second supplier. The Mint was not able to provide us with financial data to demonstrate whether the price it paid for clad material changed after it began using a second supplier. We performed our work in Washington, D.C., from August 2004 through April 2005 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Robert Ackley, Tim DiNapoli, Elizabeth Eisenstadt, Barbara El Osta, Heather Halliwell, Susan Michal-Smith, Terry Richardson, and John W. Shumann made key contributions to this report.

For over 125 years, the Bureau of Engraving and Printing (BEP), within the Department of the Treasury, has relied on a single contractor to supply the paper for U.S. currency. Such a long-term contracting relationship could contribute to higher costs and other risks. Another federal agency that relied on a single contractor, the U.S. Mint, decided to obtain a second supplier for coin metal. In solicitations for currency paper contracts in 1999 and 2003, BEP took steps to address barriers to competition that GAO had identified in 1998 through a survey of paper manufacturers. This report updates GAO's 1998 report using data from a second survey.
It addresses (1) the changes BEP made to encourage competition and the results of its efforts, (2) the steps BEP took to ensure that it paid fair and reasonable prices, and (3) the analysis BEP has done of the advantages and disadvantages of obtaining a second supplier. To encourage competition for the 1999 and 2003 contracts, BEP modified its solicitations to, among other things, indicate that it would provide bidders with the security thread that is inserted into most currency paper and extend the time for initial deliveries. For the 1999 contract, one additional supplier submitted an initial proposal but later withdrew it, and for the 2003 contract, only the current supplier submitted a proposal. This company remains the sole supplier of U.S. currency paper. According to paper manufacturers, several barriers to competition remain, including the high capital costs of and technological requirements for producing currency paper. BEP said it has not addressed these barriers because the requirements are either essential to preserve the security of currency paper or they are outside BEP's control (e.g., anticounterfeiting features are recommended by a federal committee). While some of the remaining barriers are outside BEP's control, BEP's outreach to paper manufacturers has been limited. For example, BEP does not meet regularly with them, as the Departments of Defense and Homeland Security meet with potential suppliers of their procurements, to identify additional steps that could be taken to encourage competition. To the extent that BEP has reached out to paper manufacturers, it has generally done so in conjunction with other BEP procurements. For the contracts awarded in 1999 and 2003, BEP took several steps, consistent with the Federal Acquisition Regulation's requirements, to determine that the prices it paid under these contracts were fair and reasonable. 
For the 1999 contract, it used price analysis (a comparison of two proposals) to determine that the two proposals it initially received were fair and reasonable. This analysis was sufficient because BEP had determined that adequate price competition existed. For the 2003 contract, BEP performed several cost analysis activities to ensure that the final agreed-to price was fair and reasonable, since the current supplier was the only company that submitted a proposal. For example, BEP obtained certified cost and pricing data from the current supplier, requested an audit review of the current supplier's price proposal, and established a technical analysis team to examine steps in the current supplier's manufacturing process that affect price. BEP also arranged for postaward audits of the current supplier. BEP has not analyzed the advantages and disadvantages of obtaining a second supplier of currency paper since 1996. At that time, it concluded that the costs would outweigh the benefits, but it did not analyze the long-term effects. As a result, it does not know how a second supplier would affect the costs, quality, security, and supply of currency paper over time. Analyzing the advantages and disadvantages of obtaining a second supplier would help BEP determine the need for one. |
State child welfare systems consist of a complicated network of policies and programs designed to protect children. With growing caseloads over the past decade, the systems’ ability to keep pace with the needs of troubled children and their families has been greatly taxed. From fiscal year 1984 through 1995, the foster care population grew from an estimated 276,000 children to 494,000. In 1995, about 261,000 of these children were supported by federal funds through title IV-E of the Social Security Act. The federal government plays an important role in financing foster care and establishes minimum procedural requirements for the placement process. As required by the Adoption Assistance and Child Welfare Act of 1980 (P.L. 96-272), states must make reasonable efforts to prevent or eliminate the need for removing children from their homes. Once a child is removed from the home, the state must also provide services to the family and the child with the goal of reuniting them. If reunification is not possible, the state is to find permanent placement for the child outside the family home. To guide the permanency planning process by which a state is to find permanent placements for foster children, the act also requires that the state develop a case plan for each child. Each case plan must be reviewed at least every 6 months and, within 18 months, a permanency hearing must be held to determine the future status of the child. If a final decision is not made at this hearing, federal law provides that additional hearings must be held at least every 12 months. Options for the child’s future status can include, but are not limited to, reuniting the child with his or her family, placing the child for adoption, continuing temporary foster care, or continuing foster care permanently or long term because of the child’s special needs or circumstances. Increasingly, children are being placed with their own relatives, who may then receive foster care subsidies.
The prolonged stays of children in foster care have prompted 26 states to enact laws or policies that shorten the time between a child’s entering foster care and the first permanency hearing to less than the federally allowed 18 months. Twenty-three of these states have enacted such laws, while three others have done so by administrative policy. A majority of these states require the hearing within 12 months. In two states, the shorter time frame applies only to younger children: Colorado requires that the permanency hearing be held within 6 months for children under age 6, and Washington requires the hearing to be held within 12 months for children aged 10 or younger. The remaining 24 states and the District of Columbia have statutes consistent with the federal requirement of 18 months. (For a description of the 26 state statutes, policies, and time requirements, see app. I.) The state laws, like federal law, do not require that a final decision be made at the first hearing. Ohio and Minnesota, however, do require that a permanency decision be made after a limited extension period. Ohio, for example, requires a permanency hearing to be held within 12 months, with a maximum of two 6-month extensions. At the end of that time, a permanent placement decision must be made. According to officials in Ohio’s Office of Child Care and Family Services, the requirement for earlier permanency hearings was intended to expedite the permanent placement process and reduce the time children spend in foster care. State officials also believed, however, that this requirement may have unintentionally increased the number of children placed in long-term foster care because other placement options could not be developed. State data, in part, confirmed this observation.
While long-term foster care placements for children supported with state funds dropped from 1,301 in 1990 to 779 in 1995, long-term placements for children from low-income families who are supported in part with federal funds rose from 1,657 to 2,057 in the same period. Although the states we reviewed did not systematically evaluate the impacts of their initiatives, they have implemented a variety of operational and procedural changes to expedite and improve the permanency process. The states reported that these actions have improved the lives of some children by (1) reuniting them with their families more quickly; (2) expediting the termination of parental rights when reunification is not feasible, making it possible for child welfare agencies to begin looking for an adoptive home sooner; or (3) reducing the number of different foster care placements in which children live. Some states implemented low-cost, creative methods for financing and providing services that address specific barriers to reuniting families. Arizona’s Housing Assistance Program focused on families in which the major barrier to reunification was inadequate housing for the family. According to reports and data from the Arizona Department of Economic Security, between 1991 and 1995, as a result of the program, 939 children were reunited with their families, representing almost 12 percent of the children reunified during this period. State officials estimated that this program saved the state over $1 million in foster care-related costs between 1991 and 1995. Arizona and Kentucky placed special emphasis on expediting the process by which parental rights could be terminated. Arizona’s Severance Project focused on cases in which termination of parental rights was likely or reunification services were not warranted and for which a backlog of cases had developed. In April 1986, the state enacted a law providing funds for hiring severance specialists and legal staff to work on termination cases.
The following year, in 1987, the state implemented the Arizona State Adoption Project, which focused on identifying additional adoptive homes, including recruiting adoptive parents for specific children and contracting for adoptive home recruitment services. State officials reported that the Adoption Project resulted in a 54-percent increase in the number of new homes added to the state registry in late 1987 and 1988. In addition, they noted that the Severance Project contributed to a more than 32-percent reduction in the average length of stay between entering care and the filing of the termination petition for fiscal years 1991 through 1995. As the number of children available for adoption rose, the state was forced to focus its efforts on identifying potential adoptive homes and shifted its emphasis to strategies to better inform the public about the availability of adoptive children. Some states are experimenting with concurrent planning. Under this approach, child welfare officials work toward reuniting a family while developing an alternate plan for permanently removing the child if reunification efforts fail. By working on the two plans simultaneously, caseworkers reduce the time needed to prepare the paperwork for terminating parental rights if reunification efforts fail. Under a concurrent planning approach, caseworkers emphasize to the parents that if they do not adhere to the requirements set forth in their case plan, parental rights can be terminated. Some state officials attributed obtaining quicker permanent placements in part to parents making more concerted efforts to make the changes needed to have their children returned home. Colorado began using concurrent planning formally in 1994 for children under age 6, in conjunction with the implementation of the law requiring that for these children, the permanency hearing be held within 6 months of the child’s entering care. The program has been implemented in five counties.
Preliminary data from an ongoing evaluation in Jefferson County show that 65 of 78 children, or 87 percent, achieved permanent placement within 1 year of initial placement, compared with 50 of 71 children, or 70 percent, in a control group. State Department of Human Services officials told us that concurrent planning was a key factor contributing to children's being placed more quickly in permanent homes. All decisions regarding both the temporary and final placement of foster children come through states’ court systems. Therefore, Hamilton County, Ohio, juvenile court officials focused attention on the court’s involvement in achieving permanency more quickly by developing new procedures to expedite case processing. To do so, in 1985, they revised court procedures by (1) designating lawyers specially trained in foster care issues as magistrates to hear cases; (2) assigning one magistrate to each case for the life of that case to achieve continuity; and (3) agreeing at the end of every hearing—with all participants present—on the date for the next hearing. According to court officials, the county saved thousands of dollars because it could operate three magistrates’ courtrooms for about the cost of one judge’s courtroom. Also, a report on court activities indicated that because of these changes, between 1986 and 1990, (1) the number of children placed in four or more different foster care placements decreased by 11 percent and (2) the percentage of children leaving temporary and long-term foster care in 2 years or less increased from 37 to 75 percent. Our efforts to assess the overall impact of these initiatives were hampered by the absence of evaluation data. We found that the states generally did not conduct systematic evaluations of their programs, and outcome information was often limited to state reports and the observations of state officials.
Although many of these efforts reported improvements, for example, in speeding the termination of parental rights once this goal was established, the lack of comparison groups or quality data from the period before the initiative made it difficult to reach definitive conclusions about the initiatives’ effectiveness.

States increased their chances of successfully developing and implementing initiatives when certain key factors were a part of the process. When contemplating changes, state officials had to take into consideration the intricacies of the foster care process, the inherent difficulty that caseworkers and court officials face when deciding whether a child should be returned home, and the need, in some cases, for caseworkers and judges to recognize that termination of parental rights should be pursued.

When Kentucky officials, for example, initiated a project to shorten the process for terminating parental rights, they faced the challenge of changing the way caseworkers and members of the legal system had viewed termination of parental rights. Many caseworkers saw the termination of parental rights as a failure on their part because they were not able to reunify the family. As a result, they seldom pursued termination and instead kept the children in foster care. In addition, judges and lawyers were often not sufficiently informed of the negative effects on children who do not have permanent homes. Thus, as part of this project, newsletters and training were provided about the effects on children of delaying termination of parental rights.

Officials in the states we reviewed recognized that improving the permanency planning process requires concerted time and effort, coordination, and resources. These officials identified several critical, often interrelated factors required to meet these challenges.
These included (1) long-term involvement of officials in leadership positions; (2) involvement of key stakeholders in developing consensus and obtaining buy-in about the problem and its solution; and (3) the availability of resources to plan, implement, and sustain the project.

With the expected rise in foster care caseloads through the start of the next century further straining state and federal child welfare budgets, increasing pressure will be placed on states to develop initiatives to move children into permanent homes more quickly. Many of these initiatives will need to address the difficult issues of deciding under what circumstances to pursue reunification and what time period is appropriate before seeking the termination of parental rights.

We found promising initiatives for changing parts of the permanency process so that children can be moved from foster care into permanent placements more quickly. Developing and successfully implementing these innovative approaches takes time and often challenges long-standing beliefs. To succeed, these initiatives must look to local leadership involvement, consensus building, and sustained resources. As these initiatives become a part of the complex child welfare system, however, they can also create unintended consequences. Identifying appropriate cases for the expeditious termination of parental rights and processing them faster—thereby making more children available for possible adoption—can create difficulties if efforts to develop more adoptive homes have not received equal emphasis.

We also observed that a critical feature of these initiatives was often absent: Many of them lacked evaluations designed to assess the impact of the effort. The availability of evaluation information from these initiatives would not only point to the relative success or failure of an effort but also help identify unintended outcomes.
The lack of program and evaluation data will continue to hinder the ability of program officials and policymakers to fully understand the overall impact of these initiatives.

Mr. Chairman, this concludes my formal remarks. I will be happy to answer any questions you or other members of the Subcommittee may have.

For more information on this testimony, please call Gale C. Harris, Assistant Director, at (202) 512-7235. Other major contributors are David D. Bellis, Social Science Analyst; Shellee S. Soliday and Octavia V. Parks, Senior Evaluators; Julian Klazkin, Senior Attorney; and Rathi Bose, Evaluator.

State statutes governing permanency hearings:

Ariz. Rev. Stat. Ann., Section 8-515.C (West Supp. 1996)
Colo. Rev. Stat., Section 19-3-702(1) (Supp. 1996)
Conn. Gen. Stat. Ann., Section 46b-129(d),(e) (West 1995)
Ga. Code Ann., Section 15-11-419(j),(k) (1996)
705 Ill. Comp. Stat. Ann., 405/2-22(5) (West Supp. 1996)
Ind. Code Ann., Section 31-6-4-19(c) (Michie Supp. 1996)
Iowa Code Ann., Section 232.104 (West 1994)
Kan. Stat. Ann., Section 38-1565(b),(c) (1995)
La. Ch. Code Ann., Arts. 702, 710 (West 1995)
Mich. Stat. Ann., Section 27.3178(598.19a) (Law. Co-op. Supp. 1996)
Minn. Stat. Ann., Section 260.191 Subd. 3b (West Supp. 1997)
Miss. Code Ann., Section 43-21-613(3) (1993)
New Hampshire Court Rules Annotated, Abuse and Neglect, Guideline 39 (Permanency Planning Review)
N.Y. Jud. Law, Section 1055(b) (McKinney Supp. 1997)
Ohio Rev. Code Ann., Sections 2151.353(F), 2151.415(A) (Anderson 1994)
42 Pa. Cons. Stat. Ann., Section 6351(e)-(g) (West Supp. 1996)
R.I. Gen. Laws, Section 40-11-12.1 (1990)
S.C. Code Ann., Section 20-7-766 (Law. Co-op. Supp. 1996)
Utah Code Ann., Section 78-3a-312 (1996)
Va. Code Ann., Section 16.1-282 (Michie 1996)
Wash. Rev. Code Ann., Section 13.34.145(3),(4) (West Supp. 1997)
W. Va. Code, Sections 49-6-5, 49-6-8 (1996)
Wis. Stat. Ann., Sections 48.355(4), 48.38, 48.365(5) (West 1987)
Wyo. Stat. Ann., Section 14-6-229(k) (Michie Supp. 1996)

Michigan’s time frame to hold the permanency hearing was calculated by adding the days needed to conduct the preliminary hearing, trial, dispositional hearing, and permanency hearing. Virginia’s time frame was calculated by adding the number of months required to file the petition for the permanency hearing plus the number of days within which the court is required to schedule the hearing.
Grants and cooperative agreements are assistance instruments used to transfer money, property, or services to accomplish a public purpose. The difference between the two instruments relates to the amount of involvement between the agency and the recipient during performance: when substantial involvement is not anticipated, State uses a grant; otherwise, it uses a cooperative agreement. For the purposes of this report, the term “grants” refers to both grants and cooperative agreements. State’s grants vary greatly by size and recipient—from grants of less than $100 to help cover an individual’s travel expenses, to multimillion-dollar grants to, for example, international nongovernmental organizations for democracy-building programs.

A/OPE sets department-wide policies related to grants management, and individual bureaus may also develop their own specific policies to supplement those from A/OPE. Managing a grant involves a variety of State officials, often from multiple bureaus and posts. The principal grants officials include the following:

The grants officer (GO), who is ultimately responsible for overseeing the grant.

The grants officer representative (GOR), who often has program implementation expertise and assists the GO in overseeing a grant.

The program officer, who, if the GO is from a different bureau, may provide programmatic expertise, primarily during the preaward phase. For example, a program officer may design the grant announcement and assist in selecting recipients. In some cases, the program officer may be designated as the GOR once the grant is awarded to a recipient.

The budget officer, who is responsible for ensuring that the appropriated funds are drawn down correctly.

Some GOs and GORs also reported using a grants management specialist or other staff, such as interns, to help them manage certain aspects of their portfolios.
The various grants officials involved in the management of a grant may be from different bureaus as well as different locations. Twenty-seven bureaus and offices within State, including the U.S. Mission to the United Nations, have grant-making authority or grant oversight responsibilities. Posts, including embassies and consulates overseas, may also make grants. Ten bureaus and posts accounted for the majority of federal assistance obligations that State made in fiscal year 2012 (see table 1). For those GOs located in Washington, D.C., the GOR is usually located in the principal place of performance, which may be at State headquarters in Washington, D.C.—for cultural exchange programs, for example—or overseas. Eighteen of the 27 bureaus with grant-making authority have their own GOs. Those that do not have GOs or whose GOs do not have a high enough grant-making authority rely on the Office of Acquisitions Management to fulfill the GO role. This office provides a full range of grant management services, including planning, negotiations, cost and price analysis, and administration. In those instances, the bureau requesting the grant then generally provides the GOR, program officer, or both to supply program-specific expertise. As of May 2014, there were 571 GOs worldwide, with 503 of them based overseas. Most GOs at posts are Foreign Service Officers with multiple other duties. In addition, Foreign Service Officers usually rotate to another post within 1-3 years, per State’s normal operating procedures. This considerable turnover rate means that a single grant may have multiple GOs over its life cycle. State’s grants generally follow a life cycle that consists of four phases— preaward, award, postaward, and closeout. In the preaward phase, grants officials develop the program idea, evaluate proposals, and select a recipient. The GO then negotiates the costs of the grant with the recipient and drafts the award document during the award phase. 
In the postaward phase, grants officials monitor the recipient’s progress and disburse payments as appropriate. Finally, in the closeout phase, grants officials assess final programmatic and financial reports and determine any final payments or reimbursements that are necessary.

As we have noted in prior reports, effective oversight and internal control are important when awarding and managing federal grants to provide reasonable assurance to federal managers and taxpayers that grants are awarded properly, recipients are eligible, and federal grant funds are used as intended and in accordance with applicable laws and regulations. The Standards for Internal Control in the Federal Government (the federal standards) sets forth the standards that provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. The five standards are as follows:

Control environment: Management and employees should establish and maintain an environment that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards.

Risk assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Risk assessment is the identification and analysis of relevant risks associated with achieving the agency’s objectives and forming a basis for determining how risks should be managed.

Control activities: Internal control activities help ensure that management’s directives are carried out. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. They help ensure that actions are taken to address risks, and they are integral to the stewardship of government resources and achieving effective results.

Information and communications: Information should be recorded and communicated to management in a form and within a time frame that enables management to carry out its internal control and other responsibilities.

Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that any issues are promptly resolved.

State has established a core set of policies and guidance incorporating federal regulations for administering and overseeing grants. A/OPE has established policies and training to further assist grants officials as they implement the department’s policies and to reduce wasteful spending in government. A/OPE has taken steps to improve its policies, revising many of them since their issuance. State has provided these policies, as well as training and other support, to staff to encourage effective grant management throughout the life cycle of a grant.

State’s grant management policies incorporate requirements established in federal regulations and guidance, which are codified in the Code of Federal Regulations (CFR). These regulations are based on Office of Management and Budget (OMB) circulars on grants and cooperative agreements with nongovernmental organizations and institutions of higher education as well as with state and local governments. OMB circulars provide guidance to grants officials for implementing rules regarding allowable costs, program purposes, and financial management procedures. In addition, the authorities for specific assistance programs may provide requirements for associated grants. OMB guidance and regulations contained in the CFR inform State’s policies for grants management. State collects and articulates the department’s policies in the Foreign Affairs Manual and its associated handbooks. This manual assigns A/OPE the authority to prescribe acquisition and assistance policies, regulations, and procedures for State.
State officials told us that A/OPE also works closely with the Office of the Deputy Chief Financial Officer to develop policies related to the financial management of grants. Since 1992, A/OPE has issued 59 grants policy directives to provide additional guidance explaining to State’s staff how they should conduct grants management in accordance with federal regulations. A/OPE has issued or revised more than half of these policies since 2008, creating at least two of them in response to concerns from its Inspector General and us about State’s internal controls for grants management.

State’s policies and guidance help establish a control environment framework for grants management. Two of the directives related to risk assessment, for example, directly cite the federal standards, which call on federal agencies to identify risks as part of a positive internal control environment. State’s policies also provide guidance for implementing key internal control activities throughout the life cycle of a grant, such as approval of the monitoring plan or review of quarterly or annual reports (see fig. 1). State’s control environment also includes a variety of additional guidance, including mandatory training, “best practices” dissemination, and online resources.

Training: State offers its staff several grants management courses, covering such topics as monitoring and evaluation, cost principles, and ethics. GOs must take at least 24 hours of grants management training, and GORs must take both an introduction and a monitoring course to obtain certification. Both must update their training with at least 16 hours of courses every 3 years. Some courses are also available online, and A/OPE has worked with several bureaus to hold training focused on their specific needs. In addition, A/OPE occasionally offers regional or post training in the field, depending on resources, and has recently begun holding webinars to train and answer questions from grants officials overseas.

Best practices dissemination: According to State officials, A/OPE began holding quarterly meetings in 2004 for grants officials throughout the department to both raise issues and share best practices. More recently, A/OPE has begun to offer a 2-day course twice a year in lieu of the quarterly meetings. The course offers an update on trends and regulation changes as well as a refresher on grants administration and policy.

Online resources: Finally, State has a number of Intranet resources available for grants officials. Beyond distance learning courses, A/OPE also has sample templates for a variety of grant documentation, including a preaward survey, financial management survey, risk assessment tool, and several types of monitoring reports. An A/OPE official told us A/OPE is currently updating the Federal Assistance Policy Manual, which will provide additional guidance for the entire grant life cycle.

Furthermore, bureaus and posts are allowed to design guidance appropriate to the varying circumstances surrounding their grants. For example, three of the bureaus and two of the posts in our sample have developed their own risk assessment checklists. State guidance also directs grants officials to document key internal control activities throughout the life cycle of a grant, including the use of funds, the recipient’s progress, and the grants officials’ assessment of that progress. State has several policies either dedicated to documentation or requiring documentation. For example, one policy describes the roles and responsibilities for both the GOs and the GORs, listing what information they must document as well as where and at what phase in the grant life cycle they should document it. For certain internal control activities, State has created additional policies with detailed documentation and reporting requirements.
The policies cover topics such as competition versus sole-source decisions, risk identification and assessment processes, and developing monitoring plans. These policies also include sample templates outlining various approaches to documenting these activities. Under State guidance, bureaus and posts may tailor these policies to their specific needs. Given the considerable turnover rate of GOs who are Foreign Service Officers, as well as the fact that grants management is not often their primary task, a strong internal control environment is essential for accountability.

State has not consistently implemented the risk analysis and documentation of internal controls required by grants management policies and guidance, a fact that weakens assurance that grant funds are used as intended. In particular, grants officials have not adhered consistently to State’s policies about identifying, assessing, and mitigating risks associated with the grants we reviewed. Furthermore, grants officials do not always document the implementation of key internal control activities as required. State has established procedures for assessing grants officials’ implementation of its internal controls. In conducting these reviews, A/OPE found insufficient documentation in the grant files at all 10 of the bureaus and posts that were also in our review and recommended solutions, but did not systematically follow up to ensure that the bureaus and posts had implemented them.

Grants officials responsible for the files in our sample often did not adhere to State’s policies on risk assessment. Federal standards define risk assessment as “the identification and analysis of relevant risks associated with achieving the objectives.” A risk analysis helps ensure that grants officials undertake the necessary control activities and use oversight resources appropriately.
State’s policy on risk management further elaborates that risk assessment should begin in the preaward phase and continue throughout the grant life cycle. Furthermore, it states that a risk management plan must include identification, assessment, and monitoring and mitigation of risk. In most of the files we reviewed, however, we did not find evidence that grants officials had fulfilled these requirements, as described below.

Risk identification: State’s policies require grants officials to carry out a comprehensive review of potential recipients to identify risks. Risk factors could include a lack of stable financial infrastructure or experience in managing a U.S. government grant, past performance problems, an unusual or difficult environment, responsibility for a large amount of funds, and concern that the organization might be involved in terrorist activities. Of the 61 grant files we reviewed, 45 showed that grants officials had at least partially undertaken a risk identification process. However, 33 of these were missing key elements of a risk identification process, such as a review of the recipient’s financial systems and internal controls.

Risk assessment: State policy requires grants officials to exercise a greater level of oversight for high-risk versus low-risk grants; however, we found in our file reviews that grants officials often did not assess identified risks to determine the grants’ risk level. Of the 45 grants that underwent a risk identification process, 28 had risks identified. However, only 15 of those 28 grants also included at least a partial assessment of risks, such as a preaward checklist at one post that included some questions about prior federal grant management experience and other past performance issues. In addition, while State has established a variety of guidance on risk assessment, the wording of the guidance is not consistent in certain aspects, such as assessment of external risk factors. For example, while one of the grants policy directives lists Transparency International’s Corruption Perceptions Index as a risk assessment resource, none of A/OPE’s sample risk assessment templates mention the index or corruption in general. In another example, we found that a grant to support civil society had received a “low-risk” determination, based on calculations from a risk assessment checklist, although grants officials remarked in the notes section of the checklist that corruption was rampant in that country. Since corruption was not an element of the checklist itself, it did not factor into the overall risk-level determination—nor was it reflected anywhere else in the grant file documentation.

Risk mitigation: State policies require grants officials to document how they will mitigate, or address, identified risks, including by preparing and implementing a monitoring plan. Of the 15 grants that showed both identification and assessment of risks, 11 showed at least partial mitigation of those risks (see fig. 2). Of the remaining 4 grants that showed no risk mitigation, 1 included an audit report identifying significant financial management risks, such as noncompliance with allowable costs, on the part of the grant recipient. The grants officials, however, did not reflect these risks in a risk mitigation or monitoring plan. In two other grant files we reviewed, grants officials identified allegations of prior financial mismanagement but awarded the grant without addressing how the risk would be mitigated.

Furthermore, State’s various risk-related guidance does not clearly emphasize the importance of linking risk assessment to monitoring. Federal standards state that risk analysis generally includes deciding how to manage the risk and what actions should be taken. Of the five sample monitoring templates that A/OPE has developed, however, three do not mention risk.
Grants officials who use these templates, therefore, may not make the link between monitoring and risk. For example, of the 13 grants from two bureaus in our sample that had developed their own risk assessment checklists based on A/OPE’s templates, we found that 7 did not reflect the identified risks in a monitoring plan. Moreover, of the 23 grants in our sample overall that had assessed risks and had a monitoring plan, only 10 reflected the risks in the plan.

Without identifying and assessing risks, it may be difficult for State to determine whether there are any observable impediments to the recipient’s effective management of State funds that would need to be mitigated. Determining which grants warrant greater oversight and which require less helps managers ensure the appropriate allocation of resources for safeguarding grant funds.

Grants officials in the sample of grants we reviewed generally did not adhere to State policies and procedures relating to documenting control activities. Federal standards call for the clear, prompt, and accurate documentation of internal control and all other significant events, including risk assessments and monitoring activities, and state that this documentation should be readily available for examination. State guidance directs grants officials to document key internal control activities throughout the life cycle of a grant, including the use of funds, the recipient’s progress, and the grants officials’ assessment of that progress (see fig. 3). To support officials as they implement this guidance, State has created multiple systems for organizing, retaining, and sharing information about grants, whether the files are in electronic or hard copy form. Documenting grant management activities is particularly important because of grants officials’ considerable turnover rate, which can leave newly assigned grants officials dependent on files to determine what control activities are required and which have been conducted.
However, the grant files we reviewed did not consistently document key grant activities as required to demonstrate that internal controls had been implemented. In many of the files we reviewed, required documents were missing from the official file or incomplete, as described below.

Internal control checklist: State requires each grant file to contain a checklist, which grants officials are supposed to use to document the completion of many internal control activities throughout the grant’s life cycle. These activities include vetting recipients, identifying the amount of funds and project duration, identifying key contacts at State and the recipient organization, and tracking receipt of information from the recipient about progress made and costs incurred. State’s training further emphasizes the importance of keeping all sections of this checklist current to assist managers in monitoring grants and to ensure that they fulfill the U.S. government’s obligation to grant recipients. However, 17 of the 61 files we reviewed did not contain this checklist, and only about 11 percent (5 of 44) of the forms that did exist were completely filled out.

Grant award justification: State requires grant files to contain a justification for awarding any grants without full and open competition. However, of the 24 grants in our sample that were sole-sourced, 7 did not contain this justification.

Monitoring plans: State policies require grants officials to prepare a monitoring plan to measure the recipient’s progress toward achieving the grant’s goals and objectives and to ensure that the recipient complies with the grant agreement. The grants officials must use the plan to indicate the type and frequency of monitoring they will conduct, given the risks involved and the resources available for these activities. However, about half of the files we reviewed (32 of the 61) did not contain a monitoring plan. Furthermore, some of the 29 plans that were documented did not address key aspects of monitoring the terms and conditions of the grant agreement, such as risk analysis and evaluation. For example, 3 of the 29 documented plans we reviewed did not describe how the GO planned to monitor progress toward the grant’s specific goals and objectives.

Monitoring activities conducted: While most of the files contained evidence of some monitoring activities, such as e-mail communication with recipients, this monitoring did not completely adhere to State’s guidance. Of the 29 documented monitoring plans, the evidence showed that grants officials had fully executed 8 of them and partially executed an additional 10. For example, some of the partially executed plans indicated that the grants officials were to conduct site visits and review recipient reports, but the grant files showed that the grants officials had closed the grants without documenting any evidence of these planned monitoring activities. Regardless of whether grants had monitoring plans, we found that only 16 of the 61 files contained evidence that the grants officials had completed the required review of all the recipient’s financial and programmatic reports to monitor for key information—including verifying timely progress toward the goals, as well as identifying and addressing any delays or inappropriate expenditures. Another 24 of the 61 files contained evidence that grants officials had done partial reviews of these reports. These reviews, as well as reviews of the final recipient reports, are required to close an award, but we found four awards that GOs had closed without evidence that they had reviewed the recipient’s final reports.

On more than one occasion, State had difficulty providing required internal control documentation to us, either because a grants official was on leave or had moved on to another post.
For example, we identified incomplete documentation in some award files at one post, including missing final financial and programmatic reports. Three months after our site visit, the GO still could not produce the documents because the grants management specialist whose computer's hard drive contained the documents was on extended leave. Two other GOs we interviewed said the files they inherited did not contain required documents, such as the approval forms from bureau headquarters for awards exceeding $25,000. Without documentation such as the internal control checklist, the grant award justification, monitoring plans, and monitoring reports, State cannot provide adequate oversight to ensure that grant funds are being used as intended. Without the required checklist, for example, managers cannot readily ensure that the documentation supporting the management of each grant is present and complete. Similarly, unless the GO documents the decision not to award a grant through open competition, State cannot determine whether that decision was justifiable according to State's guidance. Furthermore, while grants officials told us they do conduct monitoring, absent adequate monitoring plans and reports it is difficult for managers to determine whether grants officials are allocating resources effectively, whether they are conducting monitoring, or whether a grant accomplished its intended goals. GOs and GORs we interviewed cited a variety of reasons for not conducting the required risk analysis and documentation, including misunderstanding of State policies and guidance, heavy workloads, and a lack of staff expertise. For example, several of these grants officials told us they did not do a risk analysis either because the recipient was well known or because they knew intuitively that the risk level was low, particularly if the grant was relatively small or funded a short-term project such as a 2-day photography exhibit.
State policies, however, do not exempt well-known organizations from risk analysis. Moreover, we found an example where the GO assumed the grant was low-risk without doing a full risk analysis, even though it was the first time that State had awarded a grant to the recipient. The grant files contained evidence that, as the grant progressed, the recipient had a variety of performance issues, including trying to use the grant funds for activities other than those intended. The amount of time and resources dedicated to an appropriate risk analysis may vary with a grant's dollar value, how well known the recipient is, or other factors; nonetheless, according to an A/OPE official, a risk analysis is still required in these circumstances. State, however, does not have a process in place for ensuring that grants officials conduct a risk analysis. According to State officials and the files we reviewed, multiple responsibilities and large portfolios limited the oversight these officials conducted to ensure that required documentation was in place. Grants officials reported having high numbers of grants to manage and multiple responsibilities beyond managing grants. At bureaus with GOs based in Washington, D.C., workloads tended to be higher, according to an A/OPE official. This official reported that these domestic GOs undertook an average of 46 grant transactions in fiscal year 2013, and two Washington, D.C.-based GOs we interviewed reported having portfolios of 110 and 200 active grants, respectively. At two posts we visited, the GOs were responsible for managing portfolios of about 56 to 66 grants in fiscal year 2012, in addition to their primary responsibilities as public affairs officers for the embassies. Both GOs reported relying on GORs and others to complete the award documentation and conducting only occasional spot checks of the files the GORs compiled.
At both posts, however, we found that each grant file in our sample managed by these GOs was missing required internal control documents. Furthermore, we brought to the GOs' attention oversight issues of which they had previously been unaware, such as unexpended funds from a closed-out grant. Some grants officials told us that State's systems for documenting key management activities are not easy to use. A/OPE officials told us they are working to improve the main electronic management systems used by grants officials (the Grants Database Management System and the State Assistance Management System), but the accuracy of some information will still depend on the person inputting it, and State does not have a process for ensuring that all of the required documentation is included. Three grants officials we interviewed reported that interns and grant assistants update the paper and electronic files and that the grants officials do not check their work. In addition, some grants officials told us that while official files might be missing required documentation, the information was stored elsewhere and could be retrieved upon request. While some grants officials eventually produced the requested information, others did not. Some grants officials told us that they kept information about recipient performance on their computer's hard drive, a practice that may limit other grants officials' access to this information. Furthermore, the official files did not indicate where this information could be found. State has assessed the implementation of internal controls at several bureaus and posts and has recommended improvements; however, it has not followed up to ensure the implementation of its recommendations. According to the federal standards, successful monitoring should include policies and procedures for regularly assessing the effectiveness of the internal controls in place and for ensuring that the findings of audits and other reviews are promptly resolved.
In 2008, A/OPE issued a policy to systematically review grants management at posts and bureaus. The policy stated that the number and extent of the reviews conducted each year would depend on A/OPE's available resources. A/OPE has assessed compliance with grants management policies at some bureaus in Washington, D.C., and at overseas posts and found some deficiencies in grants officials' implementation of those policies. In 2008, State created a Grants Management Review (GMR) program with guidelines and a checklist for reviews. Bureaus and posts are to be selected for review based on weaknesses identified by the Inspector General, the dollar amount and volume of grants processed, informal risk assessments, the public visibility of the grants, and bureau or post requests to be reviewed. In addition to the GMRs and other reviews of grant-making bureaus headquartered in Washington, D.C., A/OPE has conducted less formal reviews at grant-making posts overseas, combining file reviews with training. These reviews, called Grants Review Evaluation and Assistance Trainings (GREAT), are initiated in response to requests for training or when A/OPE becomes aware of challenges faced at certain posts, including considerable turnover. According to officials, to conserve resources A/OPE tries to select posts for review that are located at or near destinations where A/OPE staff have other reasons to travel. A/OPE officials said their method for conducting GREATs is similar to that used for GMRs, but less in-depth. As a result, unlike the formal GMRs, which State officials said can take 2 to 3 months to conduct, these informal GREATs take 2 to 3 days and result in shorter reports. Between 2001 and February 2014, A/OPE completed 13 GMRs and other reviews at 12 bureaus and offices headquartered in Washington, D.C., and 42 GREATs at 37 posts around the world.
These assessments of grants officials’ implementation of grants management policy at bureaus and posts have found insufficient documentation, among other deficiencies, and recommended solutions. Specifically, 52 of the 55 reviews found insufficient documentation in grant files, affecting 47 bureaus and posts. Ten of the bureaus and posts where A/OPE had conducted GMRs, GREATs, and other reviews were included in our review samples. At all 10 bureaus and posts, State found challenges with documentation similar to those we identified. For example, in 5 of the 10 bureaus and posts, A/OPE found that grants officials did not consistently document monitoring reports or site visit reports, and in one case the post had no monitoring plans in its award files. A/OPE found insufficient documentation in the other 4 bureaus as well, such as incomplete internal control checklists or inadequate documentation of sole source justification. One bureau in Washington, D.C., kept its monitoring reports on an electronic shared drive, but the award files did not indicate where to find the reports. A/OPE has made many recommendations to bureaus and posts aimed at correcting the deficiencies it identified. These recommendations have included the establishment of standard operating procedures and more effective use of electronic systems for documentation, among others. However, State has not systematically followed up to ensure the implementation of these recommendations. State’s system for tracking compliance with its grants management policies has yielded recommendations for improvement at 48 of the 49 bureaus and posts it has reviewed, but State has conducted follow-up with only 6 of those 48 bureaus and posts to ensure that these recommendations were implemented. State’s policy for GMRs outlines how State will select and conduct a review, but does not provide a procedure for ensuring that corrective action has been taken and no further management action is needed. 
However, A/OPE officials report that they have limited staff, with only one individual currently leading both the GMR and GREAT reviews. In addition, they report having a limited travel budget for conducting follow-up reviews. A senior A/OPE official said it is necessary to be vigilant about regularly sending messages regarding important grant-related tasks, particularly when there is considerable turnover among personnel at posts. Because State does not track or report on the implementation of recommendations, State cannot determine whether its grants management reviews and training are achieving their purpose of strengthening the management and oversight of assistance agreements. Given the relatively large amount of funding for grants and the widespread use of these instruments to achieve foreign policy goals, it is important for State to ensure that grant funds are used as intended. State has made progress toward establishing the internal controls it needs to gain this assurance. For example, State has outlined its expectations for grants management in detailed policies and guidance that should be clear to all grants officials. In particular, the requirements to conduct a risk analysis and document the implementation of required control activities conform to the federal standards for internal control. However, we found that most of the grant files we reviewed did not contain evidence of an appropriate risk analysis or were missing other required internal control documentation, and State has not developed processes for ensuring that grants officials implement these requirements. Therefore, State's assurance that grant funds are used as intended is weakened. Recommendations made in State's internal reviews of grant-making practices reinforce expectations concerning documentation.
However, State management does not systematically follow up to ensure that grants officials throughout the department consistently implement these required control activities or act upon the recommendations made. As a result, State cannot be certain that its oversight of grants management is adequate or that it is using its limited oversight resources effectively. To help ensure that State's grants officials fully implement the grants management policies and internal controls that are in place, and that grant funds are used as intended, we recommend that the Secretary of State take the following two actions: (1) develop processes to help ensure that bureaus and missions conduct appropriate risk assessments and that grants officials complete required documentation for all grants; such a process could include systematic inspections of grant files, with the results shared among A/OPE, the appropriate bureaus and missions, and the grants officials themselves, so as to promote accountability; and (2) follow up systematically on recommendations from State's internal reviews of its grants management. We provided a draft of this report to State for its review and comment. State provided written comments, which we have reprinted in appendix III, as well as technical comments, which we incorporated as appropriate. State provided additional information about its efforts to establish policies and guidance to provide a supportive environment for administering and overseeing grants. In particular, State noted that, in addition to the policies and guidance described in our report, the department has implemented processes regarding grant-making authority and certification for grants officials and has cooperated with the Office of Management and Budget on governmentwide foreign assistance management issues.
State concurred with our recommendations to develop processes for ensuring that bureaus and missions conduct appropriate risk assessments and that grants officials complete required documentation. Specifically, State indicated that it will modify risk assessment guidance to include suggestions from our report, provide additional training focused on risk assessment, and specifically evaluate compliance with risk assessment requirements in State’s own assessments of internal controls at bureaus and posts. State also indicated that it will increase the emphasis on file documentation and expand the extent of file reviews during these assessments at bureaus and posts to help ensure that grants officials complete required documentation for all grants. In addition, State concurred with our recommendation to follow up systematically on recommendations from State’s internal reviews of its grants management. Specifically, it said that it will require formal responses to recommendations from its grant management assessments at bureaus and posts, to include recommendation implementation status updates. We are sending copies of this report to the appropriate congressional committee and the Secretary of State. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or GootnickD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to examine (1) the policies and guidance that the Department of State (State) has established to administer and oversee grants and cooperative agreements, and (2) the extent to which the implementation of those policies and guidance provides reasonable assurance that funds are being used as intended. 
To define grants and cooperative agreements (grants) and to describe the roles and responsibilities of those involved in managing State’s grants and the key activities grants officials must conduct during each phase, we reviewed State’s grants policy directives (GPD) and State’s required grants officials training, and interviewed State officials. To describe the internal control standards applicable to State grants management, we reviewed the Standards for Internal Control in the Federal Government (the federal standards). To describe the policies and guidance that State has established to administer and oversee grants, we reviewed federal regulations, Office of Management and Budget (OMB) circulars, and State’s GPDs, as well as training and other resources the department provides its grants officials. We collected and analyzed all 54 active GPDs issued by State’s Office of the Procurement Executive (A/OPE). We reviewed State Department training regarding State’s incorporation of federal regulations into its grants policies and interviewed A/OPE officials regarding this topic. We interviewed officials from A/OPE and the Office of the Deputy Chief Financial Officer, who set department-wide policies on grant performance and financial management, and bureau and post officials regarding any additional policies and guidance they provide that is specific to the programs they manage. To describe the extent to which State’s GPDs establish internal controls throughout the phases of a grant’s life cycle, we compared the key activities the grants officials must conduct at each phase, as well as the additional training and guidance A/OPE provides to grants officials, with the federal standards and with federal regulations for grants management. 
To assess the extent to which State's policies and guidance provide reasonable assurance that funds are being used as intended, we compared State's policies and guidance for grants management with the implementation of those policies, and we reviewed State's own process for conducting internal assessments of that implementation. We interviewed State officials from A/OPE, the Office of the Deputy Chief Financial Officer, and various functional and regional bureaus and offices headquartered in Washington, D.C., as well as three overseas missions, to determine how State both designs and implements department-wide internal control policies on grants performance and financial management. To further determine how State implements these policies, we selected three case study countries (Afghanistan, Cambodia, and Turkey) based on criteria that included the total dollar value of grants in a country, geographic diversity, and balance among the bureaus involved in managing the awards. For these countries we examined a nongeneralizable sample of 48 grants, selected by award size and bureau, that had obligations in fiscal year 2012. In addition, we used the same criteria to draw another nongeneralizable sample of 13 grants managed in Washington, D.C. Overall, the 61 grants we reviewed ranged in value from just over $25,000 to $28,000,000 and totaled approximately $172 million. To arrive at these 61 grants, we included all grants over $25,000 with obligations in fiscal year 2012 in Cambodia and Turkey, which had 15 and 21 such grants respectively, and selected 20 such grants in each of the other two countries (Afghanistan and the United States), for a total of 76 files. We excluded grants that listed multiple places of performance, selecting only those grants for which one of our four countries was listed as the sole place of performance.
However, once we began conducting interviews regarding these grants, we discovered that some of the grants had been mislabeled in State’s Grants Database Management System. Specifically, 2 were managed in countries other than the ones listed in the database; another grant was listed in the database as being managed in Cambodia alone, but was in fact managed from Washington, D.C., and implemented worldwide; and a fourth grant was a duplicate of a grant already in our sample. Correcting for these errors and adjustments, our sample was reduced to 72 grants. Three bureaus each managed 10 or more of these, accounting for 39 of the 72 grants. We determined that we had obtained sufficient coverage of grants from the three bureaus in question by reviewing 28 of their 39 grants. This determination further reduced our overall sample size from 72 to 61 grants. The final sample of 61 grants we conducted file reviews for included 19 in Afghanistan, 10 in Cambodia, 19 in Turkey, and 13 in the United States. Collectively, the grants in our samples were managed by 3 posts and 14 bureaus of the 27 grant-making bureaus and offices in State. Our sample included grants by 6 of the top 10 bureaus and posts in terms of State’s total federal financial assistance obligations in fiscal year 2012. Our sample was nongeneralizable and did not allow us to determine whether there were any statistically significant differences by factors such as bureau or award size. In this report, we presented the overall results of the data on internal control activities for all 61 grants that we selected for data collection instrument review. During our analysis, we also looked for any overall patterns or differences by bureau and award size in terms of dollar value, but did not note any. We identified examples of concerns about controls that we reported on, such as incomplete documentation and absence of a risk assessment, in awards with high and low dollar values across the bureaus in our sample. 
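The sample arithmetic in the two paragraphs above can be verified with a short sketch. All counts come from the text; the assumption that 20 files each were drawn for Afghanistan and the United States is inferred from the stated total of 76.

```python
# Quick arithmetic check of the report's sample counts (figures from the text;
# per-country initial counts for Afghanistan and the United States are inferred
# from the stated total of 76 files).
initial_files = {"Afghanistan": 20, "Cambodia": 15, "Turkey": 21, "United States": 20}
total_initial = sum(initial_files.values())   # 76 files pulled initially
after_corrections = total_initial - 4         # 2 mislabeled countries, 1 managed from
                                              # D.C. and implemented worldwide, 1 duplicate
final_sample = after_corrections - (39 - 28)  # reviewed 28 of the 39 grants managed
                                              # by the three largest bureaus
reviewed = {"Afghanistan": 19, "Cambodia": 10, "Turkey": 19, "United States": 13}

assert total_initial == 76
assert after_corrections == 72
assert final_sample == sum(reviewed.values()) == 61
```

The check confirms that the two reductions described (four database corrections, then eleven grants set aside at three large bureaus) reconcile the initial 76 files with the final sample of 61.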
To examine how grants officials implemented grants management policies, we conducted file reviews for 61 grants using a data collection instrument and interviews regarding those 61 grants using a standard set of questions. We developed the data collection instrument to assess grants officials’ implementation of a selection of State’s required internal control activities for grants management, including risk assessment, monitoring of recipients, and documentation of key activities. To identify these required internal control activities, we analyzed federal regulations and State policies and guidance related to management of grants across their life cycle and compared them against the federal standards. The data collection instrument asked a series of questions about critical elements of these internal control activities. During our reviews, we examined the files that State provided for each grant to determine whether they contained sufficient evidence to indicate whether the control activities had been performed. Our file review guidance indicated that while State policy requires the complete official file to be located in at least one place, analysts completing the reviews were directed to contact grants officials responsible for the files to request missing information where possible. Comments sections in the data collection instrument allowed the reviewers to document these requests and other notes they had about each question in the file. Two analysts reviewed each file, with the second analyst verifying the first analyst’s review. The focus of our analysis was the degree to which the grants officials had performed the required control activities for the grants in our sample. As noted above, we also considered whether there were any patterns or differences by award size in dollars or by bureau, but did not find any. 
In addition, the interview questions asked grants officials to identify training and guidance they found helpful, as well as any challenges they encountered as they implemented these policies. We pretested both the data collection instrument and interview questions in Kabul, and tested revised versions in Washington, D.C. Our file review and interview questions covered each phase of the grant life cycle, from preaward to closeout. We then compared the file review results for control activities against responses collected using the standard interview questions to provide a more complete picture of how the grants officials implemented the policies and identify reasons these officials cited for instances of noncompliance, as well as any State policies and guidance they found helpful. We conducted interviews using the standard set of questions with at least one grants officer (GO) for each of the 61 grants in our sample, ultimately interviewing more than 50 GOs and grants officer representatives (GOR), as well as some program officers. To identify any internal control deficiency patterns or trends, we analyzed and compared the data we collected from our file reviews and interviews and also compared them to the findings from State’s internal inspections of grant-making operations. To describe State’s assessment of grants officials’ implementation of its internal controls, we reviewed State’s guidance for conducting these reviews, interviewed A/OPE officials regarding the review process, and reviewed 55 assessment reports A/OPE issued between 2001 and February 2014. These 55 included the assessment reports A/OPE conducted in 12 grant-making bureaus, including 10 Grants Management Reviews and 3 other reviews, and reports from the 42 Grants Review Evaluation and Assistance Trainings A/OPE conducted at 37 posts around the world. Of the 55 assessment reports, 10 covered 10 of the 17 bureaus and posts included in our review. 
We analyzed those 10 reports to identify any findings related to the federal standards. We also reviewed all 55 reports to identify any instances where A/OPE either made recommendations and then returned to the bureaus and posts to follow up, or documented receiving recommendation implementation progress reports from those bureaus and posts. We conducted this performance audit from May 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We analyzed State’s grants policy directives (GPD) as well as its own grants management reviews, and we conducted file reviews for 61 grants in our sample to help determine both the policies State established for internal controls in grants management as well as how grants officials implemented those policies. To conduct the file reviews, we developed a data collection instrument that focused on basic documentation, decisions about competitive selections, risk management approach, monitoring approach, and closeout activities. For a full description of the methodology for the file reviews, see appendix I. The following tables reflect our overall analysis of State’s policies and management reviews as well as selected results from our file reviews that correspond to the five internal control standards of control environment, risk assessment, control activities, information and communication, and monitoring. We analyzed A/OPE’s GPDs against each internal control standard to describe the control environment (see table 2). 
For risk assessment, control activities, and information and communications, we selected results from our 61 file reviews using a data collection instrument we developed to analyze these three internal control standards in particular (see tables 3-8). Finally, for monitoring, we analyzed A/OPE's findings from its internal reviews assessing the implementation of internal control activities at the eight bureaus and two posts included in our sample (see table 9). We selected these results because they add detail to the information summarized in the report. As shown in table 2, we analyzed all 54 of A/OPE's active GPDs against each internal control standard to help describe the control environment. To determine the extent to which overall risk management was conducted for each grant, each file reviewer first answered a series of questions in our data collection instrument about key elements of risk management, including the risk identification process, review of financial and systems controls, consideration of external risks, risk assessment, and risk mitigation. On the basis of the responses to those questions, the reviewers made a final determination on overall risk management (see table 3). To determine the extent to which overall monitoring (see table 4) and closeout activities (see table 5) were conducted for each grant, each file reviewer first answered a series of questions in our data collection instrument about key elements of monitoring and closeout, including whether there was a monitoring plan; if so, whether it reflected risks, whether monitoring mechanisms were described in the plan, and whether the plan had been executed; and whether monitoring had been carried out in the absence of a plan. On the basis of the responses to those questions, the reviewers made a final determination about overall monitoring.
The data collection instrument we used to conduct the 61 file reviews also contained questions related to information and communications. For example, it contained questions and statements to verify whether required documentation was present and, if so, whether it was complete. The documentation we looked for included the required internal control checklist, or DS-4012 (see table 6); overall documentation of competitive selection decisions (see table 7); and the required justification for awarding any grants without full and open competition (sole-sourcing) (see table 8). To determine the extent to which overall competitive selection decisions were justified for each grant, each file reviewer first answered a series of questions about key elements of that justification, including, but not limited to, whether the award was sole-sourced and, if so, whether the decision to sole-source was justified in writing. On the basis of the responses to those and other questions, the reviewers made a final determination about the overall documentation of justification of competitive selection decisions (see table 7). A/OPE has assessed compliance with grants management policies at some bureaus in Washington, D.C., and at overseas posts using its Grants Management Reviews (GMR), Grants Review Evaluation and Assistance Trainings (GREAT), and other reviews. The 55 reviews State conducted between 2001 and February 2014 found some deficiencies in grants officials' implementation of State's grants management policies. Eight of the bureaus and two of the posts where State conducted these reviews were included in our sample of grants. We analyzed those 10 reports to identify any findings related to the federal standards (see table 9).

In addition to the contact named above, James B.
Michels (Assistant Director), Judith Williams, Katherine Forsyth, Jacob Beier, Debbie Chung, Oliver Culley, Martin De Alteriis, Jon Fremont, Farhanaz Kermalli, Anne McDonough-Hughes, Kimberly McGatlin, Shakira O'Neil, and Justin Fisher made key contributions to this report. Etana Finkler, Ernie Jackson, Julia Jebo Grant, and Cristina Ruggiero provided technical assistance.

Grant Workforce: Agency Training Practices Should Inform Future Government-wide Efforts. GAO-13-591. Washington, D.C.: June 28, 2013.
Cuba Democracy Assistance: USAID's Program Is Improved, but State Could Better Monitor Its Implementing Partners. GAO-13-285. Washington, D.C.: January 25, 2013.
Grants to State and Local Governments: An Overview of Federal Funding Levels and Selected Challenges. GAO-12-1016. Washington, D.C.: September 25, 2012.
Grants Management: Action Needed to Improve the Timeliness of Grant Closeouts by Federal Agencies. GAO-12-360. Washington, D.C.: April 16, 2012.
Improper Payments: Remaining Challenges and Strategies for Government-wide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012.
2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.
Federal Grants: Improvements Needed in Oversight and Accountability Processes. GAO-11-773T. Washington, D.C.: June 23, 2011.
Grants.gov: Additional Action Needed to Address Persistent Governance and Funding Challenges. GAO-11-478. Washington, D.C.: May 6, 2011.
Government Performance: GPRA Modernization Act Provides Opportunities to Help Address Fiscal, Performance, and Management Challenges. GAO-11-466T. Washington, D.C.: March 16, 2011.
Iraq and Afghanistan: DOD, State, and USAID Face Continued Challenges in Tracking Contracts, Assistance Instruments, and Associated Personnel. GAO-11-1. Washington, D.C.: October 1, 2010.
Contingency Contracting: Improvements Needed in Management of Contractors Supporting Contract and Grant Administration in Iraq and Afghanistan. GAO-10-357. Washington, D.C.: April 12, 2010. Iraq and Afghanistan: Agencies Face Challenges in Tracking Contracts, Grants, Cooperative Agreements, and Associated Personnel. GAO-10-509T. Washington, D.C.: March 23, 2010. Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006. GAO-10-365. Washington, D.C.: March 12, 2010. Grants Management: Grants.gov Has Systemic Weaknesses That Require Attention. GAO-09-589. Washington, D.C.: July 15, 2009. Single Audit: Opportunities Exist to Improve the Single Audit Process and Oversight. GAO-09-307R. Washington, D.C.: March 13, 2009. Grants Management: Attention Needed to Address Undisbursed Balances in Expired Grant Accounts. GAO-08-432. Washington, D.C.: August 29, 2008. Department of State: Human Capital Strategy Does Not Recognize Foreign Assistance Responsibilities. GAO-07-1153. Washington, D.C.: September 28, 2007. Foreign Assistance: U.S. Democracy Assistance for Cuba Needs Better Management and Oversight. GAO-07-147. Washington, D.C.: November 15, 2006. Grants Management: Enhancing Performance Accountability Provisions Could Lead to Better Results. GAO-06-1046. Washington, D.C.: September 29, 2006. Grants Management: Grantees’ Concerns with Efforts to Streamline and Simplify Processes. GAO-06-566. Washington, D.C.: July 28, 2006. Auditing and Financial Management: Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1, 1999.

Grants are key tools that State uses to conduct foreign assistance. In fiscal year 2012, State obligated over $1.6 billion worldwide for around 14,000 grants to individuals and organizations for a variety of purposes, such as fostering cultural exchange and facilitating refugee resettlement.
However, recent GAO and Inspectors General reports have identified challenges with State's management of these funds. This report examines (1) the policies and guidance that State has established to administer and oversee grants, and (2) the extent to which the implementation of those policies and guidance provides reasonable assurance that funds are being used as intended. GAO analyzed State's policies and guidance, and interviewed cognizant grants officials at 14 bureaus headquartered in Washington, D.C., and three overseas missions (Afghanistan, Cambodia, and Turkey). GAO also conducted file reviews for a sample of 61 grants totaling approximately $172 million. Selection criteria included total dollar value of grants in a country, geographic diversity, and balance among bureaus. The Department of State (State) has established policies and guidance that provide a supportive environment for managing grants and cooperative agreements (grants). In addition, State provides its grants officials mandatory training on these policies and guidance, and routinely identifies and shares best practices. State's policies are based on federal regulations, reflect internal control standards, and cover topics such as risk assessment and monitoring procedures. State's policies also delineate specific internal control activities that grants officials are required to both implement and document in the grant files as a way of promoting accountability (see fig.). GAO found that inconsistent implementation of policies and guidance weakens State's assurance that grant funds are used as intended. Inadequate risk analysis. In most of the files GAO reviewed, grants officials did not fully identify, assess, and mitigate risks, as required. For example, officials conducted a risk identification process for 45 of the 61 grants that GAO reviewed. While grants officials identified risk in 28 of those 45 grants, they mitigated risks in only 11. Poor documentation.
Grants officials generally did not adhere to State policies and procedures relating to documenting internal control activities. For example, 32 of the 61 files reviewed did not contain the required monitoring plan. Considerable turnover among grants officials makes documenting internal control activities particularly important. State's periodic management reviews of selected bureaus' and overseas missions' grant operations have also found that key documentation was frequently missing or incomplete and made recommendations to address the problem. However, State has not consistently followed up to ensure the implementation of these recommendations, as internal control standards require. State does not have processes for ensuring compliance with risk analysis and documentation requirements. Without the proper implementation of its internal control policies for grants management, State cannot be certain that its oversight is adequate or that it is using its limited oversight resources effectively. GAO recommends that the Secretary of State develop processes for ensuring that (1) bureaus and missions conduct appropriate risk assessments and (2) grants officials complete required documentation. GAO also recommends that the Secretary of State (3) follow up systematically on recommendations from State's internal reviews of its grants management. State concurred with GAO's recommendations.
ATF’s mission is to protect communities from violent criminals, criminal organizations, and illegal use and trafficking of firearms, among other things. To fulfill this mission, ATF has 25 field divisions located throughout the United States. To efficiently and effectively carry out its criminal enforcement responsibilities related to firearms, ATF maintains certain computerized information on firearms, firearms transactions, and firearms purchasers. To balance ATF’s law enforcement responsibility with the privacy of firearms owners, Congress has required FFLs to provide ATF certain information about firearms transactions and the ownership of firearms while placing restrictions on ATF’s maintenance and use of such data. In addition to its enforcement activities, ATF also regulates the firearms industry, including issuing firearms licenses to prospective FFLs, and conducting FFL qualification and compliance inspections. A critical component of ATF’s criminal enforcement mission is the tracing of firearms used in crimes to identify the first retail purchaser of a firearm from an FFL. The Gun Control Act of 1968, as amended, established a system requiring FFLs to record firearms transactions, maintain that information at their business premises, and make these records available to ATF for inspection and search under certain prescribed circumstances, such as during a firearms trace. The system was intended to permit law enforcement officials to trace firearms involved in crimes while allowing the records themselves to be maintained by the FFLs rather than by a governmental entity. Figure 1 shows one possible scenario in which a firearm is purchased at an FFL, the FFL maintains records on the purchase, the firearm is used in a crime, and a law enforcement agency recovers the firearm and submits it for tracing. 
Through the use of these records maintained by FFLs and provided to ATF in certain circumstances, ATF provides firearms tracing services to federal, state, local, and foreign law enforcement agencies. The objective of the trace is to identify the first retail purchaser of the firearm. To carry out its firearms tracing responsibilities, ATF maintains a firearms tracing operation at NTC in Martinsburg, West Virginia. As shown in figure 2, NTC traces firearms suspected of being involved in crimes to the first retail purchaser to assist law enforcement agencies in identifying suspects. NTC generally receives trace requests through eTrace, a web-based submission system, but also receives requests by fax, telephone, and mail. To conduct a trace, NTC must receive the recovered firearm’s description—including manufacturer and serial number—from the law enforcement agency. NTC determines the ownership of the firearm by first conducting automated checks of data systems that are maintained at NTC. If these automated checks do not identify a matching firearm description within the systems, an NTC analyst contacts the chain of distribution for the firearm—the series of businesses that are involved in manufacturing and selling the firearm. For example, after automated data system checks, an NTC analyst may call the manufacturer of the firearm, who informs NTC that the firearm was sold to a certain distributor. The NTC analyst will then call that distributor, and so on until the individual is identified. For many traces, an FFL in the chain of distribution has gone out of business, so an NTC analyst must consult the FFL’s out-of-business records, which are also maintained by NTC. ATF documents each trace request and its results, and provides that information to the law enforcement requester. ATF considers a request completed when it traces the firearm to a retail purchaser, or when it cannot identify the purchaser for various reasons.
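The tracing workflow just described (automated system checks first, then stepwise contacts down the chain of distribution, with a fallback to out-of-business records) can be sketched in Python. All of the names, records, and structures below are hypothetical illustrations, not ATF data or code.

```python
# Hypothetical illustration of the trace workflow described above;
# none of these names, records, or structures are ATF's actual systems.

# Each in-business FFL's disposition record points to the next business
# in the chain; a closed FFL's records live in a separate store.
CHAIN = {
    ("Acme Arms", "SN-1001"): ("Midwest Distributing", "in_business"),
    ("Midwest Distributing", "SN-1001"): ("Corner Pawn", "out_of_business"),
}
OUT_OF_BUSINESS_RECORDS = {
    ("Corner Pawn", "SN-1001"): "retail purchaser J. Doe",
}
AUTOMATED_SYSTEMS = {}  # hits in systems such as MS would short-circuit a trace


def trace(manufacturer, serial):
    """Trace a recovered firearm to its first retail purchaser."""
    # Step 1: automated checks of data systems maintained at NTC.
    hit = AUTOMATED_SYSTEMS.get((manufacturer, serial))
    if hit:
        return hit
    # Step 2: contact each business in the chain of distribution in turn.
    current = manufacturer
    while (current, serial) in CHAIN:
        next_ffl, status = CHAIN[(current, serial)]
        if status == "out_of_business":
            # Step 3: a closed FFL cannot be called, so consult the
            # out-of-business records it delivered to ATF.
            return OUT_OF_BUSINESS_RECORDS.get(
                (next_ffl, serial), "purchaser not identified")
        current = next_ffl
    return OUT_OF_BUSINESS_RECORDS.get(
        (current, serial), "purchaser not identified")
```

As in the report's example, a trace that reaches a closed FFL in the chain succeeds only because that FFL's records were delivered to ATF when it went out of business.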
For example, the description of the firearm as submitted by the requester may not have contained sufficient information to perform a trace. For fiscal year 2015, ATF received a total of 373,349 trace requests, completed 372,992 traces, and identified a retail FFL or a purchaser of the traced firearm in about 68 percent of the completed traces. Since the passage of the Gun Control Act of 1968, Congress has passed provisions that place restrictions on ATF’s handling of FFL records. In 1978, citing to the general authorities contained in the Gun Control Act, ATF proposed regulations that would have required FFLs to report most of their firearms transactions to ATF through quarterly reports. Under the proposed regulations, these FFL reports of sales and other dispositions would not have identified a nonlicensed transferee, such as a retail purchaser, by name and address. These proposed regulations prompted concerns from those who believed that the reporting requirements would lead to the establishment of a system of firearms registration. Since then, Congress has placed restrictions on ATF’s use of funds to consolidate or centralize firearms records, as discussed below. In 1978, the Treasury, Postal Service, and General Government Appropriations Act, 1979, prohibited the use of funds for administrative expenses in connection with the consolidation or centralization of FFL records at the agency, or the final issuance of the 1978 proposed regulations. This restriction was included in each of ATF’s annual appropriations through fiscal year 1993. In 1993, the Treasury, Postal Service, and General Government Appropriations Act, 1994, removed the reference to the 1978 proposed rules, but expanded the prohibition to include the consolidation or centralization of portions of records, and to apply to the use of funds for salaries as well as administrative expenses. This provision was included in each of ATF’s annual appropriations through fiscal year 2011. 
In ATF’s fiscal year 2012 appropriation, Congress made the restriction permanent, providing “[t]hat no funds appropriated herein or hereafter shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of Justice, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees.” ATF collects and maintains data from the firearms industry to carry out its criminal and regulatory enforcement responsibilities, and has established 25 national ATF data systems relating to firearms to maintain the data it collects. Of these 25 data systems, the following 16 data systems contain retail firearms purchaser information:
1. Access 2000 (A2K)
2. ATF NICS Referral
3. Firearm Recovery Notification Program (FRNP)
4. Firearms and Explosives Import System
5. Firearms Information Reporting System
6. Firearms Tracing System
9. Multiple Sales (MS)
10. National Firearms Act System / National Firearms Registration and Transfer Record System
14. Out-of-Business Records Imaging System (OBRIS)
15. Suspect Person Database
More details on these systems are provided in appendix II. From the 16 data systems that contain retail purchaser information, we selected 4 systems for an in-depth review of compliance with the appropriations act restriction on consolidation or centralization, and adherence to ATF policies: OBRIS, A2K, FRNP, and MS, including Demand Letter 3. See appendix I for our selection criteria. These systems are operated and maintained by NTC and play a significant role in the firearms tracing process as shown in figure 3. OBRIS is a repository of nonsearchable images of firearms records that allows NTC employees to manually search for and retrieve records during a firearms trace using an FFL number and a firearm description (e.g., serial number). Out-of-business records are integral to the firearms tracing process.
According to ATF officials, in approximately 35 to 38 percent of trace requests, there is at least one entity in the chain of distribution that has gone out of business. Therefore, in more than one-third of firearms trace requests, NTC analysts must consult OBRIS at least once. According to ATF data, as of May 5, 2016, there were 297,468,978 images of firearms records in OBRIS. Further, in fiscal year 2015, NTC accomplished 134,226 of 372,992 total completed trace requests using OBRIS. OBRIS was developed in 2006 to assist NTC with maintaining the out-of-business FFL records that are received each year. By statute, when FFLs discontinue their businesses and there is no successor, the records required to be kept under the Gun Control Act of 1968, as amended, must be delivered within 30 days to the Attorney General. This includes all acquisition and disposition logbooks, firearms transactions records—such as Form 4473 that contains purchaser information—and other required records. NTC receives an average of about 1.9 million out-of-business records per month, of which a large percentage are paper-based. Since 2006, when paper records are received from an FFL that has gone out of business, NTC scans them as TIFF image files and stores them in OBRIS. By design, the files are stored as images (with no optical character recognition) so that they cannot be searched using text queries. In addition, ATF sometimes receives electronic FFL out-of-business records in the form of external removable drives and hard drives. In these cases, ATF converts the data to a nonsearchable format consistent with OBRIS records. During processing of OBRIS records, NTC conducts a quality-assurance process, including document sorting, scanning, and error checks on 100 percent of the records received. Officials stated that the imaged records are maintained indefinitely in OBRIS. For more information on OBRIS, see appendix III.
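The design constraint described above, images retrievable only through an FFL-number index and never through a text query, can be sketched as follows. The index layout and names are assumptions for illustration only, not ATF's implementation.

```python
# Sketch of the OBRIS retrieval constraint described above. The index
# layout and names are illustrative assumptions, not ATF's implementation.

from dataclasses import dataclass


@dataclass
class PageImage:
    """One scanned TIFF page; its contents are not text-searchable."""
    pixels: bytes


# The only automated index: FFL number -> chronologically sorted pages.
# There is deliberately no index by purchaser name or serial number.
OBRIS_INDEX = {}


def retrieve_pages(ffl_number):
    """Pull an FFL's page images; finding the right firearm within them
    is a manual, page-by-page review by an analyst."""
    return OBRIS_INDEX.get(ffl_number, [])


OBRIS_INDEX["1-23-45678"] = [PageImage(b"jan-2006"), PageImage(b"feb-2006")]
```

The point of the sketch is what is absent: because only the FFL-number index exists, any search for a particular purchaser or serial number must be done by a human reading the retrieved images.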
ATF implemented A2K in 1995 at the request of firearms industry members to allow manufacturer, importer, and wholesaler FFLs to more efficiently respond to requests from NTC for firearms traces. By statute, FFLs are required to respond within 24 hours to a firearms trace—a request from ATF for firearms disposition information—needed for a criminal investigation. Normally, when an NTC analyst contacts an FFL in the chain of distribution during a trace, the analyst contacts the FFL by phone, fax, or e-mail. ATF officials reported that this can be burdensome if the FFL receives a large number of trace requests, and that such requests can number more than 100 per day. With A2K—a voluntary program—the participating industry member uploads electronic firearms disposition records (i.e., information on the FFL or, in rare cases, the individual to whom the firearm was sold) onto a server that ATF owns and maintains, but is located at the site of the industry member. A2K provides a secure user web interface to this server, through which authorized NTC personnel can search—by firearm serial number only—to obtain disposition data for a firearm during a trace. According to the A2K memorandum of understanding with industry members, each participating industry member maintains ownership over its data. Further, NTC access to A2K’s search function is limited to analysts conducting traces for each particular industry member. NTC analysts access A2K using a different URL and login information for each participating industry member, and can only retrieve the disposition data for the particular firearm they are tracing. Participation in A2K is voluntary and, according to ATF officials and the three industry members we spoke with, can reduce an industry member’s costs associated with responding to firearms trace requests. According to ATF officials, as of April 25, 2016, there are 35 industry members using A2K, which account for 66 manufacturer, importer, and wholesaler FFLs. 
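A minimal sketch of the A2K access model described above, assuming hypothetical member names and data: each participating industry member loads its own server, and the only query the interface exposes is an exact serial-number match against one member's server at a time.

```python
# Sketch of the A2K access model described above; member names and data
# are hypothetical.

class A2KServer:
    """One industry member's server; the member owns and loads the data."""

    def __init__(self, member):
        self.member = member
        self._dispositions = {}  # serial number -> disposition info

    def upload(self, serial, disposition):
        # Uploads happen on the member's side, at the member's premises.
        self._dispositions[serial] = disposition

    def lookup(self, serial):
        # The only query ATF's interface exposes: an exact serial-number
        # match. No browsing, wildcards, or purchaser-name searches.
        return self._dispositions.get(serial)


# Each member is reached through its own URL and login; there is no
# cross-member index that could be searched in a single pass.
servers = {"Acme Arms": A2KServer("Acme Arms")}
servers["Acme Arms"].upload("SN-1001", "shipped to Midwest Distributing")
```

A successful lookup returns the same disaggregated disposition information ATF would otherwise obtain by phone, fax, or e-mail; a miss returns nothing.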
All three of the participating industry members we spoke with agreed that A2K has been beneficial since it reduces the industry member resources necessary to respond to trace requests. A2K also benefits NTC by providing immediate access to industry member data at all times, thereby allowing tracing operations to continue outside of normal business hours, which can be crucial for urgent trace requests. According to ATF data, as of March 17, 2016, there were 290,256,532 firearms in A2K. Further, in fiscal year 2015, NTC accomplished 130,982 of 372,992 total completed trace requests using A2K. Established in 1991, FRNP (formerly known as the Suspect Gun Program) provides a criminal investigative service to ATF agents by maintaining a database of firearms that have not yet been recovered by law enforcement, but are suspected to be involved in criminal activity. An ATF agent submits firearms information to FRNP, in connection with a specific ATF criminal investigation, to flag a particular firearm so that in the event that it is recovered and traced at some future time, the requesting agent will be notified. A request to enter a firearm into FRNP could start with an ATF agent recovering another firearm during an undercover investigation of illegal sales from a firearms trafficker. By searching eTrace, the agent may discover that the recovered firearm was part of a multiple sale with three other firearms. The ATF agent then may request that the other three firearms be entered into FRNP because they are associated with the firearm the agent recovered and, therefore, are likely to also be trafficked. ATF officials stated that, in this hypothetical case, it is likely that those three firearms, if recovered and traced in the future, would support a potential firearms trafficking case. 
If the firearms are in FRNP, if and when they are recovered and traced, NTC would notify the requesting agent, who could then contact the agency that recovered and traced the firearms to coordinate building such a case. To enter a firearm into FRNP, an ATF agent submits ATF Form 3317.1 (see app. IV) to NTC. According to ATF, no other law enforcement agencies may submit firearms to FRNP or view information in the system; only ATF agents and NTC staff have access. When a firearm is recovered in a crime and is traced, NTC conducts an automated check to determine whether the firearm description in the trace request matches a firearm description in FRNP. If so, an analyst will validate that the entries match. If they do, NTC generally notifies the ATF agent who submitted the firearm for inclusion in FRNP that the firearm has been recovered and traced. Then, the analyst completes the trace and sends the results to the requester of the trace. Occasionally, in submitting the firearm to FRNP, the agent directs NTC to not complete the trace on the firearm in the event that the firearm is recovered and traced (i.e., not provide the trace results to the law enforcement agency who requested the trace). For example, an agent might want to prevent trace information from being released to protect an undercover operation or other investigation. According to ATF data, as of May 3, 2016, there were 174,928 firearms and the names of 8,705 unique persons (e.g., criminal suspects, firearms purchasers, associates) in FRNP, making up 41,964 total FRNP records. Further, in fiscal year 2015, NTC accomplished 110 of 372,992 total completed trace requests using FRNP. Also, according to ATF data, as of May 5, 2016, there were 23,227 firearms in FRNP that had been linked to a firearms trace. Once the ATF investigation that led to the FRNP firearms submission has been closed, any FRNP entries associated with that investigation are to be labeled as “inactive” in FRNP. 
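The match-and-notify logic described above can be sketched as follows. The data model, entry names, and return convention are illustrative assumptions, not ATF's schema.

```python
# Sketch of the FRNP check described above; the data model is an
# illustrative assumption, not ATF's schema.

from dataclasses import dataclass


@dataclass
class FRNPEntry:
    serial: str
    agent: str
    suppress_trace: bool = False  # agent may withhold results from the requester
    active: bool = True           # inactive once the investigation closes


def check_frnp(entries, serial):
    """Return (agent_to_notify, complete_trace) for a traced serial."""
    for e in entries:
        if e.serial == serial:
            if not e.active:
                # Inactive entries still aid the trace, but trigger no
                # notification to the submitting agent.
                return None, True
            return e.agent, not e.suppress_trace
    return None, True


ENTRIES = [
    FRNPEntry("SN-1", "Agent A"),
    FRNPEntry("SN-2", "Agent B", suppress_trace=True),
    FRNPEntry("SN-3", "Agent C", active=False),
]
```

In this sketch, a hit on an active entry notifies the submitting agent; a hit flagged for suppression withholds the trace results from the requester; and a hit on an inactive entry completes the trace without any notification.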
Information from inactive records is used to assist with the tracing process, but when a trace hits on an inactive FRNP record, NTC does not notify the ATF agent who submitted the firearm since the associated investigation is closed and the information would no longer be useful to the agent. According to our review of all FRNP records, as of July 2015, about 16 percent of the 41,625 records were designated “active” and about 84 percent were designated “inactive.” Inactive records remain in the system for tracing purposes. The original submission form is also preserved as a digital image. MS was developed in 1995 to collect and track reports of the purchase by one individual of two or more pistols or revolvers, or both, at one time or during any 5 consecutive business days. FFLs are required by statute to report these sales to ATF. The multiple sales reports are completed by FFLs, submitted to NTC using ATF form 3310.4 (see app. V), and entered into MS. According to ATF, these reports, when cross-referenced with firearms trace information, serve as an important indicator in the detection of potential firearms trafficking. They can also allow successful tracing of older firearms that have reentered the retail market. MS also maintains the information from Demand Letter 3 reports. In 2011, ATF issued Demand Letter 3 to dealer and pawnbroker FFLs located in Arizona, California, New Mexico and Texas. The letter requires these FFLs to prepare reports of the purchase or disposition of two or more semiautomatic rifles capable of accepting a detachable magazine and with a caliber greater than .22, at one time or during any 5 consecutive business days, to a person who is not an FFL. According to ATF, this information is intended to assist ATF in its efforts in investigating and combatting the illegal movement of firearms along and across the southwest border. Demand Letter 3 reports are completed by FFLs, submitted to NTC using ATF form 3310.12 (see app. 
VI), and entered into MS. According to ATF officials and our observations, Demand Letter 3 and multiple sales reports are managed identically within MS. During a firearms trace, MS is automatically checked for a match with the firearm serial number. If a match is found, the trace time can be substantially shortened since the retail FFL and purchaser name to complete the trace are contained within the MS record. According to ATF data, as of May 3, 2016, there were 8,950,209 firearms in MS, making up 3,848,623 total MS records. Further, in fiscal year 2015, NTC accomplished 15,164 of 372,992 total completed trace requests using MS. In November 1995, ATF implemented a policy to computerize multiple sales reports at NTC, which now also applies to Demand Letter 3 reports. The original multiple sales or Demand Letter 3 paper report received from the FFL is scanned in a nonsearchable, TIFF image format and tagged with the MS transaction number. The TIFF file is then stored in an image-only repository, and is retained indefinitely. However, as part of the computerization policy, ATF included a requirement for deleting firearms purchaser names from MS 2 years after the date of sale if such firearms are not connected to a trace. ATF preserves the remainder of the data, such as the firearm description, for the purpose of supporting investigations. In contrast, if an MS record is connected to a firearms trace, then ATF preserves the entire record, including purchaser information, in the system. MS reports are available to any ATF staff that has access to eTrace but not to outside law enforcement agencies with eTrace access. However, after the purchaser name in an MS record has been deleted in accordance with the 2-year deletion policy, only NTC officials have access to this information in the digital image of the original multiple sales or Demand Letter 3 reports. If an ATF agent needs to see the deleted information, the agent must contact NTC.
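The 2-year deletion policy described above amounts to a simple rule: purchaser names are removed from MS records 2 years after the date of sale unless the record is connected to a trace, while the remaining firearm data are preserved. A sketch, using an illustrative record layout rather than ATF's schema:

```python
# Sketch of the 2-year deletion policy described above; record fields
# and data are illustrative assumptions, not ATF's schema.

from datetime import date, timedelta

RETENTION = timedelta(days=2 * 365)  # purchaser names kept for 2 years


def purge_purchaser_names(records, today):
    """Delete purchaser names from records older than 2 years unless the
    record is connected to a trace; all other fields are preserved."""
    for rec in records:
        expired = today - rec["sale_date"] > RETENTION
        if expired and not rec["linked_to_trace"]:
            rec["purchaser_name"] = None  # name deleted; firearm data kept


RECORDS = [
    {"sale_date": date(2013, 1, 10), "purchaser_name": "J. Doe",
     "linked_to_trace": False, "firearm": "pistol SN-1"},
    {"sale_date": date(2013, 1, 10), "purchaser_name": "R. Roe",
     "linked_to_trace": True, "firearm": "pistol SN-2"},
    {"sale_date": date(2015, 6, 1), "purchaser_name": "A. Poe",
     "linked_to_trace": False, "firearm": "pistol SN-3"},
]
purge_purchaser_names(RECORDS, today=date(2015, 7, 1))
```

After the purge, only the old, untraced record loses its purchaser name; the trace-linked record and the recent record keep theirs, mirroring the policy's exceptions.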
Of the four data systems we reviewed, two systems were in full compliance with the appropriations act restriction. The other two data systems did not always comply with the restriction, although ATF addressed the compliance issues during the course of our review. In addition, three data systems could better adhere to ATF policies. Specifically:
OBRIS complies with the appropriations act restriction and adheres to ATF policies.
A2K for in-business industry members’ records complies with the appropriations act restriction, but ATF’s collection and maintenance of out-of-business records in A2K on a server at NTC violated the appropriations act restriction. ATF deleted the records from the server in March 2016. In addition, industry members may benefit from clearer ATF guidance to ensure that they are submitting out-of-business records as required.
FRNP generally complies with the appropriations act restriction. However, a regional program using FRNP from 2007 through 2009 did not comply with the restriction, and ATF removed the data it collected through this program from FRNP in March 2016. Further, FRNP generally adheres to ATF policies, but a technical defect allows ATF agents to view and print FRNP data beyond what ATF policy permits.
MS complies with the appropriations act restriction, but ATF continues to inconsistently adhere to its own policy when deleting these records.
For a more detailed legal analysis of compliance with the appropriations act restriction, see appendix VII. We previously considered ATF’s compliance with the restriction on using appropriated funds for consolidation or centralization in connection with ATF’s Microfilm Retrieval System and MS in 1996. In that report, we stated that the appropriations act restriction did not preclude all information practices and data systems that involved an element of consolidation or centralization.
We interpreted the restriction in light of its purpose and in the context of other statutory provisions governing ATF’s acquisition and use of information on firearms. We found that the two systems complied with the appropriations act restriction on the grounds that ATF’s consolidation of records in these systems was incident to carrying out specific responsibilities set forth in the Gun Control Act of 1968, as amended, and that the systems did not aggregate data on firearms transactions in a manner that went beyond these purposes. We are employing a similar analytical approach to the systems under review here: we consider whether ATF’s aggregation of records in each system serves a statutory purpose, and how it relates to that purpose. OBRIS complies with the appropriations act restriction and adheres to policies designed to help ensure that the system is in compliance with the restriction. FFLs are specifically required to submit records to ATF when going out of business, and the system limits the accessibility of key firearms records information, such as retail purchaser data. As we reported in 1996, ATF first issued regulations in 1968 requiring FFLs that permanently go out of business to deliver their firearms transaction records to the federal government within 30 days. This provided a means of accessing the records for firearms tracing purposes after an FFL went out of business. The legislative history related to ATF’s fiscal year 1979 appropriation did not provide any indication that Congress intended a change in ATF’s existing practice. In 1986, the Firearms Owners’ Protection Act (FOPA) codified this regulatory reporting requirement, affirming ATF’s authority to collect this information. In 1996, we also reported that the predecessor to OBRIS—the Microfilm Retrieval System—as designed, complied with the statutory data restrictions and that ATF operated the system consistently with its design. 
We found that the Microfilm Retrieval System included in a computerized index the information necessary to assist ATF in completing a firearms trace, and did not aggregate information in a manner beyond that necessary to implement the Gun Control Act. Notably, ATF’s system of microfilmed records did not capture and store certain key information, such as firearms purchaser information, in a searchable format. In response to logistical challenges and technological advances, ATF developed OBRIS in 2006 as the repository to maintain digital images of out-of-business FFL records. ATF transitioned from using microfilm images of records to scanning records into OBRIS as digital images not searchable through character recognition, consistent with ATF’s design and use of its prior Microfilm Retrieval System. It is our view that, like its microfilm predecessor system, OBRIS also complies with the appropriations act restriction because OBRIS’s statutory basis and accessibility are essentially the same as the prior system. As with the prior system, OBRIS generally allows users to identify potentially relevant individual records through manual review by searching an index using an FFL number. Other information, specifically firearms purchaser information, remains stored in nonsearchable images, and is not accessible to ATF through a text search. In OBRIS, ATF put data processing policies in place to maintain records in compliance with the appropriations act restriction. Specifically, when an FFL going out of business sends records to NTC, according to ATF policy and verified by our observations, NTC personnel follow policies to sort and scan the records in OBRIS in a manner that maintains the nonsearchability of the records. For example, NTC personnel spend extra time indexing the images by FFL number, and chronologically sorting FFL records, typically by month and by year. 
When tracing a firearm, according to ATF policy and verified by our observations, NTC personnel generally identify a group of FFL records through the FFL number index, then manually search the dates of the FFL records to narrow in on a group of records that might contain the firearm being traced. NTC personnel then manually skim through each record in this group until they identify the relevant firearm information. According to NTC officials, NTC staff sometimes search thousands of pages of records to find the record that matches the trace request. This policy for a manual process to maintain and use records in OBRIS helps to ensure its compliance with the appropriations act restriction. For more details on OBRIS’s data processing policies, see appendix III. ATF maintains A2K for in-business industry members who store their own A2K data and maintained A2K for certain records of out-of-business industry members at NTC. ATF’s collection and maintenance of the records of out-of-business A2K industry members at NTC violated the appropriations act restriction on consolidation or centralization of firearms records. However, ATF officials transferred the records to OBRIS, and in March 2016 removed these records from A2K. In addition, industry members would benefit from clearer A2K guidance from ATF to ensure that they are submitting required out-of-business records. A2K for firearms records of in-business industry members complies with the appropriations act restriction on consolidation and centralization based on A2K’s statutory foundation and its features. ATF believes, and we agree, that A2K for in-business records appropriately balances the restriction on consolidating and centralizing firearms records with ATF’s need to access firearms information in support of its mission to enforce the Gun Control Act of 1968, as amended. 
Federal law requires FFLs to provide firearms disposition information to ATF within 24 hours in response to a trace request in the course of a criminal investigation. ATF officials told us that they developed A2K in response to industry member requests for an automated option for responding to trace requests. Prior to A2K, FFLs could only respond to trace requests by having dedicated personnel research firearms disposition information and then submit that information to ATF by phone, fax, or e-mail. In contrast, A2K provides industry members—who voluntarily participate in A2K—with servers to facilitate automated electronic responses to ATF trace requests. Under A2K, industry members upload their electronic firearms disposition information onto the servers located at their premises on a regular basis. Industry members—not ATF—retain possession and control of their disposition records and, according to ATF officials, they may withdraw from A2K and remove their records from the servers at any time. A2K includes a secure user web interface to each of the servers, and ATF may only obtain A2K disposition information by searching individual industry member servers by exact firearm serial number. Through this search, ATF obtains the same information from each industry member as it would otherwise obtain by phone, fax, or e-mail, and in similar disaggregated form.

Beginning in 2000, ATF maintained A2K disposition data from out-of-business industry members on a single partitioned server within NTC, and removed the records from the server in March 2016. ATF’s maintenance of the disposition records in this manner violated the appropriations act restriction on consolidation or centralization. This arrangement was not supported by any specific authority. As described earlier, A2K was designed as an alternative for FFLs to meet the requirement to respond promptly to ATF trace requests, which does not apply to FFLs once they go out of business.
Another statutory provision requires FFLs to submit firearms records to ATF when they go out of business, and ATF has designed a separate system for this purpose—OBRIS—as described earlier. A2K for out-of-business records functioned differently than OBRIS and went beyond the limited consolidation of out-of-business records in that system that is incident to ATF’s specific responsibilities under the Gun Control Act. As discussed earlier, out-of-business records are maintained as nonsearchable digital images in OBRIS to comply with the appropriations act restriction, while at the same time allowing ATF to perform its tracing function. ATF completed traces using A2K disposition data from out-of-business industry members through the same type of secure user web interface as used while the industry members were in business. According to ATF, this was more efficient than relying on OBRIS to complete firearms traces. Our observations of A2K out-of-business searches in August 2015 confirmed ATF officials’ statements that these records were accessed in the same way as in-business records. Records were only retrievable by exact serial number search, in accordance with ATF policy. However, according to ATF officials, it would have been technically possible for ATF to reconfigure the server to allow the records to be queried by any field, including fields with retail purchaser information. ATF agreed with our assessment that treating disposition information from industry members that go out of business in the same manner as disposition information from in-business industry members would violate the appropriations act restriction. After we raised concerns about A2K out-of-business records on the server at NTC, ATF told us that it had begun transferring the out-of-business A2K records from the server into OBRIS as digital images. ATF permanently deleted the records from the out-of-business A2K server in March 2016.
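The A2K query model described above, in which industry members hold their own disposition records and ATF may search each member’s server only by exact serial number, can be sketched as follows. `MemberServer` and its methods are hypothetical names invented for this illustration, not ATF’s actual implementation:

```python
class MemberServer:
    """Hypothetical sketch of an A2K server located at an industry
    member's premises. The member, not ATF, controls the records and
    may withdraw them at any time."""

    def __init__(self):
        self._dispositions = {}  # serial number -> disposition record

    def upload(self, serial: str, record: dict):
        """The member uploads its own disposition data on a regular basis."""
        self._dispositions[serial] = record

    def trace_query(self, serial: str):
        """ATF's only access path: an exact-match serial number lookup,
        returning the same disaggregated answer a phone or fax response
        would provide. No wildcard or name-based search exists."""
        return self._dispositions.get(serial)

    def withdraw_all(self):
        """The member may remove its records from the server at any time."""
        self._dispositions.clear()

acme = MemberServer()
acme.upload("AB1234", {"disposed_to_ffl": "1-23-456", "date": "2015-08-01"})
print(acme.trace_query("AB1234"))  # the single matching record
print(acme.trace_query("AB12"))    # None: a partial serial finds nothing
```

The compliance argument the report makes turns on exactly this shape: records stay decentralized on member-controlled servers, and each query yields one disaggregated answer rather than any aggregation of records.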
In addition, ATF could provide clearer guidance to ensure that industry members submit out-of-business records in accordance with the Gun Control Act of 1968, as amended. These industry members and their corresponding FFLs are required to provide transaction forms, acquisition records, and disposition records to ATF within 30 days of going out of business. However, it is unclear how these requirements apply to industry members’ A2K disposition data. A2K agreements specifically state that the A2K data belong to the industry member. Conversely, ATF requires that the ATF-owned A2K equipment—the hardware and software on which the data were housed at the industry member’s location—be returned when industry members go out of business. The A2K memorandums of understanding and ATF guidance to industry members do not specify whether industry members may retain the backup disk or how A2K data may be used to meet the out-of-business record submission requirements to ATF, if at all. All eight industry members that have gone out of business have provided their backup disks with data to ATF. According to ATF, six industry members separately provided their acquisition and disposition information, while the other two industry members, which were licensed importers, only provided invoices. According to ATF officials, discussions with these industry members did not address whether the industry member could keep the backup disk where the data are stored or whether submitting the backup disk to ATF would fulfill part of the industry member’s submission requirement. Further, the three industry members we spoke with corroborated that ATF lacks guidance on how industry members are to submit out-of-business A2K data in accordance with the Gun Control Act, as amended.
Federal internal control standards require that agencies communicate necessary quality information with external parties to achieve agency objectives, which includes providing industry members with record submission guidance so that ATF has the necessary records for firearms tracing. According to ATF officials, ATF has not provided guidance to A2K industry members on how to submit out-of-business records because industry members already have the standard requirements that apply to all FFLs, and industry members have not asked for guidance specific to A2K. Industry members that we spoke to had not contemplated the process for providing A2K equipment and records to ATF because they did not anticipate going out of business. However, if ATF does not have all required out-of-business records, the agency may not be able to locate the first purchaser of a firearm during a trace, and thus may not be able to fulfill part of its mission. ATF officials agreed that providing such guidance—for example, in the A2K memorandum of understanding between an industry member and A2K—would be helpful to industry members to ensure that records are submitted to ATF as required. Industry members could benefit from clear ATF guidance on, for example, whether they are required to submit their A2K records in electronic format; whether they are allowed to only submit hard copy records; or what to do if one part of the company goes out of business, but A2K continues at the industry member’s remaining FFLs. Such ATF guidance could clarify how industry members may submit A2K data to fulfill a portion of Gun Control Act requirements.

FRNP generally complies with the appropriations act restriction and generally adheres to ATF policies that help ensure such compliance. However, a regional ATF program using FRNP from 2007 through 2009 was not in compliance with the appropriations act restriction. ATF deleted the data it collected through this program from FRNP in March 2016.
In addition, a technical defect in one of ATF’s key data systems allows ATF agents to access FRNP records in a manner that is inconsistent with ATF policy. ATF gathers and combines specific firearms transaction data to a limited degree in FRNP in order to implement its statutory responsibilities related to firearms criminal enforcement and, in this respect, the system complies with the appropriations act restriction. By statute, ATF is responsible for enforcing the federal statutes regarding firearms, including those related to the illegal possession, use, transfer, or trafficking of firearms. FRNP was established to provide an investigative service to ATF agents by maintaining a database of firearms suspected of being involved in criminal activity and associated with an ATF criminal investigation. As discussed earlier, the appropriations act restriction does not preclude all information practices and data systems that involve an element of “consolidating or centralizing” FFL records. As designed, the aggregation of firearms transaction records in FRNP is incident to carrying out specific ATF criminal enforcement responsibilities and is limited to that purpose. Therefore, FRNP—when used for the purpose as a database of firearms suspected of being involved in criminal activity and associated with an ATF criminal investigation—complies with the appropriations act restriction. Moreover, based on our analysis of FRNP records, virtually all records in FRNP are associated with an ATF criminal investigation, and thus are related to ATF’s statutory responsibilities. ATF policies for the implementation of FRNP support the conclusion that it complies with the appropriations act restriction, when operated as designed. ATF policies specify that ATF agents may submit a firearm for entry into FRNP if the firearm is associated with an active, nongeneral ATF criminal investigation and meets certain submission criteria. 
ATF agents must use a designated submission form when requesting that firearms information be entered in the FRNP system, which, among other things, contains a field for the agent to include an active, nongeneral investigation number. The form also contains a field to indicate the additional, specific submission criteria for the firearm, which align with ATF’s statutory responsibility of enforcing criminal statutes related to the illegal possession, use, transfer, or trafficking of firearms. These criteria include: (1) Large quantities of firearms purchased by an individual; (2) Firearms suspected in trafficking, but not stolen from an FFL dealer; (3) FFL dealers suspected of performing firearms transactions without proper documentation; (4) Firearms purchased by suspected straw purchasers; and (5) Other—a category that the submitting agent is to explain on the form. According to NTC procedures, and verified by our observations, upon receiving an FRNP submission form, an NTC analyst reviews the form for completeness and conducts several validation and verification steps. For example, the analyst uses ATF’s case-management system to verify that the investigation number on the FRNP submission form is active and that at least one criterion was selected on the submission form. Once the validation and verification checks are complete, the NTC analyst either enters the firearms information into FRNP or contacts the requesting ATF agent if information is missing or not in alignment with the criteria required for FRNP submission. During our review of selected fields for all 41,625 FRNP records, and a generalizable sample of records and submission forms, we found that for the vast majority of firearms entered, ATF abided by its policy for entries to be associated with an active investigation.
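The validation and verification steps described above can be sketched as a simple check. The field names, criteria labels, and sample investigation number below are illustrative assumptions, not the actual FRNP form fields:

```python
# Illustrative labels corresponding to the five submission criteria
# listed above (names are invented for this sketch).
CRITERIA = {
    "large_quantity_purchase",
    "trafficking_not_stolen",
    "ffl_undocumented_transactions",
    "straw_purchase",
    "other",
}

def validate_submission(form: dict, active_investigations: set) -> list:
    """Return a list of problems; an empty list means the form can be
    entered into FRNP, otherwise the analyst contacts the agent."""
    problems = []
    # Verify the investigation number against the set of active,
    # nongeneral investigations (per the case-management check above).
    if form.get("investigation_number") not in active_investigations:
        problems.append("investigation number missing or not active")
    # Verify that at least one submission criterion was selected.
    selected = set(form.get("criteria", [])) & CRITERIA
    if not selected:
        problems.append("no submission criterion selected")
    # The "Other" category requires an explanation from the agent.
    if "other" in selected and not form.get("other_explanation"):
        problems.append("'Other' selected without an explanation")
    return problems

active = {"INV-2015-0001"}  # hypothetical active investigation numbers
form = {"investigation_number": "INV-2015-0001",
        "criteria": ["straw_purchase"]}
print(validate_submission(form, active))  # [] -> ready for FRNP entry
```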
Out of the entire population of 41,625 records reviewed, less than 1/10 of 1 percent of records were not associated at all with an investigation number and, according to ATF officials, were likely data-entry errors or records entered for testing or training purposes. Moreover, based on our sample review, an estimated 96 percent of FRNP records were entered while the related criminal investigation was open. ATF officials stated that most of the remaining records—entered before the related investigation was open or after it was closed—were the result of data-entry errors or of investigation numbers being reopened at a later date. Additional, specific submission criteria have been required to be noted on the FRNP submission form since November 2004. Based on our sample review, an estimated 97 percent of FRNP submission forms from November 2004 through July 2015 included the selection of at least one criterion. For an estimated 13 percent of these—or 23 submission forms in our sample—the “Other” criterion was selected, and all but 2 of these had an explanation for why the firearms were entered in FRNP. For example, in 1 submission form that contained an explanation for “Other,” business owners were suspected of selling firearms without a license. ATF officials could not definitively state why an estimated 3 percent of submissions from November 2004 through July 2015 did not contain a criterion selection. Officials speculated, for example, that an NTC analyst may have obtained the criterion selection from the requesting agent by phone or e-mail and may not have noted his or her conversation in the FRNP file. However, officials acknowledged that the criterion selection is an important quality control that allows ATF to audit records related to an investigation if necessary. ATF officials told us that only names associated with the criminal investigation are entered in the FRNP system.
These names are generally limited to suspects and purchasers, but ATF officials acknowledged that the names of victims or witnesses may be included in the system if they are associated with the criminal investigation, though this does not happen routinely. Based on our observations of FRNP entry procedures, an NTC analyst verifies that any names on the submission form match the names listed in the case-management system for that particular investigation, prior to entering the information in the FRNP system.

An ATF regional program conducted from 2007 through 2009 to enter firearms into FRNP—the Southwest Border Secondary Market Weapons of Choice (SWBWOC) Program—did not comply with the appropriations act restriction on consolidating or centralizing FFLs’ firearms records, because the individual firearms were not suspected of being involved in criminal activity associated with an ATF criminal investigation. During the course of our review, ATF reported that it planned to delete the related data from FRNP, and ATF did so in March 2016. According to ATF officials, the SWBWOC Program was in place in ATF’s four southwest border field divisions in order to more effectively identify—during a trace—the purchasers of used firearms trafficked to Mexico. The program was implemented during routine regulatory inspections of FFLs in the region who were engaged primarily in the sale of used firearms—generally pawnbrokers. According to ATF, used firearms sales, referred to as “secondary market” sales, played a significant role in firearms trafficking to Mexico, particularly certain firearms most sought by the Mexican drug cartels, referred to as “weapons of choice.” According to ATF officials, this program was developed to record certain firearms in an effort to enhance ATF’s ability to trace those firearms to a retail purchaser in the event of crime-related recoveries of the firearms.
As part of the program, during regulatory inspections, ATF investigators were to record any specified weapons of choice that were found in the FFLs’ inventory or sold or disposed of by the FFLs within the inspection period. According to ATF officials, the information recorded was limited to the serial number and description of the firearm, and was not to include any purchaser information. The firearms information was then submitted to FRNP for all of the used firearms identified during the inspection. If the firearm was subsequently recovered by law enforcement and submitted for a trace, NTC’s automatic checks on the firearm description would result in a match in the FRNP system. ATF would then be able to more quickly identify the FFL pawn shop that previously had the firearm in its inventory. According to ATF officials and documentation, the program was cancelled on October 2, 2009, following ATF’s legal review of the process by which the firearms information entered during the program was recorded and submitted to FRNP. ATF’s legal review determined that the program was not consistent with the appropriations act restriction on consolidation or centralization. According to ATF officials, the program was not reviewed by the ATF Chief Counsel’s office prior to its initiation in June 2007. They stated that the program’s existence was the result of incomplete communication by ATF executives responsible for industry operations programs with ATF’s Chief Counsel prior to the implementation of the program. Upon learning of the program, ATF Counsel determined that FFL information on a firearm, in and of itself—even when unaccompanied by purchaser information—is not permitted to be collected and consolidated without a specific basis in statute or regulation, or a direct nexus to a law enforcement purpose, such as a criminal investigation. 
The ATF Chief Counsel’s office advised that the program be immediately terminated and, in October 2009, the program was cancelled and the firearms information already entered into FRNP during the program was marked as “Inactive.” We concur with ATF’s assessment that the inclusion of firearms information from the program in FRNP did not comply with the appropriations act restriction. It is our view that information obtained from an FFL about a firearm in and of itself, and unaccompanied by purchaser information, is not permitted to be collected and consolidated within ATF without a specific basis in statute. As a result of our review, ATF officials deleted the records for the affected data from FRNP—855 records relating to 11,693 firearms—in March 2016.

A technical defect in eTrace 4.0 allows ATF agents to view and print FRNP data beyond what ATF policy permits. These data include purchaser names and suspect names in a summary format called a Suspect Gun Summary Report. Any ATF agent with eTrace access can view or print these reports, including up to 500 FRNP records at one time. According to ATF officials, the eTrace defect occurred when the contractor developing eTrace 4.0 included a global print function for Suspect Gun Summary Reports—which can contain retail purchaser information—that was accessible from the search results screen. In December 2008, prior to the release of eTrace 4.0 in 2009, ATF provided the contractor with a list of the new system’s technical issues, including this FRNP printing defect. ATF officials explained that because all ATF eTrace users had the appropriate security clearances, and because there would not be a reason for ATF agents to access the Suspect Gun Summary Reports, the print issue was not considered a high-priority concern. However, ATF officials told us that no audit logs or access listings are available to determine how often ATF agents have accessed records containing purchaser information.
Therefore, ATF has no assurance that the purchaser information entered in FRNP and accessible through eTrace is not being improperly accessed. eTrace is available to federal, state, and local law enforcement entities that have entered into an eTrace memorandum of understanding with ATF. ATF agents have access to information in eTrace that is unavailable to state and local law enforcement entities, such as FRNP data. However, according to eTrace system documentation, ATF agents are to be limited in their access to FRNP records. Specifically, ATF agents should only be able to view the firearm description and the name and contact information of the ATF case agent associated with the investigation, and not purchaser information or FFL information. If an ATF agent wanted further information about the FRNP data, the agent should have to contact the case agent. ATF officials told us that ATF’s policy is intended to provide FRNP information to ATF agents on a “need-to-know” basis in order to protect the security of ATF investigations, and protect gun owner information. Moreover, federal internal control standards specify that control activities to limit user access to information technology include restricting authorized users to the applications or functions commensurate with assigned responsibilities. According to ATF officials, options are limited for resolving the global print function defect. ATF’s contract with the eTrace 4.0 developer has ended, and therefore ATF cannot contact the developer to fix the printing issue. ATF could have the issue resolved when a new version of eTrace, version 5.0, is released, but there is no timeline for the rollout of eTrace 5.0. ATF officials told us that, in the short term, one method to fix the printing issue would be to remove individuals’ names and identifying information from the FRNP system, so it is not available for Suspect Gun Summary Reports. 
The firearms information and case agent information would remain available to all ATF agents, and ATF officials indicated that they did not think that removing the identifying information would hamper ATF agents’ investigations. Developing and implementing short-term and long-term mechanisms to align the eTrace system capability with existing ATF policy to limit access to purchaser information for ATF agents could ensure that firearms purchaser information remains limited to those with a need to know.

MS complies with the appropriations act restriction; however, ATF lacks consistency among its MS deletion policy, system design, and policy implementation timing. Since we reported on MS in 1996, ATF has made minimal changes to the system itself, but the information contained in MS has changed with the inclusion of Demand Letter 3 reports, in addition to multiple sales reports.

Multiple sales reports. By statute, FFLs are required to provide to ATF a multiple sales report whenever the FFL sells or otherwise disposes of, within any 5 consecutive business days, two or more pistols or revolvers to an unlicensed person. The reports provide a means of monitoring and deterring illegal interstate commerce in pistols and revolvers by unlicensed persons. ATF’s maintenance of multiple sales reports in MS complies with the appropriations act restriction because of ATF’s statutory authority related to multiple sales reports, and the lack of significant changes to the maintenance of multiple sales reports in MS since we found it to be in compliance in 1996. As we reported in 1996, ATF operates MS with specific statutory authority to collect multiple sales reports. In 1975, under the authority of the Gun Control Act of 1968, ATF first issued regulations requiring FFLs to prepare multiple sales reports and submit those reports to ATF.
The legislative history related to ATF’s fiscal year 1979 appropriations act restriction did not provide any indication that Congress intended a change in ATF’s existing practice. In 1986, a provision of FOPA codified FFLs’ regulatory reporting requirement, affirming ATF’s authority to collect multiple sales reports. In addition, this provision required, among other things, FFLs to forward multiple sales reports to the office specified by ATF. Therefore, under this provision, ATF was given the statutory authority to specify that FFLs forward multiple sales reports to a central location. In our 1996 report, we examined MS and found that it did not violate the prohibition on the consolidation or centralization of firearms records because ATF’s collection and maintenance of records was incident to its specific statutory responsibility. As we noted at that time, multiple sales reports are retrievable by firearms and purchaser information, such as serial number and purchaser name. We did not identify any significant changes to the maintenance of the multiple sales reports since we last reported on ATF’s compliance with the statutory restriction that would support a different conclusion in connection with this review.

Demand Letter 3 reports. In 2011, in an effort to reduce gun trafficking from the United States to Mexico, ATF issued demand letters to FFLs classified as dealers or pawnbrokers in four southwest border states: Arizona, California, New Mexico, and Texas. The letter, referred to as Demand Letter 3, required these FFLs to submit a report to ATF on the sale or other disposition of two or more of a specific type of semiautomatic rifle, at one time or during any 5 consecutive business days, to an unlicensed person. Federal courts that have considered the issue have held that ATF’s collection of Demand Letter 3 reports is consistent with the appropriations act restriction.
It is our view that ATF’s maintenance of Demand Letter 3 reports in MS is consistent with the appropriations act restriction in light of the statutory basis for Demand Letter 3, the courts’ decisions, and the way in which the records are maintained. ATF has specific statutory authority to collect reports like Demand Letter 3 reports. As discussed, FFLs are required to maintain certain firearms records at their places of business. By statute, FFLs may be issued letters requiring them to provide their record information or any portion of information required to be maintained by the Gun Control Act of 1968, as amended, for periods and at times specified by the letter. Some FFLs have challenged the legality of Demand Letter 3 reports for a number of reasons, including that it did not comply with the appropriations act restriction. Federal courts that have considered the issue have upheld ATF’s use of Demand Letter 3 as consistent with the appropriations act restriction. In one case before the U.S. Court of Appeals for the Tenth Circuit, the FFL contended that the demand letter created a national firearms registry in violation of the restriction on consolidation or centralization. The Tenth Circuit stated that the plain meaning of “consolidating or centralizing” does not prohibit the mere collection of some limited information. The court went on to state that the July 2011 demand letter requested very specific information from a limited segment of FFLs. In addition, the court pointed out that Congress authorized the issuance of the letters in 1986, after passing the first appropriations act restriction, and Congress could not have intended to authorize the record collection in statute while simultaneously prohibiting it in ATF’s annual appropriations act. In other similar cases, the courts have also held that ATF had the authority to issue the demand letter and that ATF’s issuance of the demand letter complied with the appropriations act restriction. 
In addition, Demand Letter 3 reports are maintained in MS in an identical manner to multiple sales reports.

Although not required by statute, ATF policy requires that firearms purchaser names be deleted from MS 2 years after the date of the reports, if the firearm has not been connected to a firearms trace. However, ATF’s method to identify records for deletion is not comprehensive and, therefore, 10,041 names that should have been deleted remained in MS until May 2016. According to ATF officials, because of MS system design limitations, analysts must write complex queries to locate such names in MS. For example, because the information needed to identify the correct records can exist in free-form fields, the success of the queries in comprehensively identifying all appropriate records depends on consistent data entry of several text phrases throughout the history of the system. In addition, ATF’s queries have inconsistently aligned with its system design—for instance, as the system was modified and updated, the query text remained aligned with the outdated system—and therefore these queries resulted in incomplete identification of records to be deleted. Changes to MS to address system query limitations would require a system-wide database enhancement, but there is currently no operations and maintenance support contract in place for this system. Moreover, even if the system could ensure that deletions capture all required records, ATF has inconsistently adhered to the timetable of deletions required by its policy. For example, according to ATF’s deletion log and our verification of the log, some records entered in 1997 were not deleted until November 2009—about 10 years after the required 2 years. As shown in table 1 below, ATF’s timing for implementing deletions did not adhere to ATF policy directives.
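The fragility of free-form text queries that the report describes can be illustrated with a minimal sketch. The record fields and phrasing below are invented for illustration and are not MS’s actual schema:

```python
from datetime import date, timedelta

# Hypothetical records with inconsistently entered free-form text.
records = [
    {"purchaser": "J. Doe", "report_date": date(2013, 4, 1),
     "notes": "multiple sale report", "traced": False},
    {"purchaser": "R. Roe", "report_date": date(2013, 5, 1),
     "notes": "mult. sale rpt", "traced": False},  # different phrasing
]

TWO_YEARS = timedelta(days=730)

def due_by_free_text(records, today):
    """Brittle: depends on consistent entry of a text phrase, so it
    misses records phrased differently, as the report describes."""
    return [r for r in records
            if "multiple sale report" in r["notes"]
            and not r["traced"]
            and today - r["report_date"] > TWO_YEARS]

def due_by_structured_fields(records, today):
    """Comprehensive: the 2-year rule applied only to typed fields."""
    return [r for r in records
            if not r["traced"]
            and today - r["report_date"] > TWO_YEARS]

today = date(2016, 5, 1)
print(len(due_by_free_text(records, today)))          # 1: misses the second record
print(len(due_by_structured_fields(records, today)))  # 2: both are past 2 years
```

Aligning deletion queries with structured fields, rather than with text phrases whose entry varied over the system’s history, is the kind of system-design change the report’s discussion contemplates.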
Table 1 also shows that the ATF deletion policy for MS has changed over time, including variations in the frequency of deletions (e.g., annually, monthly, weekly) and pauses in deletions because of, according to ATF officials, litigation and requests from Congress. According to NTC officials, delayed deletions occurred because deleting a large number of records at once negatively affects the system, slowing system response time or stopping the larger related data system entirely. However, according to NTC’s deletion log and verified by our observations of NTC system queries, deletions were conducted in average increments of almost 100,000 records per day—representing on average a full year’s worth of records to be deleted. In addition, ATF confirmed that a single deletion of 290,942 records on one day in January 2011 did not affect the system. Therefore, system constraints do not seem to be the reason for the delayed deletions. ATF did not identify further causes for the delays in deletions. ATF reported that the objective for its deletion policy was primarily to delete data that may not be useful because of its age and to safeguard privacy concerns related to retaining firearms purchaser data.

Federal internal control standards require control activities to help ensure that management’s directives are carried out. Additionally, information systems and related control activities should be designed to achieve objectives and respond to risks. Specifically, an organization’s information system should be designed by considering the processes for which the information system will be used. For example, to alleviate the risk of not meeting the objectives established through the MS deletion policy, ATF must ensure that the policy is consistent with the design of the MS data system and that it meets the policy’s timeline requirements. In September 1996, we reported that ATF had not fully implemented its 2-year deletion requirement.
During the course of our 1996 review, ATF provided documentation that it had subsequently deleted the required records and that it would conduct weekly deletions in the future. Similarly, as a result of our current review, according to ATF documentation, in May 2016 the agency deleted the 10,041 records that should have been deleted earlier. However, given that this has been a 20-year issue, it is critical that ATF develop consistency between its deletion policy, the design of the MS system, and the timeliness with which deletions are carried out. By aligning the MS system design and the timeliness of deletion practices with its policy, ATF could ensure that it maintains only useful purchaser information while safeguarding the privacy of firearms purchasers.

ATF has an important role in combating the illegal use of firearms, and must balance this with protecting the privacy rights of law-abiding firearms owners. Of the four ATF firearms data systems we reviewed that contained firearms purchaser information, we found that certain aspects of two of these systems violated the appropriations act restriction on consolidating or centralizing FFL firearms records, but ATF resolved these issues during the course of our review. With regard to ATF policies on maintenance of firearms records, ATF should do more to ensure that these policies are followed and that they are clearly communicated. Specifically, providing guidance to industry members participating in A2K for how to submit their records when they go out of business would help ensure they submit required records to ATF. Without this clear guidance, ATF risks not being able to locate the first purchaser of a firearm during a trace, and thus may not be able to fulfill part of its mission. In addition, aligning eTrace system capability with ATF policy to limit access to firearms purchaser information in FRNP would ensure that such information is only provided to those with a need to know.
Finally, aligning the MS system design and the timeliness of deletion practices with the MS deletion policy would help ATF maintain only useful purchaser data and safeguard the privacy of firearms purchasers. In order to help ensure that ATF adheres to its policies and facilitates industry compliance with requirements, we recommend that the Deputy Director of ATF take the following three actions:

- provide guidance to FFLs participating in A2K for provision of out-of-business records to ATF, so that FFLs can better ensure that they are in compliance with statutory and regulatory requirements;
- develop and implement short-term and long-term mechanisms to align the eTrace system capability with existing ATF policy to limit access to FRNP purchaser information for ATF agents; and
- align the MS deletion policy, MS system design, and the timeliness of deletion practices to improve ATF's compliance with the policy.

We provided a draft of this report to ATF and DOJ on May 25, 2016, for review and comment. On June 16, 2016, ATF provided an email response, stating that the agency concurs with all three of our recommendations and is taking several actions to address them. ATF concurred with our recommendation that ATF provide guidance to FFLs participating in A2K for provision of out-of-business records to ATF. ATF stated that the agency is modifying its standard Memorandum of Understanding with A2K participants to incorporate specific guidance regarding the procedures to be followed when a participant goes out of business. ATF also stated that, as a condition of participation, all current and future A2K participants will be required to adopt the revised Memorandum of Understanding. The implementation of such guidance in the Memorandum of Understanding for A2K participants should meet the intent of our recommendation.
ATF concurred with our recommendation that ATF develop and implement mechanisms to align the eTrace system capability with existing ATF policy to limit access to FRNP purchaser information for ATF agents. ATF stated that, in the short term, the agency will delete all purchaser information associated with a firearm entered into FRNP, and will no longer enter any purchaser information into FRNP. ATF stated that, in the long term, the agency will modify the Firearms Tracing System to remove the purchaser information fields from the FRNP module, and will modify eTrace as necessary to reflect this change. These short- and long-term plans, if fully implemented, should meet the intent of our recommendation. ATF concurred with our recommendation that ATF align the MS deletion policy, MS system design, and the timeliness of deletion practices to improve ATF’s compliance with the policy. As we reported above, ATF stated that the agency deleted all purchaser names from MS that should have been deleted earlier. ATF also stated that the agency is implementing protocols to ensure that deleting purchaser names from MS aligns with ATF policy. If such protocols can be consistently implemented in future years, and address both the timeliness of deletions and the comprehensive identification of records for deletion, they should meet the intent of our recommendation. On June 22, 2016, DOJ requested additional time for its Justice Management Division to review our conclusions regarding ATF’s compliance with the appropriations act restriction and the Antideficiency Act. As noted earlier, we solicited ATF’s interpretation of the restriction on consolidation or centralization of records as applied to each of the systems under review by letter of December 21, 2015, consistent with our standard procedures for the preparation of legal opinions. ATF responded to our inquiry on January 27, 2016, and its views are reflected in the report. 
Nevertheless, DOJ stated that ATF and DOJ officials had not followed DOJ's own processes regarding potential violations of the Antideficiency Act, specifically promptly informing the Assistant Attorney General for Administration. As a result, DOJ requested additional time to review the appropriations law issues raised by the draft report. As explained in appendix VII, ATF's failure to comply with the prohibition on the consolidation or centralization of firearms records violated the Antideficiency Act, which requires the agency head to submit a report to the President, Congress, and the Comptroller General. The Office of Management and Budget (OMB) has published requirements for executive agencies for reporting Antideficiency Act violations in Circular A-11, and has advised executive agencies to report violations found by GAO. OMB has further advised that "[i]f the agency does not agree that a violation has occurred, the report to the President, Congress, and the Comptroller General will explain the agency's position." We believe that the process set forth by OMB affords DOJ the opportunity to consider and express its views. ATF also provided us written technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Deputy Director of ATF, the Attorney General of the United States, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Diana C. Maurer at (202) 512-9627 or maurerd@gao.gov, or Helen T. Desaulniers at (202) 512-4740 or desaulniersh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix VIII.

This report addresses the following objectives:

1. Identify the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) data systems that contain retail firearms purchaser data and describe the characteristics of selected systems.
2. Determine whether selected ATF data systems comply with the appropriations act restriction on consolidation or centralization of firearms records and ATF policies.

To calculate the estimated number of firearms in the United States in 2013, we used data from ATF's February 2000 report on Commerce in Firearms in the United States and ATF's 2015 Annual Statistical Update to this report. To calculate the approximate number of murders in which firearms were involved in 2014, we used data from the Federal Bureau of Investigation's Uniform Crime Reports from 2014. To address the first objective, we reviewed ATF policy and program documents to identify ATF data systems related to firearms. For the purposes of this report, "data systems" or "systems" refers to ATF's data systems and system components, including what ATF refers to as "modules" of a larger system, and what ATF refers to as "programs" whose associated data are contained within related systems. These policy and program documents included, among other things, ATF orders, system descriptions, system user manuals, system training materials, and data submission forms. We compared this information to the systems identified in our September 1996 report, and conducted searches of publicly available information to develop a comprehensive and current list of systems. In order to identify the systems and better understand them and their contents, we spoke with ATF officials in headquarters and at ATF's National Tracing Center (NTC).
We also discussed these systems with ATF investigative and regulatory officials in the Baltimore and Los Angeles field offices, who provided varying perspectives due to geographic factors. These actions enabled us to confirm a comprehensive list of systems, and determine the presence of retail purchaser information within these systems. We selected four systems for a more in-depth review: Out-of-Business Records Imaging System (OBRIS), Access 2000 (A2K), Firearm Recovery Notification Program (FRNP), and Multiple Sales (MS). Selected systems, at a minimum, contained retail purchaser information and contained original records—as opposed to systems that transmitted information, such as a system that only pulls data from another system in order to print a report or fill out a form. A system was more likely to be selected if (1) it contained data unrelated to a criminal investigation, (2) a large percentage of system records contained retail purchaser information, (3) the retail purchaser information was searchable, or (4) ATF initiated the system—as opposed to ATF being statutorily required to maintain the system. See table 2 for more details. For the selected systems, we reviewed ATF data on the number of system records, among other things—for OBRIS and A2K for fiscal year 2015, and for FRNP and MS from fiscal years 2010 through 2015. We assessed the reliability of these data by interviewing ATF staff responsible for managing the data and reviewing relevant documentation, and concluded that these data were sufficiently reliable for the purposes of our report. We reviewed ATF policy and program documents to obtain in-depth descriptions of these selected systems, and discussed these systems with ATF officials. We visited NTC to observe the selected systems in operation. 
To address the second objective, we reviewed relevant laws, including statutory data restrictions, and ATF policy and program documents relating to ATF’s firearms tracing operations and the selected systems. We also solicited the agency’s interpretation of the restriction on consolidation or centralization of records as applied to each of the systems, and interviewed ATF officials regarding the data systems’ compliance with that restriction and ATF policies. We visited NTC to observe how selected systems’ data are collected, used, and stored. For OBRIS, A2K, FRNP, and MS, we observed NTC analysts using the systems during firearms traces and observed the extent to which the systems are searchable for retail purchaser information. For OBRIS, FRNP, and MS, we observed NTC analysts receiving and entering data into the systems and processing the original data submissions—either electronically or through scanning and saving documents—including quality-control checks. For A2K, we reviewed budgetary information to determine the source of funding for the system for fiscal year 2008 through fiscal year 2014. We also interviewed representatives from the contractor that manages A2K, and 3 of 35 industry members that use A2K, to better understand how the system functions. We selected industry members that had several years of experience using A2K and reflected variation in federal firearms licensee (FFL) size and type. Although our interviews with these industry members are not generalizable, they provided us with insight on the firearms industry’s use of A2K. In order to evaluate the contents of FRNP for the presence of retail purchaser information and compliance with the appropriations act restriction and FRNP policies, we reviewed several fields of data for the entire population of records. During our site visit, we also reviewed additional fields of data for a generalizable sample of records and the associated submission forms that are used to populate the records. 
For this sample, we compared selected data in the system to information on the forms, and collected information from the forms. We drew a stratified random probability sample of 434 records from a total population of 41,625 FRNP records entered from June 1991 through July 2015. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. We stratified the population by active/inactive record status and new/old (based on a cutoff of Nov. 1, 2004). Each sample element was subsequently weighted in the analysis to account statistically for all the records, including those that were not selected. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates from the review of the generalizable sample of FRNP records have margins of error at the 95 percent confidence level of plus or minus 5 percentage points or less, unless otherwise noted. For our review of the submission forms associated with FRNP records, we reviewed 195 forms entered into FRNP from November 2004 through July 2015 that were sampled from the “new” stratum. Prior to November 2004, the submission forms did not include selection options for criteria for entry into FRNP. We therefore only reviewed the more recent forms in order to assess the presence of criteria on these forms. Our review of these forms is generalizable to submission forms entered into FRNP from November 2004 through July 2015. 
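The stratified estimate and 95 percent confidence interval described above can be sketched as follows. The per-stratum population sizes, sample sizes, and sample results in this sketch are hypothetical; only the overall totals (434 sampled records from a population of 41,625) come from this appendix.

```python
import math

# Hypothetical strata modeled on the appendix's description: the population
# is split into strata (e.g., new/old records), and each stratum is sampled.
# (stratum label, population size N_h, sample size n_h, sample "yes" count x_h)
strata = [
    ("new", 30000, 300, 120),   # hypothetical split
    ("old", 11625, 134, 40),    # hypothetical split
]

N = sum(s[1] for s in strata)   # total population of records
assert N == 41625               # matches the reported FRNP population

# Stratified (weighted) estimate of a population proportion: each stratum's
# sample proportion is weighted by that stratum's share of the population.
p_hat = sum((N_h / N) * (x_h / n_h) for _, N_h, n_h, x_h in strata)

# Variance of the stratified estimator, with a finite population correction
# per stratum, since sampling is without replacement.
var = sum(
    (N_h / N) ** 2
    * (1 - n_h / N_h)
    * (x_h / n_h) * (1 - x_h / n_h) / (n_h - 1)
    for _, N_h, n_h, x_h in strata
)

# A 95 percent confidence interval is the estimate +/- 1.96 standard errors.
margin = 1.96 * math.sqrt(var)
print(f"estimate: {p_hat:.3f}, margin of error: +/-{margin:.3f}")
```

With these hypothetical inputs the margin of error comes out under 5 percentage points, consistent with the precision the appendix reports for the FRNP record review.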
All percentage estimates from the review of submission forms have margins of error at the 95 percent confidence level of plus or minus 3 percentage points or less, unless otherwise noted. We assessed the reliability of the FRNP data by conducting electronic tests of the data for obvious errors and anomalies, interviewing staff responsible for managing the data, and reviewing relevant documentation, and concluded that these data were sufficiently reliable for the purposes of our report. For MS, we observed the process of querying to identify particular records. We determined the selected data systems' compliance with the appropriations act restriction, and compared them to multiple ATF policies on collection and maintenance of information, and criteria in Standards for Internal Control in the Federal Government related to control activities for communication and for the access to and design of information systems. We conducted this performance audit from January 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Data sources: FFLs send reports to NTC on a specified form (ATF Form 3310.4)

Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)

Who can view the information: About 396 ATF Firearms Tracing System (FTS) users, primarily NTC personnel, and the 3,050 ATF users, which includes ATF agents. ATF eTrace users outside of NTC are generally to be limited to viewing firearms and requesting agent information.
Exports information to: eTrace; FIRES; FTS (Data related to MS are contained in FTS.)

Data sources: Out-of-business FFLs send firearms transaction records to NTC, specifically acquisition and disposition logbooks and a specified form (ATF Form 4473)

Contents related to firearms purchaser information: Retail purchaser information of prohibited individuals who attempted to purchase a firearm (e.g., name); firearms information (e.g., serial number, model)

Imports information from: Federal Licensing System (FLS)

National Tracing Center (NTC)

Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); federal firearms licensee (FFL) information (e.g., FFL name, FFL number)

ATF employees; federal, state, local, and foreign law enforcement agencies. Non-ATF users have access to information on their own trace requests and those from agencies with which they have a memorandum of understanding.

Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, address); FFL information (e.g., FFL name, FFL number)

Firearms information (e.g., serial number, model); retail purchaser, possessor, and associates information (e.g., first and last name); FFL information (e.g., city and state)

Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)

Firearms information (e.g., serial number, model), retail purchaser information (e.g., name); FFL information (e.g., FFL name, FFL number)

eTrace; FIRES; FTS (Data related to Interstate Theft are contained in FTS.)

Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number). Original and subsequent purchasers are maintained as part of the system.
FLS; National Firearms Act Special Occupational Tax System (NSOT)

Contents related to firearms purchaser information: Firearms information (e.g., serial number, model). Firearms possessor information—limited to first, middle, and last name—but that information is not searchable.

Firearms information (e.g., serial number, model); personal information for individuals including possessors, legal owners, or individuals who recovered the firearm (e.g., first and last name)

Collects information related to an individual currently under active criminal investigation who is suspected of illegally using or trafficking firearms. Suspect information (e.g., name, identification numbers such as driver's license number)

Contents related to firearms purchaser information: Firearms information (e.g., serial number, model), retail purchaser information (e.g., name, date of birth); FFL information (e.g., FFL name, FFL number)

Who can view the information: ATF employees; federal, state, local, and foreign law enforcement agencies. Federal, state, local, and foreign law enforcement agencies only have access to information on their own trace requests and those from agencies with which they have a memorandum of understanding.

Exports information to: Electronic Trace Operation Workflow Reporting System; eTrace; FIRES; FTS (Data related to Trace are contained in FTS.)

Under the Brady Handgun Violence Prevention Act, Pub. L. No. 103-159, 107 Stat. 1536 (1993), and implementing regulations, the Federal Bureau of Investigation, within DOJ, and designated state and local criminal justice agencies use NICS to conduct background checks on individuals seeking to purchase firearms from FFLs or obtain permits to possess, acquire, or carry firearms. NICS was established in 1998. FTS does not contain original records; rather, it imports data from its subsystems in order to conduct analysis. NFRTR contains firearms purchaser information pursuant to Title 26 of the IRS code, 26 U.S.C.
Chapter 53, regarding the registration and transfers of registration taxes. Specifically, it states that there should be a central registry, called the National Firearms Registration and Transfer Record, of all firearms as defined in the code, including machine guns, destructive devices such as bazookas and mortars, and "other" "gadget-type" weapons such as firearms made to resemble pens.

Appendix III: Out-of-Business Records Imaging System (OBRIS)

Since 1968, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has received several hundred million out-of-business records. According to ATF officials, as of May 5, 2016, there were about 8,060 boxes of paper records at the National Tracing Center (NTC) awaiting scanning into digital images before they are to be destroyed. At NTC, we observed these boxes lining the walls and stacked along cubicles and file cabinets, as shown in figure 4. The officials stated that, according to the General Services Administration, the facility floor will collapse if the number of boxes in the building increases to 10,000. Therefore, when the number of boxes approaches this quantity, NTC staff move the boxes to large shipping containers outside. Currently, there are three containers of boxes on the property, which contain records awaiting destruction. Prior to digital imaging, records were housed on microfilm or in storage boxes, and the system was referred to simply as the Microfilm Retrieval System. According to NTC officials, ATF is transitioning to digital imaging because of the benefits of improved image resolution, speed in accessing images, simultaneous accessibility of images to complete urgent traces, and less voluminous storage. The digitized records also helped mitigate the challenges of deteriorating microfilm images and maintaining the obsolete technology of microfilm.
According to officials, NTC has completed the process of converting the microfilm records to digital images, and officials expect that the images will become fully available to NTC analysts for tracing during fiscal year 2016. Currently, access is limited to a single workstation within NTC. While ATF finalizes this effort, staff continue to access the records in the NTC microfilm archive in order to respond to trace requests, as shown in figure 5. Before fiscal year 1991, ATF stored the out-of-business records in boxes with an NTC file number assigned to each federal firearms licensee (FFL). If, during a trace, ATF determined that the FFL who sold the firearm was out of business and had sent in its records, ATF employees were to locate the boxes containing the records and manually search them for the appropriate serial number. According to ATF, this was a time-consuming and labor-intensive process, which also created storage problems. In 1991, ATF began a major project to microfilm the out-of-business records and destroy the originals. Instead of in boxes, the out-of-business records were stored on microfilm cartridges, with the FFL numbers assigned to them. Although this system occupied much less space than the hard copies of the records, ATF officials said it was still time-consuming to conduct firearms traces because employees had to examine up to 3,000 images on each microfilm cartridge to locate a record. The officials stated that scanning records and creating digital images in OBRIS has sped up the ability to search for out-of-business records during a trace. According to the officials, it takes roughly 20 minutes to complete a trace with digital images and roughly 45 minutes using microfilm.
A provision in the fiscal year 2012 appropriation for the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) prohibits the use of the appropriation to consolidate or centralize records on the acquisition and disposition of firearms maintained by federal firearms licensees (FFL). This statutory restriction originated in the agency’s appropriation for fiscal year 1979 and, with some modification, was made permanent in fiscal year 2012. We reviewed whether ATF’s collection and maintenance of acquisition and disposition records in four data systems—Out-of-Business Records Imaging System (OBRIS), Access 2000 (A2K), Firearm Recovery Notification Program (FRNP), and Multiple Sales (MS)— violated this restriction. As discussed below, we considered the critical characteristics of each data system and related ATF activities in light of the restriction and in the context of ATF’s statutory authorities. We conclude that ATF violated the restriction when it collected and maintained the disposition records of FFL participants in A2K on a single server within the National Tracing Center (NTC) after those FFLs had discontinued their operations. We also agree with ATF’s 2009 determination that the agency violated the restriction when it collected and maintained records of certain FFLs engaged primarily in the sale of used firearms as part of FRNP. ATF’s failure to comply with the restriction on consolidation or centralization also violated the Antideficiency Act. Under section 1351 of title 31, United States Code, the agency is required to report these violations to the President and Congress. ATF, a criminal and regulatory enforcement agency within the Department of Justice (DOJ), is responsible for the regulation of the firearms industry and enforcement of federal statutes regarding firearms, including criminal statutes related to the illegal possession, use, transfer, or trafficking of firearms. 
One component of ATF’s criminal enforcement mission involves the tracing of firearms used in crimes to identify the first retail purchaser of a firearm from an FFL. To conduct a trace, the requesting law enforcement agency must identify the manufacturer or importer of the firearm and its type, caliber, and serial number, as well as other information related to the recovery, crime, and possessor. According to ATF, NTC personnel must typically use the information provided by the law enforcement agency to contact the manufacturer or importer to determine when and to whom the firearm in question was sold. The manufacturer or importer may have sold the firearm to an FFL wholesaler. In that case, NTC personnel would contact the FFL wholesaler to determine when and to whom the firearm in question was sold, usually to an FFL retailer. The tracing process continues until NTC identifies the first retail purchaser who is a nonlicensee. The Gun Control Act of 1968, as amended, established a system requiring FFLs to record firearms transactions, maintain that information at their business premises, and make such records available to ATF for inspection and search under certain prescribed circumstances. This system was intended to permit law enforcement officials to trace firearms involved in crimes as described above while allowing the records themselves to be maintained by the FFLs rather than by a governmental entity. As originally enacted, the Gun Control Act required FFLs to submit such reports and information as the Secretary of the Treasury prescribed by regulation and authorized the Secretary to prescribe such rules and regulations as deemed reasonably necessary to carry out the provisions of the act. In 1978, citing the general authorities contained in the Gun Control Act, ATF proposed regulations that would have required FFLs to report most of their firearms transactions to ATF through quarterly reports. 
Under the proposed regulations, these FFL reports of sales and other dispositions would not have identified a nonlicensed transferee, such as a retail purchaser, by name and address. However, the proposed regulations prompted concerns from those who believed that the reporting requirements would lead to the establishment of a system of firearms registration. Congress included in ATF's fiscal year 1979 appropriation for salaries and expenses a provision prohibiting the use of funds for administrative expenses for the consolidation or centralization of certain FFL records, or the final issuance of the 1978 proposed regulations. The provision continues to apply, with some modifications as described below. The provision stated:

"[t]hat no funds appropriated herein shall be available for administrative expenses in connection with consolidating or centralizing within the Department of the Treasury the records of receipt and disposition of firearms maintained by Federal firearms licensees or for issuing or carrying out any provisions of the proposed rules of the Department of the Treasury, Bureau of Alcohol, Tobacco and Firearms, on Firearms Regulations, as published in the Federal Register, volume 43, number 55, of March 21, 1978."

The committee report accompanying the provision explained:

The Bureau of Alcohol, Tobacco, and Firearms (BATF) has proposed implementation of several new regulations regarding firearms. The proposed regulations, as published in the Federal Register of March 21, 1978, would require: (1) A unique serial number on each gun manufactured or imported into the United States. (2) Reporting of all thefts and losses of guns by manufacturers, wholesalers and dealers. (3) Reporting of all commercial transactions involving guns between manufacturers, wholesalers and dealers. The Bureau would establish a centralized computer data bank to store the above information.
It is important to note that the proposed regulations would create a central Federal computer record of commercial transactions involving all firearms—whether shotguns, rifles, or handguns. There are approximately 168,000 federally licensed firearms dealers, manufacturers, and importers. It is estimated that the proposed regulations would require submission of 700,000 reports annually involving 25 million to 45 million transactions. It is the view of the Committee that the proposed regulations go beyond the intent of Congress when it passed the Gun Control Act of 1968. It would appear that BATF and the Department of Treasury are attempting to exceed their statutory authority and accomplish by regulation that which Congress has declined to legislate.

The reference to the 1978 proposed rules was removed from the annual provision as of the fiscal year 1994 appropriations act, but the prohibition against using funds for administrative expenses for consolidating or centralizing records was included in each of ATF's annual appropriations through fiscal year 2012 in much the same form. In fiscal year 1994, the Treasury, Postal Service, and General Government Appropriations Act, 1994, expanded the prohibition to include the consolidation or centralization of portions of records and to apply to the use of funds for salaries as well as administrative expenses, stating "[t]hat no funds appropriated herein shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of the Treasury, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees" (emphasis added).
"[t]hat no funds appropriated herein or hereafter shall be available for salaries or administrative expenses in connection with consolidating or centralizing, within the Department of Justice, the records, or any portion thereof, of acquisition and disposition of firearms maintained by Federal firearms licensees" (emphasis added). The conference report accompanying the act explained that the provision had been made permanent. We previously considered ATF's compliance with the restriction on consolidation or centralization in 1996 in connection with the agency's Microfilm Retrieval System and Multiple Sales System. We stated that the restriction did not preclude all information practices and data systems that involved an element of consolidation or centralization, but that it had to be interpreted in light of its purpose and in the context of other statutory provisions governing ATF's acquisition and use of information on firearms. In this respect, our analyses reflected the well-established principle that statutory provisions should be construed harmoniously so as to give them maximum effect whenever possible, avoiding the conclusion that one statute implicitly repealed another in the absence of clear evidence to the contrary. We found that the two systems complied with the statutory restriction on the grounds that ATF's consolidation of records was incident to carrying out specific responsibilities set forth in the Gun Control Act of 1968, as amended, and that the systems did not aggregate data on firearms transactions in a manner that went beyond these purposes. Thus, our analysis did not turn on the presence or absence of retail purchaser information in the system, but rather on the extent to which the aggregation of data corresponded to a statutory purpose.
We employ a similar analytical approach, which ATF has also adopted, in assessing the four systems under review here, taking into account ATF's statutory authorities and the critical characteristics of each system. Two of the four data systems we reviewed—OBRIS and MS—do not consolidate or centralize firearms records in violation of the restriction contained in the fiscal year 2012 appropriations act. In contrast, ATF violated the restriction when it collected and maintained disposition records of FFL participants in A2K on a single server at NTC after they had discontinued their operations. ATF also violated the restriction when it collected and maintained records of certain FFLs engaged primarily in the sale of used firearms as part of FRNP. OBRIS is ATF's repository for records submitted by FFLs that have permanently discontinued their operations, as required by the Gun Control Act of 1968, as amended. Section 923(g)(1)(A) of title 18, United States Code, requires each FFL to maintain such records of importation, production, shipment, receipt, sale, or other disposition of firearms at its place of business as prescribed by the Attorney General. Under 18 U.S.C. § 923(g)(4), when a firearms business is discontinued and there is no successor, the records required to be maintained by FFLs must be delivered within 30 days to ATF. ATF's system for maintaining the records of out-of-business FFLs for its statutory tracing function has evolved over time in response to logistical challenges and technological advances. Prior to fiscal year 1991, ATF maintained out-of-business FFLs' records in hard copy, with a file number assigned to each FFL. During a trace, if ATF determined that a firearm had been transferred or disposed of by an out-of-business FFL, ATF employees manually searched the FFL's records until they found the records corresponding to the serial number of the firearm being traced.
According to ATF, this was a time-consuming and labor-intensive process, and the volume of records created storage problems. In 1991, ATF began a major project to microfilm these records and destroy the originals. For fiscal year 1992, Congress appropriated $650,000 “solely for improvement of information retrieval systems at the National Firearms Tracing Center.” In fiscal year 1992, ATF began creating a computerized index of the microfilmed records containing the information necessary to identify whether ATF had a record relating to a firearm being traced. The index contained the following information: (1) the cartridge number of the microfilm; (2) an index number; (3) the serial number of the firearm; (4) the FFL number; and (5) the type of document on microfilm, i.e., a Firearms Transaction Record form or acquisition and disposition logbook pages. This information was stored on a database in ATF’s mainframe computer to allow searches. Other information, however, including a firearms purchaser’s name or other identifying information and the manufacturer, type, and model remained stored on microfilm cartridges and was not computerized. Therefore, this information was not accessible to ATF personnel through a text search. In our 1996 report, we concluded that the Microfilm Retrieval System did not violate the restriction on consolidation or centralization due to its statutory underpinnings and design. ATF had initially required out-of-business FFLs to deliver their records to ATF through a 1968 regulation. We found no indication in its legislative history that the appropriations act restriction was intended to overturn this regulation and noted that, historically, out-of-business records had been maintained at a central location. 
We also explained that the Firearms Owners’ Protection Act of 1986 (FOPA) had codified the ATF regulation, affirming the agency’s authority to collect this information, and that a subsequent appropriations act had provided funding specifically for ATF’s microfilming effort. Finally, ATF’s system of microfilmed records did not capture and store certain key information, such as firearms purchaser information, in an automated file. In this regard, we found that the system did not aggregate information in a manner beyond that necessary to implement the Gun Control Act of 1968, as amended by FOPA. A later conference report supported converting these records to digital images, stating: “Conversion of Records.—The conferees recognize the need for the ATF to begin converting tens of thousands of existing records of out-of-business Federal firearms dealers from film to digital images at the National Tracing Center. Once the out-of-business records are fully converted, the search time for these records will be reduced to an average of 5 minutes per search from the current average of 45 minutes per search. This significant time saving will ultimately reduce overall costs and increase efficiency at the National Tracing Center. Therefore, the conference agreement includes a $4,200,000 increase for the ATF to hire additional contract personnel to begin this conversion.” Similarly, the conference report accompanying the fiscal year 2006 appropriations act reflected the conferees’ support for ATF’s transition of out-of-business records to OBRIS. Since 2006, NTC has converted records submitted by FFLs discontinuing their operations to digital images in OBRIS. Specifically, NTC sorts and scans records provided by out-of-business FFLs, converting and storing them in an image repository on an electronic server. Images stored in OBRIS are generally indexed by FFL number. 
The records themselves are stored as images without optical character recognition so that they cannot be searched or retrieved using text queries, but must be searched through the index, generally by FFL number. After narrowing down the possible records through an index search, an NTC analyst must manually scroll through digital images to identify the record of the particular firearm in question. The technological changes represented by OBRIS do not compel a different conclusion regarding ATF’s compliance with the restriction on consolidation or centralization from the one we reached in 1996 with respect to the predecessor system. The statutory basis for OBRIS is the same as for the Microfilm Retrieval System and OBRIS makes records accessible to the same extent as that system, functioning in essentially the same manner though with enhanced technology. As with the prior microfilm system, users identify potentially relevant individual records through manual review after searching an index using an FFL number, or firearms information if available. In this regard, OBRIS, like its predecessor, does not aggregate records in a manner beyond that required to implement the Gun Control Act of 1968, as amended by FOPA. We assessed A2K with regard to in-business records and out-of-business records. We conclude that A2K for in-business records complies with the restriction on consolidation or centralization, while A2K for out-of-business records violated the restriction. The Gun Control Act of 1968, as amended, requires FFLs to provide firearms disposition information to ATF in response to a trace request. Specifically, section 923(g)(7) of title 18, United States Code, requires FFLs to respond within 24 hours to a request for records to determine the disposition of firearms in the course of a criminal investigation. Prior to the implementation of A2K, FFLs could only respond to such requests manually. 
A2K provides manufacturer, importer, and wholesaler FFLs with an automated alternative to facilitate their statutorily required response to ATF requests. A conference report accompanying ATF’s appropriations described the program: “The conferees are aware that the Access 2000 program was initiated by ATF to improve the efficiency and reduce the costs associated with firearms tracing incurred by Federal Firearms Licensees (FFLs). ATF and FFL importers, manufacturers, and wholesalers form a partnership in this effort. FFLs take their data from their mainframe computer and import it into a stand-alone server provided by the ATF. The National Tracing Center is connected to this server remotely by secure dial-up and obtains information on a firearm that is subject to a firearms trace. The conferees support this program, which reduces the administrative burdens of the FFL and allows the ATF around the clock access to the records. The ATF currently has 36 Access 2000 partners. The conferees encourage the ATF to place more emphasis on this program and expand the number of partners to the greatest extent possible.” According to ATF, as of April 25, 2016, there are 35 industry members representing 66 individual manufacturer, importer, and wholesaler FFLs currently participating in A2K. ATF believes that A2K “… has appropriately balanced Congressional concerns related to the consolidation of firearm records with the necessity of being able to access firearm information in support of its underlying mission to enforce the Gun Control Act,” as amended. We agree. Given the statutory underpinning and features of the system for in-business FFLs, we conclude that ATF’s use of A2K for in-business records does not violate the restriction on the consolidation or centralization of firearms records. ATF’s use of A2K for in-business records is rooted in the specific statutory requirement that FFLs respond promptly to ATF trace requests in connection with criminal investigations. 
In addition, although the system allows FFLs to respond to ATF’s trace requests virtually, ATF obtains the same information as it would otherwise obtain by phone, fax, or e-mail and in similar disaggregated form, that is, through multiple servers located at individual FFLs. Moreover, industry members retain possession and control of their disposition records and, according to ATF officials, may withdraw from using A2K—and remove their records from the ATF-accessible servers—at any time. For these reasons, we do not view A2K for in-business records as constituting the type of data aggregation prohibited by the appropriations act restriction on the consolidation or centralization of records within DOJ. During the course of our review, we found that when participating industry members permanently discontinued their operations, the disposition data maintained in connection with A2K was transferred to ATF, and ATF used the data when conducting firearms traces. Specifically, when an A2K participant went out of business, an ATF contractor remotely transferred the data on the server to a backup disk and the industry member shipped the backup disk with intact disposition records, as well as the blank server, to ATF’s NTC. ATF officials placed the data from the backup disk on a single partitioned server at NTC and accessed the data for firearms traces using the same type of interface and URL as used while the industry member was in business. As a result, in response to an industry member–specific query using an exact firearm serial number, the A2K out-of-business server would automatically generate the disposition information related to that firearm serial number. According to ATF, records of eight industry members were placed on the server at NTC from as early as late 2000 through mid-2012. 
While ATF estimated that there were approximately 20 million records associated with these industry members on the server, the agency did not have a means of ascertaining the actual number of records. The number of records on the ATF server would have been expected to grow as additional A2K participants discontinued their operations and provided their backup disks to ATF. However, during the course of our review, ATF officials told us that the agency planned to move all of the A2K records into OBRIS and that, once converted to OBRIS images, the records would be searchable like other OBRIS records. In January 2016, ATF officials reported that NTC was in the process of transferring all of the records from the A2K out-of-business records server to OBRIS and a quality-control process was under way to verify the accuracy of the transfer. They subsequently deleted all records from the server in March 2016. We conclude that ATF’s use of A2K with respect to out-of-business records violated the restriction on consolidation or centralization. In contrast to the discrete servers in the possession of the in-business industry members, ATF combined disposition records across industry members on the single, though partitioned, A2K server at NTC. In addition, the records were stored on the single A2K server in a manner that made them more easily searchable than other out-of-business records. Unlike OBRIS, which requires the manual review of potentially relevant records identified through an index, the A2K server within NTC generated records automatically in response to an industry member–specific text query, that is, an exact firearm serial number. In addition, according to NTC officials, they could have modified the structure of the NTC server to achieve further aggregation, by programming the system to allow text searches across a broader set of data fields. As a result, ATF could have searched for records by name or other personal identifier. 
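The retrieval distinction on which this analysis turns (index-assisted manual review in OBRIS versus automated, exact-serial retrieval on the A2K server) can be sketched roughly as follows. This is an illustrative simplification only; the data, field names, and identifiers are hypothetical and do not reflect ATF's actual schemas:

```python
# Illustrative sketch (hypothetical data): how readily a record can be
# retrieved differs between the two designs described in the report.

# OBRIS-style: an index maps an FFL number to image files. A query can only
# narrow the candidate set; an analyst must then review the images manually.
obris_index = {
    "1-23-45678": ["img_0001.tif", "img_0002.tif"],  # hypothetical FFL number
}

def obris_lookup(ffl_number):
    """Return candidate images; a human must scroll through them."""
    return obris_index.get(ffl_number, [])

# A2K out-of-business-server-style: disposition records keyed directly by
# (industry member, serial number), so an exact-serial query returns the
# disposition record automatically, with no manual review step.
a2k_server = {
    ("MEMBER-A", "SN12345"): {"disposed_to": "FFL 9-87-65432", "date": "2010-06-01"},
}

def a2k_query(member, serial):
    """Automated retrieval; returns None if no matching record exists."""
    return a2k_server.get((member, serial))
```

The sketch also suggests why the NTC officials' observation matters: a keyed store like `a2k_server` could be reprogrammed to match on additional fields, achieving further aggregation, whereas OBRIS images without optical character recognition cannot be text-searched at all.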
As explained earlier, our analysis of ATF’s aggregation of firearms records turns not on the presence or absence of retail purchaser information, but rather on the extent to which the aggregation of data corresponds to a statutory purpose. ATF’s maintenance of out-of-business industry members’ disposition records on a single server at NTC was not incident to the implementation of a specific statutory requirement. As discussed above, A2K was designed to allow in-business industry members to respond promptly to ATF trace requests as required by 18 U.S.C. § 923(g)(7) without having to dedicate personnel to this function. Section 923(g)(7), however, has no applicability to FFLs once they discontinue operations. A separate statutory provision, 18 U.S.C. § 923(g)(4), applies to FFLs that permanently discontinue their operations. ATF has long maintained a separate system—formerly the Microfilm Retrieval System and currently OBRIS—to hold the records submitted under that provision, and the disposition records that ATF maintained on the NTC server were among the types of records required to be submitted under section 923(g)(4) for which ATF had created that system. Therefore, we find no statutory underpinning for ATF’s maintenance of out-of-business A2K participants’ disposition records on the server at NTC. ATF explained its handling of out-of-business A2K records as follows: Our implementation of A2K included strict security protocols to limit ATF access to only that information to which it is statutorily required, e.g., the next step in the distribution of the traced firearm. That is, ATF would simply have access to the same information it could obtain by calling the participating FFL. However, that calculus is altered when an FFL ceases participation in A2K. At that point, that FFL’s records become just like any other FFL records and, as such, must be stored in the same manner. 
Otherwise, records which were formerly accessible on a discrete basis under A2K would be readily accessible in a database which would, in our opinion based on the 1996 GAO Report, violate the appropriation rider. Our decision, therefore, was to ensure that A2K records have the same character and are retrievable in the same manner as any other out-of-business records. In addition to removing all data from the A2K out-of-business records server, ATF officials reported that, going forward, the agency plans to convert records of A2K participants that go out of business directly into OBRIS images. However, they said, when such records are received from out-of-business FFLs, the time frame for converting the records into OBRIS images will depend on the backlog of electronic records awaiting conversion. Similarly, ATF officials told us that they had anticipated that A2K participants would submit acquisition and disposition records together, consistent with the format provided for in ATF’s regulations, for inclusion in OBRIS. They had not expected that A2K participants would satisfy any part of their statutory responsibility by providing their backup disks to the agency. However, even if industry members’ submission of disposition data on the backup disks could be said to be in furtherance of the portion of the statutory requirement pertaining to disposition records, given the existence and successful functioning of OBRIS, we conclude that ATF’s maintenance of those records on the NTC server went beyond the purposes of the Gun Control Act of 1968, as amended. We conclude that FRNP complies with the restriction on consolidation and centralization of firearms records when used as a tool for ATF agents in connection with an ATF criminal investigation. 
However, ATF’s use of FRNP to maintain information on firearms identified during regulatory inspections of FFLs under the Southwest Border Secondary Market Weapons of Choice Program (SWBWOC), as discussed below, was a violation of the restriction. Under section 599A of title 28, United States Code, ATF is responsible for investigating criminal and regulatory violations of federal firearms laws, and for carrying out any other function related to the investigation of violent crime or domestic terrorism that is delegated to it by the Attorney General. Among other things, ATF is responsible for enforcing federal statutes regarding firearms, including those regarding illegal possession, use, transfer, or trafficking. FRNP, formerly known as the Suspect Gun Program, was established in 1991 within the Firearms Tracing System to provide an investigative service to ATF agents conducting criminal investigations. Through this program, ATF records information—manufacturer, serial number, and type—about firearms that have not yet been recovered by other law enforcement authorities, but are suspected of being involved in criminal activity and are associated with an ATF criminal investigation. When such firearms are recovered, ATF uses the information available through the program to notify the investigating ATF official and to coordinate the release of trace results to other law enforcement authorities with the ongoing ATF investigation. To enter firearms information into the system, ATF agents investigating potential criminal activity involving firearms must identify the firearms at issue, the number of an open ATF criminal investigation, and at least one of five specified criteria for using the system. The five criteria correspond to bases for ATF investigation. ATF agents also indicate on the submission form whether NTC should release trace results to requesters of a trace for the firearms listed on the form. 
Where criminal investigations are ongoing and FRNP records are designated as “active,” NTC will notify the investigating ATF agent when the firearm described on the form is recovered. In addition, where the ATF agent has indicated that NTC should release trace information, NTC will notify the ATF agent and the requesting law enforcement agency of trace results. Where the ATF agent has indicated that NTC should not release trace information, the ATF agent is notified of the trace results and determines when that information may be released to the requesting law enforcement agency. For criminal investigations that have been closed, the FRNP record associated with the investigation is labeled “inactive,” although the records may provide investigative leads, according to ATF officials. In such cases, the ATF agent associated with the investigation is not notified of the recovery of the identified firearms or related trace requests, and the release of trace results to requesting law enforcement agencies proceeds without any delay. ATF is authorized by statute to investigate violations of federal firearms laws. As described above, FRNP is designed for the limited purpose of facilitating ATF’s conduct of specific criminal investigations under its jurisdiction. The inclusion of data in FRNP requires an open ATF investigation of an identified criminal matter, which helps to ensure that the data are maintained only as needed to support this investigative purpose. Further, ATF requires its agents to identify with specificity the firearms relevant to the investigation. As we observed in 1996, the restriction on consolidation or centralization does not preclude all data systems that involve an element of consolidation. 
Where ATF adheres to the limitations incorporated in the design of FRNP, the maintenance of information through FRNP is incident to ATF’s exercise of its statutory authority to conduct criminal investigations and does not involve the aggregation of data in a manner that goes beyond that purpose. In this respect, we conclude that it does not represent a consolidation or centralization of records in violation of the statutory restriction. In response to our inquiries about FRNP data, ATF officials told us that in 2009, the ATF Chief Counsel had concluded that the agency had violated the appropriations restriction in connection with the system. Specifically, ATF officials told us that the agency had maintained records on the inventories of certain FFLs in violation of the restriction, from 2007 through 2009 under ATF’s Southwest Border Secondary Market Weapons of Choice (SWBWOC) Program. We agree with the ATF Chief Counsel’s conclusion that its collection and maintenance of information in connection with this program violated the restriction on the consolidation or centralization of firearms records. In October 2005, the governments of the United States and Mexico instituted a cooperative effort to address surging drug cartel–driven violence in Mexico and along the southwest border of the United States. ATF’s main role in this initiative was to develop strategies and programs to stem the illegal trafficking of firearms from the United States to Mexico. ATF determined that used gun sales—referred to in the industry as “secondary market” sales—played a significant role in firearms trafficking to Mexico, particularly for the types of firearms most sought by the Mexican drug cartels, known as “weapons of choice.” Accordingly, in June 2007, the agency developed a protocol to be used during its annual inspections of FFLs in the region engaged primarily in the sale of used firearms. 
This protocol, known as the SWBWOC Program, was intended to enhance ATF’s ability to track secondary market sales. It called for ATF investigators to record the serial number and description of all used weapons of choice in each FFL’s inventory and those sold or otherwise disposed of during the period covered by the inspection. Under the protocol, the investigators forwarded the information to the relevant ATF field division, which opened a single investigative file for all submissions from the area under its jurisdiction and determined whether any of the weapons had been traced since their last retail sale. After review, the field division forwarded the information to FRNP. According to ATF, the Dallas, Houston, and Los Angeles Field Divisions began to submit records from the SWBWOC Program to FRNP in July 2007, and the Phoenix Field Division began to do so in October 2007. The SWBWOC Program was cancelled on October 2, 2009, following a review by ATF’s Office of Chief Counsel of the process by which the secondary market weapons of choice information had been recorded and submitted to FRNP. The Office of Chief Counsel determined that the SWBWOC Program was not consistent with the consolidation or centralization restriction. It advised that information obtained from an FFL about a firearm in and of itself and unaccompanied by purchaser information could not be collected and consolidated absent a specific basis in statute or regulation, or a direct nexus to discrete law enforcement purposes such as a specific criminal investigation. The Office of Chief Counsel found that the collection of information from FFLs under the SWBWOC Program lacked these essential, individualized characteristics. We agree with ATF’s conclusion that the collection and maintenance of firearms information from the SWBWOC Program in FRNP exceeded the scope permitted by the appropriations act restriction. 
As discussed above, our analysis of ATF’s aggregation of firearms data turns not on the presence or absence of retail purchaser information, but rather on the extent to which the aggregation of data corresponds to a statutory purpose. Here, ATF collected and maintained acquisition and disposition data without a statutory foundation, based on nothing more than the characteristics of the firearms. The collection and maintenance of information about a category of firearms, “weapons of choice,” from a category of FFLs, primarily pawnbrokers, did not pertain to a specific criminal investigation within the scope of ATF’s statutory investigative authority. Nor did it fall within the scope of ATF’s authority to conduct regulatory inspections. For this reason, we conclude that the program involved the type of aggregation of information contemplated by Congress when it passed the restriction on the consolidation or centralization of firearms records. ATF deleted the related data from FRNP in March 2016. The Gun Control Act of 1968, as amended, requires FFLs to report transactions involving the sales of multiple firearms. Specifically, under 18 U.S.C. § 923(g)(3)(A), an FFL is required to report sales or other dispositions of two or more pistols or revolvers to a non-FFL at one time or during 5 consecutive business days. Under these circumstances, the FFL is required to report information about the firearms, such as type, serial number, manufacturer, and model, and the person acquiring the firearms, such as name, address, ethnicity, race, identification number, and type of identification to ATF. ATF enters data from these reports into the MS portion of its Firearms Tracing System so that it can monitor and deter illegal interstate commerce in pistols and revolvers. 
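The reporting trigger in section 923(g)(3)(A), two or more pistols or revolvers to the same non-licensee at one time or during 5 consecutive business days, can be sketched as a simple windowed check. This is an illustrative simplification only, not ATF's actual reporting logic: it treats weekdays as business days and assumes all sale dates belong to the same purchaser.

```python
from datetime import date, timedelta

def is_business_day(d):
    # Simplification: weekdays only. A licensee's actual business days
    # would depend on its days of operation and holidays.
    return d.weekday() < 5

def multiple_sale_report_due(sale_dates):
    """Hypothetical check: True if two or more pistol/revolver sales to the
    same non-licensee fall within any window of 5 consecutive business days.
    Duplicate dates represent multiple firearms sold at one time."""
    for start in sorted(set(sale_dates)):
        # Extend the window until it spans 5 consecutive business days.
        window_end, counted = start, 1
        while counted < 5:
            window_end += timedelta(days=1)
            if is_business_day(window_end):
                counted += 1
        if sum(1 for d in sale_dates if start <= d <= window_end) >= 2:
            return True
    return False
```

For example, under these assumptions two sales on a Monday and the following Wednesday would trigger a report, while two sales three weeks apart would not.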
Our 1996 report examined the Multiple Sales System and found that it did not violate the prohibition on the consolidation or centralization of firearms records because the collection and maintenance of records was incident to a specific statutory responsibility. In connection with our current review, we observed the functioning of the present system for reports of multiple sales. We found no changes since 1996 that would suggest a different conclusion with respect to ATF’s compliance with the appropriations act restriction. As we reported in 1996, a regulatory requirement for FFLs to prepare and provide multiple sales reports to ATF existed before the prohibition on consolidation or centralization of firearms records was enacted in fiscal year 1979 and there was no indication in the legislative history that the prohibition was intended to overturn ATF’s existing practices with respect to multiple sales. In addition, we explained that the Firearms Owners’ Protection Act had codified the ATF regulation, affirming the agency’s authority to collect this information. FOPA’s requirement that FFLs send the reports “to the office specified” on an ATF form suggested that ATF could specify that the information be sent to a central location. Our review of FOPA’s legislative history confirmed our interpretation of the statute. When considering the passage of FOPA, Congress clearly considered placing constraints on ATF’s maintenance of multiple sales reports, but declined to do so. Specifically, the Senate-passed version of FOPA prohibited the Secretary of the Treasury from maintaining multiple sales reports at a centralized location and from entering them into a computer for storage or retrieval. This provision was not included in the version of the bill that was ultimately passed. In light of the above, we reach the same conclusion as we did in 1996 and find that ATF’s use of MS complies with the restriction on the consolidation or centralization of firearms records. 
In addition, ATF has collected and maintained information on the multiple sales of firearms under a separate authority, 18 U.S.C. § 923(g)(5)(A). Section 923(g)(5)(A) authorizes the Attorney General to require FFLs to submit information that they are required to maintain under the Gun Control Act of 1968, as amended. This provision was also included in FOPA. Relying on this authority, ATF issues “demand letters” requiring FFLs to provide ATF with specific information. In 2011, ATF issued a demand letter requiring certain FFLs in Arizona, California, New Mexico, and Texas to submit reports of multiple sales or other dispositions of particular types of semiautomatic rifles to non-FFLs (referred to as “Demand Letter 3” reports). These reports are submitted to ATF and included in the MS portion of its Firearms Tracing System. According to ATF, the information was intended to assist in its efforts to investigate and combat the illegal movement of firearms along and across the southwest border. Several FFLs challenged the legality of ATF’s demand letter, asserting, among other things, that it would create a national firearms registry in violation of the fiscal year 2012 appropriations act restriction. In each of the cases, the court placed ATF’s initiative in its statutory context and held that the appropriations act did not prohibit ATF’s issuance of the demand letter. Similar to our 1996 analyses of the Out-of-Business Records and Multiple Sales Systems, the United States Court of Appeals for the Fifth Circuit examined the enactment of ATF’s authority to issue demand letters in relation to the appropriations act restriction. 
The court observed that ATF’s demand letter authority was enacted as part of FOPA and that because FOPA “clearly contemplate[s] ATF’s collection of some firearms records,” the appropriations provision did not prohibit “any collection of firearms transaction records.” In this regard, the court further noted that the plain meaning of “consolidating or centralizing” did not prohibit the collection of a limited amount of information. Other courts also emphasized that the ATF 2011 demand letter required FFLs to provide only a limited subset of the information that they were required to maintain, as opposed to the substantial amount of information that they believed would characterize a “consolidation or centralization.” For example, the Court of Appeals for the District of Columbia Circuit enumerated the limitations on ATF’s 2011 collection of information, noting that it applied to (1) FFLs in four states; (2) who are licensed dealers and pawnbrokers; (3) and who sell two or more rifles of a specific type; (4) to the same person; (5) in a 5-business-day period. The court found that because ATF sent the demand letter to a limited number of FFLs nationwide and required information on only a small number of transactions, “the . . . demand letter does not come close to creating a ‘national firearms registry.’” In light of the court decisions regarding ATF’s exercise of its statutory authority in this context, we conclude that the Demand Letter 3 initiative does not violate the restriction on the consolidation or centralization of firearms records. Two of the data systems under review, OBRIS and MS, comply with the provision in ATF’s fiscal year 2012 appropriation prohibiting the use of funds for the consolidation or centralization of firearms records. ATF collects and maintains firearms transaction information in each system incident to the implementation of specific statutory authority and it does not exceed those statutory purposes. 
ATF’s A2K system for in-business FFLs and its maintenance of certain firearms information pertinent to criminal investigations in FRNP are likewise consistent with the appropriations act restriction. However, ATF’s collection and maintenance of out-of-business A2K records on the server at NTC violated the restriction, as did its collection and maintenance of data from certain FFLs as part of the SWBWOC Program. In both cases, ATF’s aggregation of information was not supported by any statutory purpose. ATF’s failure to comply with the prohibition on the consolidation or centralization of firearms records also violated the Antideficiency Act. The Antideficiency Act prohibits making or authorizing an expenditure or obligation that exceeds available budget authority. As a result of the statutory prohibition, ATF had no appropriation available for the salaries or administrative expenses of consolidating or centralizing records, or portions of records, of the acquisition and disposition of firearms in connection with the SWBWOC Program or A2K for out-of-business records. The Antideficiency Act requires that the agency head “shall report immediately to the President and Congress all relevant facts and a statement of actions taken.” In addition, the agency must send a copy of the report to the Comptroller General on the same date it transmits the report to the President and Congress. In addition to the contact named above, Dawn Locke (Assistant Director) and Rebecca Kuhlmann Taylor (Analyst-in-Charge) managed this work. In addition, Willie Commons III, Susan Czachor, Michele Fejfar, Justin Fisher, Farrah Graham, Melissa Hargy, Jan Montgomery, and Michelle Serfass made significant contributions to the report. Also contributing to this report were Dominick M. Dale, Juan R. Gobel, Eric D. Hauswirth, Ramon J. Rodriguez, and Eric Winter. 
| ATF is responsible for enforcing certain criminal statutes related to firearms, and must balance its role in combatting the illegal use of firearms with protecting the privacy rights of law-abiding gun owners. As part of this balance, FFLs are required to maintain firearms transaction records, while ATF has the statutory authority to obtain these records under certain circumstances. ATF must also comply with an appropriations act provision that restricts the agency from using appropriated funds to consolidate or centralize FFL records. GAO was asked to review ATF's compliance with this restriction. This report (1) identifies the ATF data systems that contain retail firearms purchaser data and (2) determines whether selected ATF data systems comply with the appropriations act restriction and adhere to ATF policies. GAO reviewed ATF policy and program documents, observed use of data systems at NTC, reviewed a generalizable sample of one system's records, and interviewed ATF officials at headquarters and NTC. To carry out its criminal and regulatory enforcement responsibilities, the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) has 25 firearms-related data systems, 16 of which contain retail firearms purchaser information from a federal firearms licensee (FFL)—such as firearms importers and retailers. GAO selected 4 systems for review that are used in the firearms tracing process, based on factors such as the inclusion of retail purchaser information and original data. The Out-of-Business Records Imaging System (OBRIS) stores nonsearchable images of firearms records from out-of-business FFLs. Such FFLs are required by law to provide their records to ATF. Access 2000 (A2K) provides servers for National Tracing Center (NTC) personnel to electronically search participating FFLs' records at their premises for firearms disposition information during a trace. 
The Firearm Recovery Notification Program (FRNP) maintains information on firearms that have not yet been recovered by law enforcement, but are suspected of being involved in criminal activity and are associated with an ATF criminal investigation. Multiple Sales (MS) includes firearms information from multiple sales reports. FFLs are required by law to report to ATF sales of two or more revolvers or pistols during 5 consecutive business days. ATF policy requires that certain information in MS be deleted after 2 years if the firearm has not been connected to a trace. Of the 4 data systems, 2 fully comply and 2 did not always comply with the appropriations act restriction prohibiting consolidation or centralization of FFL records. ATF addressed these compliance issues during the course of GAO's review. ATF also does not consistently adhere to its policies. Specifically: OBRIS complies with the restriction and adheres to policy. A2K for in-business FFL records complies with the restriction. A2K for out-of-business FFL records did not comply with the restriction because ATF maintained these data on a single server at ATF; ATF therefore deleted the records in March 2016. In addition, ATF policy does not specify how, if at all, FFLs may use A2K records to meet out-of-business record submission requirements. Such guidance would help ensure that FFLs submit such records. FRNP generally complies with the restriction. However, a 2007 through 2009 program using FRNP did not comply. ATF canceled this program in 2009 and deleted the related data in March 2016. Also, a technical defect allows ATF agents to access FRNP data—including purchaser data—beyond what ATF policy permits. Aligning system capability with ATF policy would ensure that firearms purchaser data are provided only to those with a need to know. MS complies with the restriction, but ATF inconsistently adheres to its policy when deleting MS records.
Specifically, until May 2016, MS contained over 10,000 names that were not consistently deleted within the required 2 years. Aligning the MS deletion policy with the timing of deletions could help ATF maintain only useful MS purchaser data and safeguard privacy. GAO recommends that ATF provide guidance to FFLs participating in A2K on the provision of records to ATF when they go out of business; align system capability with ATF policy to limit access to FRNP firearms purchaser information for ATF agents; and align timing and ATF policy for deleting MS records. ATF concurred with GAO's recommendations.
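The 2-year MS deletion policy described above amounts to a simple retention rule: purge records older than 2 years that have not been connected to a trace. The sketch below illustrates that kind of rule; the field names, record layout, and exact 730-day cutoff are illustrative assumptions, not ATF's actual schema or system.

```python
# Illustrative retention check modeled on the 2-year MS deletion policy:
# purge records older than 2 years that have no associated trace.
# Field names and the 730-day cutoff are assumptions, not ATF's schema.
from datetime import date, timedelta

RETENTION = timedelta(days=730)  # roughly 2 years

def records_to_delete(records, today):
    """Return records past the retention window with no associated trace."""
    return [r for r in records
            if not r["traced"] and today - r["entered"] > RETENTION]

sample = [
    {"id": 1, "entered": date(2013, 1, 10), "traced": False},  # stale, untraced
    {"id": 2, "entered": date(2013, 1, 10), "traced": True},   # kept: traced
    {"id": 3, "entered": date(2015, 6, 1),  "traced": False},  # still in window
]
stale = records_to_delete(sample, today=date(2016, 5, 1))  # only record 1
```

A periodic job applying a check of this kind, kept aligned with the written policy, is one way an agency could make deletions consistent with a 2-year requirement.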
The U.S. commercial aviation industry, with fewer than one fatal accident per 5 million flights from 2002 through 2005, has an extraordinary safety record. However, when passenger airlines have accidents or serious incidents, regardless of their rarity, the consequences can be tragic. In addition, according to Bureau of Transportation Statistics data, flight arrival delays increased from 15 percent in 2003 to 22 percent in 2006. Increases in flight delays can be viewed as evidence of strain in the aviation system, as a loss of efficiency in the air system is a symptom of increased strain. Losses of efficiency and the corresponding strain on the system could potentially result in hazards that decrease safety. In order to maintain a high level of aviation safety, it is critical to have well-established, efficient, and effective systems in place to provide an early warning of hazards that can lead to accidents. FAA has established a number of systems and processes to inspect and oversee various aspects of passenger airline safety, such as aircraft maintenance and flight operations. In 1998, the agency implemented the Air Transportation Oversight System (ATOS), which currently oversees 35 commercial airlines and cargo carriers; the goal is for ATOS to oversee all commercial passenger and cargo airlines. ATOS emphasizes a system safety approach that extends beyond periodically checking airlines for compliance with regulations to using technical and managerial skills to identify, analyze, and control hazards and risks. For example, under ATOS, inspectors develop surveillance plans for each airline, based on data analysis and risk assessment, and adjust the plans periodically based on inspection results. Our review of ATOS’s early implementation found weaknesses, which FAA addressed by improving guidance to inspectors and increasing data usefulness. FAA’s inspection process for the 81 commercial airlines not covered by ATOS has two components.
The National Work Program Guidelines (NPG) is the original oversight program for these airlines. Under NPG, an FAA-wide committee of managers identifies an annual minimum set of required inspections to ensure that airlines comply with their operating certificates; this process is not risk-based. In 2002, FAA added another component, the Surveillance and Evaluation Program (SEP), to the inspection process to incorporate principles of ATOS into its oversight of commercial airlines. The two components are used together to establish the number and types of annual inspections for airlines. Inspections can encompass many different activities, such as visually spot-checking an airplane at a gate, monitoring procedures on a scheduled flight, or observing maintenance performed on an aircraft. Each year, FAA headquarters establishes a baseline number and type of inspections for each airline through NPG. Through SEP, teams of FAA inspectors analyze the results of an airline’s prior inspections at periodic meetings and, based on their assessment of specific risks, establish other inspections that may be needed. Since 1990, FAA has emphasized industry partnership programs that allow participants, such as airlines and pilots, to self-report violations of safety regulations and help identify safety deficiencies and potentially mitigate or avoid fines or other legal action. For example, the Voluntary Disclosure Program encourages the self-reporting of manufacturing problems and safety incidents by participants that can include air carriers and repair stations.
When violations of statutory and regulatory requirements are identified through inspections, partnership programs, or other methods, FAA has a variety of enforcement tools that it may use to respond to the violations, including administrative actions (such as issuing a warning notice or a letter of correction that includes the corrective actions the violator is to take) and legal sanctions (such as levying a fine or suspending or revoking a pilot’s certificate or other FAA-issued certificate). The achievement of FAA’s mission is dependent in large part on the skills and expertise of its workforce, whose aviation safety activities include air traffic control, maintenance of air traffic control equipment, and certification and inspection of various industry participants. As of 2006, 714 of FAA’s approximately 3,400 inspectors were dedicated to overseeing the 35 airlines in ATOS. Approximately 1,100 inspectors oversee other entities and individuals, including the remaining 81 commercial airlines that are included in the SEP inspection program, about 5,200 aircraft repair stations, and approximately 625,000 pilots. FAA’s safety oversight programs for other aspects of the aviation industry—including manufacturers of aircraft and aircraft parts, repair stations, flight schools, aviation maintenance technician schools, pilots, and mechanics—involve certification, surveillance, and inspection by FAA’s safety inspectors, engineers, flight surgeons, and designated representatives. FAA authorizes about 13,400 private individuals and 218 organizations (called “designees”) to act as its representatives to conduct many safety certification activities that FAA considers to be non-safety critical, such as administering flight tests to pilots, inspecting repair work by maintenance facilities, conducting medical examinations of pilots, and approving designs for aircraft parts.
These designees are grouped into 18 different programs and are overseen by three FAA offices—Flight Standards Service, Aerospace Medicine, and Aircraft Certification Service—all of which are under the Office of Aviation Safety. In addition, FAA’s Air Traffic Organization (ATO) includes a workforce of approximately 16,700 air traffic controllers and nearly 7,200 field maintenance technicians responsible for maintaining ATO’s equipment and facilities, which include 21 air traffic control centers, 518 airport control towers, and 76 flight service facilities. While overall commercial aviation safety trends have been generally positive over the last several years, recent safety trends may warrant scrutiny. On the positive side, the number of serious runway incursions has decreased since fiscal year 2002. Specifically, in fiscal year 2002, there were 37 serious runway incursions, compared with 29 in fiscal year 2005. Recent fiscal year 2006 data also continue the downward trend, with 25 serious runway incursions as of August 1, 2006—fewer than at the same time in the previous fiscal year. However, with four fatal accidents in fiscal year 2006, FAA will not meet its performance target for fiscal year 2006 for commercial air carrier safety. Although general aviation accidents have decreased from 1,715 in 2002 to 1,669 in 2005, general aviation safety continues to be a concern because it represents a significant number of fatal accidents every year. (See fig. 1.) For example, 321 of the 1,669 general aviation accidents in 2005 were fatal. Additionally, the poorer safety records of cargo and air ambulance services, compared with the commercial passenger airline accident rate, point to safety vulnerabilities in this area. According to FAA, from 1998 through 2005, the accident rate for scheduled air cargo operators declined significantly, but was still about 2.5 times higher than the accident rate for scheduled passenger operators.
Further, when accidents involving only an isolated injury to a single individual are excluded, the accident rate for cargo was about 6.3 times higher than for commercial passenger aviation. In addition, from January 2002 to January 2005, there were 55 emergency medical services or air ambulance accidents, with 54 fatalities, the highest number of accidents since the 1980s. In addition, FAA did not meet its performance target with regard to operational errors for fiscal years 2003 through 2005. While operational errors continued an upward trend in 2006, FAA was below the fiscal year 2006 target of 4.27 operational errors per million activities as of June 2006. FAA’s safety oversight system has programs that focus on identifying and mitigating risk through a system safety approach, leveraging resources, and enforcing safety regulations, but the programs lack fully developed evaluative processes. As mentioned previously, FAA oversees commercial airlines through one of two programs—ATOS, which includes 35 airlines, and SEP, which includes the remaining 81 airlines. Both programs emphasize a system safety approach of using risk analysis techniques, which allow for the efficient use of inspection staff and resources by prioritizing workload based on areas of highest risk and require that inspectors verify that corrective actions are taken. For example, FAA has developed risk assessment worksheets for both programs that guide inspectors through identifying and prioritizing risks associated with key airline areas, such as flight operations and personnel training. Information from the worksheets is then used to target resources to mitigating those risks. In recent work we found that the benefits of FAA’s system safety approach for the inspection of airlines covered under SEP could be enhanced if FAA more completely implemented the program and addressed other challenges. Most of FAA’s inspections of those airlines were not risk-based.
For example, as shown in figure 2, from fiscal years 2002 through 2004, SEP—a risk-based approach—guided only 23 percent of the inspection activities for the top 25 SEP airlines in terms of the number of enplanements. The remaining 77 percent of inspection activities were identified through NPG, a process that is not risk-based or system safety oriented. Although inspectors can replace NPG-identified activities with SEP-identified activities that they deem to address a greater safety risk, we found that FAA inspectors interpret agency emphasis on NPG as discouraging this practice. To address this issue, we recommended that FAA improve communication with and training of inspectors in areas of system safety and risk management. In response to our recommendations, FAA revised its guidelines to require inspectors and managers to ensure that risk information is used and updated its SEP training course to reflect that change. Since FAA’s focus on system safety represents a cultural shift in the way the agency oversees the aviation industry, it will be important for FAA to monitor the implementation of system safety and risk management principles. We recommended that FAA establish a continuous evaluative process for its activities under SEP, but the agency does not intend to set up a process since it expects to eliminate the SEP program after December 2007, which is its deadline for moving all commercial airlines to the ATOS program. If the deadline slips, we believe our recommendation remains valid. Furthermore, FAA’s plans to dissolve the SEP program after moving all commercial airlines to ATOS will shift inspectors’ workloads and present a challenge to FAA’s inspection oversight process. As FAA shifts airlines to ATOS, it will also move inspectors to the program. Unlike SEP inspectors, ATOS inspectors are dedicated to an airline and generally cannot be used to inspect other entities.
SEP inspectors, on the other hand, have other duties in addition to overseeing airlines—such as certifying and approving aircraft types; overseeing repair stations, designees, and aviation schools; and investigating accidents. For example, our analysis of FAA data indicated that, for fiscal years 2002 through 2004, about 75 percent of SEP inspectors had responsibility for more than 3 entities, and about half had responsibility for more than 15. As inspectors are transitioned to ATOS, the remaining SEP inspector workforce will have to add those other entities to their workload. Furthermore, ATOS requires more inspectors per airline than SEP. For example, when FAA recently transitioned four airlines to ATOS, the total size of the four inspection teams increased 30 percent, from 73 to 95 inspectors. With the expansion of the ATOS program, it will be important to monitor the magnitude of the shift in resources and the effect it may have on FAA’s overall capability to oversee the industry as well as any changes to the current ATOS program that may be required by the expansion. An important part of FAA’s safety oversight system is its designee programs, through which FAA authorizes about 13,400 private individuals and 218 organizations to act on its behalf to conduct safety certification activities that FAA considers to be non-safety critical. We reported that designees perform about 90 percent of certification-related activities, thus greatly leveraging the agency’s resources and enabling inspectors to concentrate on what FAA considers the most safety-critical activities. However, concerns about the consistency and adequacy of designee oversight by FAA have been raised by experts and other individuals we interviewed.
For example, designees and industry officials that we spoke with indicated that FAA’s level of oversight and interpretation of rules differ among regions and among offices within a region. This variation limits FAA’s assurance that designees’ work is performed uniformly in accordance with FAA’s standards and policy, whose primary goal is the safety of U.S. aviation. To improve management control of the designee programs, and thus increase assurance that designees meet FAA’s performance standards, we recommended that FAA develop mechanisms to improve the compliance of FAA program and field offices with existing policies. In response to our recommendations, FAA has, among other things, established a designee quality assurance office to address inconsistent and nonstandard oversight issues among offices. FAA has also developed a survey that will collect information from individuals who recently worked with designees, such as pilots who recently received their license through a designee, to gather information that can be used to continually improve designee programs. To increase FAA’s assurance that its designees are meeting FAA’s safety standards, it will be important for FAA to continue these activities, which are in the early stages of development or implementation, especially as the agency moves to replace certain designee programs with an organizational designation authorization (ODA). ODA would expand the number and types of organizational designees and further transform FAA’s role to that of monitoring the performance of others. In October 2005, FAA issued a final rule that established the ODA program and provides for the phasing out of organizational designees by November 2009. By that time, the current 218 organizational designees will have to apply for and be granted status as an ODA. In August 2006, FAA issued an order that establishes procedures for the ODA program, including the capability to expand the activities that may be delegated.
Under the program, FAA will focus on the performance of organizations rather than the individuals within the organization who carry out the delegated functions. As FAA makes these changes to its designee programs that remove FAA from direct oversight of the individuals performing the delegated activities, it will be important for the agency to adhere to its policy of using designees only for less safety-critical work. It will also be important for FAA to have the data and evaluative processes, which we discuss later in this testimony, to effectively monitor the new program. FAA is also becoming increasingly removed from overseeing airline maintenance. In recent years, in an attempt to reduce costs, airlines have increasingly contracted out maintenance. For example, in 2000, 44 percent of major air carriers’ maintenance expenses were attributable to outsourcing; by 2004, that share had increased to 54 percent. However, FAA’s inspection activities have remained focused on air carriers’ in-house maintenance, according to DOT’s Inspector General. FAA’s enforcement process, which is intended to ensure industry compliance with safety regulations, is another important element of its safety oversight system. FAA assesses legal sanctions against entities or individuals that do not comply with aviation safety regulations. Such sanctions are intended to deter future violations. However, we found that the effect of FAA’s legal sanctions on deterrence is unclear, and that recommendations for sanctions are sometimes changed on the basis of factors not associated with the merits of the case. For fiscal years 1993 through 2003, attorneys in FAA’s Office of the Chief Counsel authorized a 52 percent reduction in the civil monetary penalties assessed (from a total of $334 million to $162 million). FAA officials told us the agency sometimes negotiates lower fines, thereby reducing sanctions to close cases more quickly and reduce FAA attorneys’ caseloads.
Economic literature on deterrence suggests that although negative sanctions (such as fines and certificate suspensions) can deter violations, if violators expect sanctions to be reduced, they may have less incentive to comply with regulations. In effect, it becomes more difficult to achieve the goal of preventing future violations when the penalties for present violations are lowered for reasons not related to the merits of the case. Recent changes that FAA has made to its enforcement program may lead to more uniformly set fines and, thus, potentially less need to revise fines. Prior to September 2005, the initial recommendation to use administrative actions (such as warning notices and letters of correction) or legal sanctions (such as fines or suspension of operating certificates) was based on the judgment of the inspectors. If inspectors recommended a legal sanction, they then consulted FAA’s sanction guidance policy to determine the amount of the proposed penalty. In September 2005, FAA adopted changes to its enforcement program that incorporated system safety risk management principles and established explicit criteria for inspectors to use in making an initial enforcement recommendation. As soon as FAA investigators have gathered sufficient information to categorize the safety risk and the conduct (i.e., whether it was intentional, reckless, or systemic), they prepare a risk statement that describes the hazard created by the act and the potential consequence of that hazard. An example of a risk statement is “an aircraft that operates in Class B airspace without a clearance providing separation from other aircraft could cause a mid-air collision.” The investigators then review the risk statement to determine the severity of the hazard (using a scale of catastrophic, critical, marginal, or negligible) and the likelihood of the worst credible outcome (using a scale of frequent, occasional, or remote).
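The severity and likelihood scales just described feed a decision tool that selects between legal and administrative action. A minimal sketch of such a matrix follows; the particular scoring rule and the mapping of cells to actions are illustrative assumptions, not FAA's actual criteria.

```python
# Hypothetical severity x likelihood decision matrix in the style of the
# enforcement tool described above. The mapping from cells to actions is an
# illustrative assumption, not FAA's actual policy.
SEVERITY = ["catastrophic", "critical", "marginal", "negligible"]  # most severe first
LIKELIHOOD = ["frequent", "occasional", "remote"]                  # most likely first

def recommend_action(severity, likelihood):
    """Map an assessed risk to an enforcement track."""
    # Lower index means more severe / more likely, so a lower combined
    # score represents a higher risk.
    score = SEVERITY.index(severity) + LIKELIHOOD.index(likelihood)
    # Assumed rule: the highest-risk combinations warrant legal sanctions;
    # the rest are handled administratively.
    return "legal sanction" if score <= 2 else "administrative action"
```

Under this assumed rule, for example, a catastrophic hazard judged remote still routes to a legal sanction, while a marginal hazard judged remote would be handled administratively.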
Based on these assessments, investigators apply a decision tool that determines the type of action (legal or administrative) to take against an individual or business. Inspectors no longer have the responsibility of recommending a specific fine level. It is too early to determine whether these changes to the enforcement program have resulted in a more uniform application of penalties and fewer penalty reductions. Effective processes for evaluating FAA’s safety oversight programs, along with accurate nationwide data on those programs, would provide FAA’s program managers and other officials with assurance that the programs are having their intended effect, especially as FAA’s oversight becomes more indirect. Such processes and data are also important because FAA’s workforce is dispersed worldwide—with thousands of staff working out of more than 100 local offices—and because FAA’s use of a risk-based system safety approach represents a cultural shift from its traditional inspection program. The experiences of successful transformations and change management initiatives in large public and private organizations suggest that it can take 5 to 7 years or more until such initiatives are fully implemented and cultures are transformed in a sustainable manner. As a result, evaluation is important to understanding whether the cultural shift has effectively occurred. Our most recent work has shown that FAA had not evaluated its safety programs, and we recommended that the agency establish continuous evaluative processes for the SEP program, designee programs, industry partnership programs, and enforcement program. FAA has made recent progress in implementing some of these recommendations.
For example, FAA has scheduled audits of all its designee programs, to be completed by the end of fiscal year 2009, and established a delegation steering group that first met in August 2006 and will be responsible for agencywide monitoring of the designee programs for compliance with program policies and evaluating the effectiveness of the designee programs. Additionally, as FAA implements its new enforcement policy, it has established procedures to monitor the new policy on a quarterly basis and to recommend process improvements based on the information collected. However, FAA does not plan to evaluate the SEP program because it intends to discontinue the program after December 2007. Yet, FAA’s ability to evaluate its programs is hindered by its lack of useful nationwide data. For example, we found that FAA’s oversight of designees was hampered, in part, by the limited information on designees’ performance contained in the various designee databases. These databases contain descriptive information on designees, such as their types of designations and status (i.e., active or terminated). More complete information would allow the agency to gain a comprehensive picture of whether staff are carrying out their responsibilities to oversee designees. To improve management control of the designee programs, and thus increase assurance that designees meet the agency’s performance standards, we recommended that FAA improve the consistency and completeness of information in the designee databases. To address this recommendation, FAA has established the Designee Integration User Group, which expects to begin work in September 2006 on an automated information tool that will track data on all designees. We also found problems with the accuracy or completeness of data in the SEP and enforcement programs, which FAA has recently taken steps to begin addressing.
FAA’s use of a risk-based system safety approach to inspections requires inspectors to apply data analysis and auditing skills to identify, analyze, assess, and control potential hazards and risks. To effectively identify safety risks, inspectors must be well-trained in the system-safety approach and have sufficient knowledge of increasingly complex aircraft, aircraft parts, and systems. It is also important that FAA’s large cadre of designees is well-trained in federal aviation regulations and FAA policies. FAA has made training an integral part of its safety inspection system by establishing mandatory training requirements for its workforce as well as designees. Although FAA provides inspectors with extensive training in federal aviation regulations; inspection and investigative techniques; and technical skills, such as flight training for operations inspectors, we have identified weaknesses in the training program. The agency provides designees with an initial indoctrination that covers federal regulations and agency policies, and refresher training every 2 to 3 years. We have reported that FAA has generally followed effective management practices for planning, developing, delivering, and assessing the impact of its technical training for safety inspectors, although some practices have yet to be fully implemented. Appendix I describes the extent to which FAA follows effective management practices in each of these four areas. Some examples follow: In developing its training curriculum for inspectors, FAA has developed courses that support changes in inspection procedures resulting from regulatory change or agency initiatives.
On the other hand, FAA develops technical courses on an ad hoc basis rather than as part of an overall curriculum for each inspector specialty—such as air carrier operations, maintenance, and cabin safety—because the agency has not systematically identified the technical skills and competencies each type of inspector needs to effectively perform inspections. In delivering training, FAA has established clear accountability for ensuring that inspectors have access to technical training, has developed a way for inspectors to choose courses that meet job needs and further professional development, and offers a wide array of technical and other courses. However, both FAA and its inspectors recognize the need for more timely selection of inspectors for technical training. To address some of these issues, we recommended, among other things, that FAA ensure that inspector technical training needs are identified and met in a timely manner by systematically assessing inspectors’ technical training needs and better aligning the timeliness of training to when inspectors need the training to do their jobs. In addition, we have identified gaps in the training provided to SEP inspectors, and have recommended that FAA improve inspectors’ training in areas such as system safety and risk management to ensure that these inspectors have a complete and timely understanding of FAA’s policies in these areas. We identified similar competency gaps related to designee oversight. For example, FAA does not require refresher training on how to oversee designees, which increases the risk that inspectors do not retain the information, skills, and competencies required to perform their oversight responsibilities. We recommended that FAA provide additional training for staff who directly oversee designees. FAA has begun to address these recommendations. 
For example, FAA plans to release five Web-based courses by the end of 2006, which will allow the agency to provide training closer to the time that employees need it. Also, FAA has instituted an electronic learning management system that provides for employee input to their own learning plans. FAA has also updated the SEP training course to reflect recent policy changes that emphasize the importance of risk management. Finally, FAA has begun developing a new designee oversight training course that is planned to be ready by the summer of 2007. It is important that FAA’s inspection workforce, designees, and FAA-certified aviation mechanics are knowledgeable about the latest technology changes. While we did not attempt to assess the technical proficiency that FAA’s workforce requires and will require in the near future, FAA officials said that inspectors do not need a substantial amount of technical training courses because inspectors are hired with a high degree of technical knowledge of aircraft and aircraft systems. They further indicated that inspectors can sufficiently keep abreast of many of the changes in aviation technology through FAA and industry training courses and on-the-job training. Similarly, we did not identify any specific gaps in the competencies of designees. However, in its certification program for aviation mechanics, we found that FAA standards for minimum requirements for aviation courses at FAA-approved aviation maintenance technician schools and its requirements for FAA-issued mechanics certificates do not keep abreast of the latest technologies. In 2003, we reported that those standards had not been updated in more than 50 years. We recommended that FAA review the curriculum and certification requirements and update both.
In response to this recommendation, Vision 100—Century of Aviation Reauthorization Act, which was passed December 12, 2003, required FAA to update the standards 1 year after enactment of the law and to conduct reviews and updates every 3 years after the initial update. FAA issued an Advisory Circular in January 2005 that described suggested curriculum changes; however, the agency has not updated the certification requirements for mechanics. FAA faces a number of key safety challenges, including meeting its performance target for commercial air carrier safety, which it will not meet in fiscal year 2006 because of recent fatal accidents. With four fatal commercial air carrier accidents in fiscal year 2006, the agency will not meet its target of 0.018 fatal accidents per 100,000 departures. Moreover, for the past 3 years, FAA did not meet its performance target for severe operational errors, which occur when aircraft do not maintain safe distances in the air; as of June 2006, the agency was slightly below its target level of 4.27 severe operational errors per million activities. In addition, although general aviation accidents have, on the whole, decreased in recent years, general aviation safety is also a concern because of the large number of fatal accidents every year—an average of 334 fatal accidents have occurred annually since 2000. Furthermore, other industry sectors, such as cargo operations and on-demand air ambulances, have poor safety records, as mentioned earlier. It will be important for FAA to develop the appropriate strategies to deal with the challenges posed by these safety records and to continuously monitor safety information to identify trends and early warnings of other safety problems. As described earlier, FAA also faces a number of challenges to several of its oversight programs.
Specifically, FAA’s rapid expansion of ATOS, by transferring about 100 airlines and additional inspectors to the program over about 2 years, will cause shifts in inspector workload that may affect the agency’s ability to oversee other parts of the industry. Furthermore, some activities, such as FAA’s creation of ODAs and the trend for airlines to outsource maintenance, will remove FAA from direct oversight. It will be important for FAA to have robust data and continuous evaluative processes to monitor such activities and program changes in order to ensure they are not having a negative effect on safety. Meeting the challenges posed by recent safety trends and program changes will be complicated by other challenges in human capital management; the acquisition and operation of new safety-enhancing technologies; and new types of vehicles, such as very light jets (VLJ), that may place additional workload strains on FAA inspectors and air traffic controllers. FAA’s ability to oversee aviation safety will be affected by recent and anticipated trends in attrition of its inspectors, compounded in some cases by delays in hiring and increased workload. For example, for fiscal years 2005 through 2010, FAA estimated that over 1,100 safety inspectors who oversee commercial airlines and general aviation will leave the agency, an average loss due to attrition of about 195 inspectors per year. However, FAA’s efforts to hire more inspectors were hindered by a budget situation in 2005 that resulted in a hiring freeze during part of that year. During the hiring freeze, FAA filled safety-critical positions, such as principal inspectors, through internal appointments. As other safety inspectors left, they were not replaced, and their workload was divided among the remaining inspectors.
Concerned about the need for additional safety inspectors, for fiscal year 2006 Congress provided FAA with additional funding over the budget request, with the expectation that the funding would increase the safety staff by 248. This increase in funding would allow for hiring an additional 182 safety inspectors in Aviation Flight Standards (AFS) and an additional 66 inspectors and engineers in Aircraft Certification Service (AIR). However, as a result of a rescission and unfunded pay raises for fiscal year 2006, FAA lacks the funds to hire 67 of the expected 248 new staff. As a result, FAA’s revised hiring target is 139 AFS staff and 42 AIR staff. As of August 2006, FAA had hired an additional 25 AFS and 28 AIR staff. (See fig. 3.) According to FAA, it has a pipeline of applicants and expects to reach its goal of filling the 181 slots by the end of the fiscal year. However, the actual number of aviation safety inspector slots needed is unknown because FAA lacks staffing standards for safety inspectors. The National Academy of Sciences, under a congressional mandate, has just completed a study for FAA to develop staffing standards for inspectors to ensure proper oversight of the aviation industry. During the coming decade, FAA will need to hire and train thousands of air traffic controllers to replace those who will retire and leave for other reasons. FAA estimates it will lose 10,291 controllers, or about 70 percent of the controller workforce, for fiscal years 2006 through 2015, primarily due to retirements. To replace these controllers and to accommodate forecasted increases in air traffic and expected productivity increases, FAA plans to hire a total of 11,800 new controllers over the next 10 years, or 1,180 per year, on average. By the end of fiscal year 2006, FAA expects to hire 930 controllers. As of August 2006, FAA had hired 920. Figure 4 shows the estimated losses each year as well as the number of planned hires.
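The inspector hiring figures above are internally consistent, as a quick arithmetic check confirms (all numbers come from the text; the script below is only an illustration, not FAA data or methodology):

```python
# Fiscal year 2006 inspector hiring figures, as stated in the text.
expected_increase = 248   # safety staff increase Congress expected
unfunded_positions = 67   # positions lost to the rescission and unfunded pay raises
afs_target = 139          # revised Aviation Flight Standards (AFS) hiring target
air_target = 42           # revised Aircraft Certification Service (AIR) hiring target

revised_total = expected_increase - unfunded_positions
print(revised_total)             # 181, the "181 slots" FAA expects to fill
print(afs_target + air_target)   # 181, matching the revised total
```

That is, 248 expected staff minus the 67 unfunded positions yields the 181 slots, which the revised AFS and AIR targets (139 + 42) account for exactly.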
Recent events may exacerbate the staffing situation. New data indicate that controllers are retiring at a faster rate than FAA anticipated. In its 2004 workforce report, FAA projected 341 retirements for fiscal year 2005; 465 controllers actually retired—36 percent more than FAA’s estimate. In addition, a new contract with the air traffic controllers union was recently implemented by FAA after lengthy negotiations. Under this new contract, most current air traffic controllers will continue to receive their existing base salaries and benefits, which may remove a financial incentive to continue working past their retirement eligibility date, while newly hired controllers will be paid at lower wage rates, which may affect FAA’s ability to attract new controllers. FAA has maintained that this contract will result in significant cost savings, freeing up resources for other critical agency needs. It is too soon to know what effect, if any, the new contract may have on retirement decisions. In addition to the challenge of hiring large numbers of controllers, FAA will face a challenge in training its new hires expeditiously so that it has the right number of controllers in the right facilities when they are needed. According to FAA, its ability to train the new controllers depends on several factors, including hiring a relatively even number of controllers each year, reducing the time it takes to hire a controller, and reducing the duration of training. FAA estimates that because of the long training time, it must hire en route controllers an average of 3 to 5 years in advance of when they are needed. FAA is taking actions to address these issues. For example, in line with our recommendation, a recent change to the training program allows individuals who complete collegiate requirements under the Air Traffic Collegiate Training Initiative to bypass the first 5 weeks of initial FAA Academy training required for controllers.
FAA also faces the challenge of ensuring that control facilities have adequate staffing based on their unique traffic demands and the accuracy of FAA’s retirement forecast. Historically, FAA has computed staffing standards, which are the number of controllers needed on a systemwide basis, but distribution of these totals to the facility level was a negotiated process. The staffing standards did not take into account the significant differences in complexity and workload among FAA’s 300 terminal and en route control facilities, which could lead to staffing imbalances. FAA has begun developing and implementing new staffing standards that use an algorithm incorporating traffic levels and traffic complexity at the facility level to determine the number of controllers needed, according to an FAA official. As FAA further refines its process for determining controller staffing needs, the ultimate objective is to assess traffic level and complexity on a sector-by-sector basis to develop more accurate controller staffing requirements. To enhance runway safety, FAA intends to rely on new technologies—beginning with the Airport Movement Area Safety System (AMASS) and Airport Surface Detection Equipment Model X (ASDE-X)—that are expected to reduce runway accidents. AMASS and ASDE-X are instrumental in mitigating runway incursions and operational errors. However, FAA faces challenges—such as a reduced number of airports scheduled to receive the equipment, schedule delays, and cost increases—that affect its reliance on the technologies. FAA’s original plans called for 34 airports to receive AMASS and 35 airports to receive ASDE-X (see app. II). In total, 59 airports were to receive one or both technologies, but this number was reduced to 44 in August 2006 after FAA canceled plans to deploy ASDE-X at 15 of the originally scheduled airports.
FAA plans to take these 15 systems and use them to upgrade certain airports that already have AMASS, based on the rationale that maximum benefit is achieved by deploying ASDE-X to airports with larger traffic counts or more complex operations. This decision leaves 15 airports (see fig. 5) that were supposed to receive ASDE-X without either advanced technology system. Since the anticipated future increase in air traffic from commuter airlines and very light jets is likely to occur at smaller airports that lack the advanced technologies, it will be important for FAA to periodically re-evaluate its deployment strategy. In addition to reducing the number of facilities selected to receive the newer technology, FAA has revised the cost estimate and extended the implementation dates for the ASDE-X program (see fig. 6). The 35 ASDE-X systems were originally scheduled to be implemented by 2007. As of August 2006, FAA had moved that date to 2011. FAA estimates the total facilities and equipment cost of the ASDE-X program at about $550 million, which is approximately $40 million more than we reported in 2005. The costs of these new technologies mean that they may never be deployed at all airports; therefore, it will be important for FAA to continue prioritizing and maximizing its resources. To ensure a national airspace system that is safe, efficient, and capable of meeting growing demand for air transportation, which is expected to triple by 2025, the Joint Planning and Development Office (JPDO) was created within FAA to plan for and coordinate the longer-term transformation to the “next generation air transportation system” (NGATS). JPDO was created in 2003 to develop an integrated plan for NGATS and to include in the plan, among other things, a description of the demand and required performance characteristics of the future system, as well as a high-level, multiagency road map and concept of operations for the future system.
FAA and JPDO face the challenge of adequately involving stakeholders in the development of NGATS to ensure that the system meets users’ needs, especially air traffic controllers, who will be end users of the new technology and responsible for using it to maximize safety and efficiency. In the past, air traffic controllers were permanently assigned to FAA’s major system acquisition program offices and provided input into air traffic control modernization projects. In June 2005, FAA terminated this arrangement because of budget constraints. According to FAA, it now plans to obtain the subject-matter expertise of air traffic controllers or other stakeholders as needed in major system acquisitions. It remains to be seen whether this approach will be sufficient to avoid problems such as those FAA experienced when inadequate stakeholder involvement in the development of new air traffic controller workstations (known as the Standard Terminal Automation Replacement System (STARS)) contributed to unplanned work, significant cost growth, and schedule delays. The changing aviation landscape poses further challenges for FAA. It is expected that within the next few years several hundred VLJs will be in operation. FAA estimates that if 2 percent of airline passengers switch to VLJs, air traffic controllers will have to handle three times more take-offs and landings than they do currently. Additionally, the industry predicts there may be as many as 5,000 to 10,000 VLJs operating in the national airspace system by 2020. VLJ manufacturers are reporting advance sales of thousands of these new jets; their customers include air taxis, charter operators, and private owners. In July 2006, FAA granted the first provisional certificate for a VLJ to Eclipse Aviation Corporation. The provisional certificate allows existing planes to be flown, but new ones cannot be delivered to customers until FAA grants a type certificate. According to Eclipse Aviation, it has orders for over 2,350 aircraft.
DayJet, which provides on-demand jet service, expects to be operating 50 Eclipse VLJs by the end of 2007. In September 2006, FAA granted the first type certificate to Cessna Aircraft Company. (See fig. 7.) Five other companies are in the process of being issued certificates by FAA. If this sector expands as quickly as expected, FAA inspectors could face workload challenges in expeditiously issuing and monitoring certificates. In addition, air traffic controllers could face the challenge of further congested airspace, especially at and near smaller airports, where VLJs are expected to be prevalent because of their smaller size and shorter runway requirements. Unmanned aerial vehicles (UAV) are another emerging sector that will add to FAA’s workload and may require additional FAA expertise. While historically UAVs have been used primarily by the Department of Defense in military settings outside the United States, there is growing demand to operate UAVs domestically in the national airspace system. (See fig. 8.) Federal agencies such as the Customs and Border Protection Service and the Federal Emergency Management Agency, as well as state and local law enforcement agencies, are interested in UAVs for purposes such as border security, search and rescue, firefighting, and other law enforcement and homeland security initiatives. Some of these activities are taking place today. For example, Customs conducts UAV surveillance along the border with Mexico. UAVs are also an emerging sector of the commercial aviation industry, and possible commercial uses include fire detection and firefighting management, digital mapping, communications and broadcast services, and environmental research and air quality management. Currently, few regulations or guidelines exist for UAVs or UAV-related technology. FAA issues a certificate of authorization for the operation of a UAV, and the airspace is restricted during the period of operation.
In 2006, FAA issued 62 certificates of authorization for UAVs, and another 35 applications are pending review. FAA is receiving numerous inquiries from federal agencies, and from local, county, and state governments, about how to operate UAVs in the national airspace system. FAA has established an Unmanned Aircraft Program Office, which is responsible for developing the regulatory framework and plan for the safe integration of UAVs into the national airspace system. FAA faces the challenge of working with industry to develop consensus standards for command-and-control redundancies, in case there is a disruption in communication with the UAV, and for detect-and-avoid capabilities, so that UAVs can sense and avoid other aircraft. Such standards will be necessary before UAVs can be routinely integrated into the national airspace system. Until UAVs are completely integrated into the national airspace system, FAA will continue to evaluate each flight on a case-by-case basis, adding to the agency’s workload. Space tourism is an additional emerging sector to which FAA is beginning to respond. Tourist launches are expected to take place at inland locations and may have more impact on the national airspace system than previous unmanned commercial space launches, which occurred at federal launch sites near or over oceans. While UAVs pose a learning curve for safety inspectors, engineers, and air traffic controllers, space tourism launches pose a learning curve for FAA’s commercial space engineers, who are responsible for licensing and monitoring commercial space launches and nonfederal launch sites (called spaceports). The prospect for commercial space tourism materialized in 2004 when SpaceShipOne, developed by Scaled Composites, flew to space twice, achieving a peak altitude of about 70 miles, to win the Ansari X Prize. Several entrepreneurial launch companies are planning to start taking paying passengers on suborbital flights within the next few years.
Virgin Galactic intends to enter commercial suborbital space flight service around 2008, launching from a spaceport in New Mexico, and, according to the company, plans to carry 3,000 passengers over 5 years; 100 individuals have already paid the full fare of $200,000. Several other companies, including former Ansari X Prize competitors, continue to develop their vehicles for space tourism. Several spaceports are being developed to accommodate anticipated commercial space tourism flights and are expanding the nation’s launch capacity. As of August 2006, the United States had seven federal launch sites and seven spaceports, and an additional eight spaceports had been proposed (see fig. 9). We will be issuing a report later this year on FAA’s oversight of commercial space launches. Maintaining the U.S. position as a global leader in aviation safety calls for robust participation in the setting of international safety standards. The International Civil Aviation Organization (ICAO), a United Nations organization, develops standards and recommended practices for aviation safety and security for 188 member states. In 2002, the Commission on the Future of the United States Aerospace Industry reported that the United States had not devoted enough resources to ICAO and was, therefore, losing its position as the de facto standard setter. Furthermore, the position of U.S. ambassador to ICAO, which was filled earlier this year, had been vacant for more than a year, which may have limited U.S. influence on international aviation issues. To ensure that qualified U.S. applicants apply for U.S. positions at ICAO, FAA has supported a number of activities, including outreach efforts, incentive pay programs, and a fellowship program that sends FAA employees to work at ICAO for up to 12 months. However, as of December 2005, FAA had filled only 13 of the 31 positions allocated to the United States at ICAO.
FAA faces difficulty in filling the allocated positions for reasons beyond its control. For example, while FAA can recruit applicants, it does not make the final hiring decisions. With unfilled positions at ICAO, it will remain important for FAA to continue these efforts to enhance the presence of the United States in the international aviation community. For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this testimony include Teresa Spisak, Jessica Evans, Colin Fallon, David Hooper, and Rosa Leung. Aviation Safety: FAA’s Safety Oversight System Is Effective but Could Benefit from Better Evaluation of Its Programs’ Performance. GAO-06-266T. Washington, D.C.: November 17, 2005. Aviation Safety: System Safety Approach Needs Further Integration into FAA’s Oversight of Airlines. GAO-05-726. Washington, D.C.: September 28, 2005. Aviation Safety: FAA Management Practices for Technical Training Mostly Effective; Further Actions Could Enhance Results. GAO-05-728. Washington, D.C.: September 7, 2005. Aviation Safety: Oversight of Foreign Code-Share Safety Program Should Be Strengthened. GAO-05-930. Washington, D.C.: August 5, 2005. Aviation Safety: FAA Needs to Strengthen the Management of Its Designee Programs. GAO-05-40. Washington, D.C.: October 8, 2004. Aviation Safety: Better Management Controls Are Needed to Improve FAA’s Safety Enforcement and Compliance Efforts. GAO-04-646. Washington, D.C.: July 6, 2004. Aviation Safety: Information on FAA’s Data on Operational Errors at Air Traffic Control Towers. GAO-03-1175R. Washington, D.C.: September 23, 2003. Aviation Safety: FAA Needs to Update the Curriculum and Certification Requirements for Aviation Mechanics. GAO-03-317. Washington, D.C.: March 6, 2003. Aviation Safety: FAA and DOD Response to Similar Safety Concerns. GAO-02-77. Washington, D.C.: January 22, 2002.
Aviation Safety: Safer Skies Initiative Has Taken Initial Steps to Reduce Accident Rates by 2007. GAO/RCED-00-111. Washington, D.C.: June 30, 2000.

The U.S. commercial aviation industry has had an extraordinary safety record in recent years. However, expected increases in air traffic—including the introduction of new vehicles into the national airspace, such as unmanned vehicles and very light jets—and human resource issues present challenges that have the potential to strain the existing safety oversight system. GAO's testimony focuses on these questions: (1) How is the Federal Aviation Administration (FAA) ensuring that the areas of highest safety risk are addressed? (2) How is FAA ensuring that its staff maintain the skills and knowledge to consistently carry out the agency's oversight programs? and (3) What are the key safety challenges facing FAA? This statement is based on our recent reports on FAA's inspection oversight programs, industry partnership programs, and enforcement and training programs. It is also based on interviews with FAA and relevant industry officials. FAA's aviation safety oversight system includes programs that focus on identifying and mitigating risks through a system safety approach and by leveraging resources, but as FAA is still developing evaluations for some of these programs, the extent to which they are achieving their intended effects remains unclear.
FAA's system safety approach for overseeing airlines--through the Air Transportation Oversight System (ATOS) and Surveillance and Evaluation Program (SEP)--uses inspection staff efficiently by prioritizing workload based on areas of highest risk and ensuring that corrective actions have been taken. However, recent and planned changes that would move inspections of about 100 airlines from SEP to ATOS will shift inspector workload and might affect FAA's capability to oversee the industry. FAA also concentrates its limited staff resources on the most safety-critical functions and through its designee programs delegates other, less critical activities to designees. Designees perform about 90 percent of certification-related activities, and thus allow FAA to better leverage resources. GAO's recent work found some weaknesses in FAA's system safety approach and recommended that FAA develop effective evaluative processes and accurate nationwide data on its safety oversight programs to address these weaknesses so that program managers and other officials have assurance that the programs attain their intended effect. FAA has begun implementing those recommendations but does not plan to evaluate SEP, which it intends to discontinue after December 2007. Training--including mandatory training requirements for FAA's workforce as well as designees--is an integral part of FAA's safety oversight system. GAO has reported that FAA has generally followed effective management practices for planning, developing, delivering, and assessing the impact of its technical training for safety inspectors, although some practices have yet to be fully implemented. However, several actions could improve the results of its training efforts. 
For example, FAA develops technical courses on an ad hoc basis rather than as part of an overall curriculum for each type of inspector, such as inspectors of operations or cabin safety, because the agency has not systematically identified the technical skills and competencies each type of inspector needs to effectively perform inspections. FAA has recognized the need to improve its training program in this and other areas. FAA faces several key safety challenges, including not meeting its performance target for commercial air carrier safety this year because of recent fatal accidents. Further, FAA's ability to oversee aviation safety will be affected by recent and anticipated trends in inspector and air traffic controller attrition. Also, FAA intends to enhance runway safety by relying on new technologies that are expected to reduce runway accidents. However, schedule delays and cost increases challenge FAA's ability to deploy this technology. Finally, new types of aviation vehicles are changing the aviation industry and will require new areas of expertise for FAA's inspectors and controllers. |
The DFC Support Program’s two major goals are to: 1. establish and strengthen collaboration among communities, private non-profit agencies, and federal, state, local, and tribal governments to support the efforts of community “coalitions” to prevent and reduce substance abuse among youth; and 2. reduce substance abuse over time among youth and adults by addressing the factors in a community that increase the risk of substance abuse and promoting the factors that minimize the risk of substance abuse. Coalitions receiving grant funds through the program are obliged to make progress toward four core outcome measures. These relate to the prevalence of drug use among youth in their communities over the past 30 days, youth’s perceptions of the risk of drug use, and the separate perceptions of parental and peer disapproval of drug use—each of which is discussed later in the report. Under the DFC Support Program, ONDCP provides federal grants to coalitions that have established sustainable and accountable anti-drug efforts involving every major sector of a community, such as law enforcement and schools. For the purposes of this report, we refer to coalitions as grantees. According to ONDCP officials, a DFC coalition is a collaboration, established through a locally based arrangement for cooperation, among groups such as parents and businesses who agree to work together toward a common goal of building a safe, healthy, and drug-free community. DFC grants are intended to support community-based coalitions and the activities they carry out. Funds are granted to the coalition, not a particular sector or sector member. DFC coalitions are broad-based groups consisting of representatives of youth, parents, businesses, the media, law enforcement, religious or other civic groups, health care professionals, and other organizations involved in reducing substance abuse in their communities, especially among youth—as illustrated in figure 1.
ONDCP funds four types of DFC Support Program grants: (1) New; (2) Continuation; (3) Mentoring; and (4) Mentoring Continuation. For the purposes of this review, we focused on New and Continuation DFC grants, as they constitute the majority of grants awarded. 1. New grants represent those openly competing for their 1st or 6th year of DFC funding. 2. Continuation grants represent annual “in-cycle” grants for years 2 through 5, or 7 through 10, of DFC funding. 3. Mentoring grants represent the first in a 2-year grant awarded to existing coalitions to support their work to create new DFC coalitions. 4. Mentoring Continuation grants represent the second year of the 2-year award. Each new and continuation grant awards up to $125,000 per fiscal year, and mentoring grants are limited to $75,000 per fiscal year. By statute, eligible coalitions may receive a new grant for 1 year and then apply for a 1-year continuation grant in each of the subsequent 4 years—for a total first-round grant period of 5 years. After the first 5 years, grantees can apply again for a second 5-year round—the maximum allowable term is 10 years—and the 6th year begins with another new grant. Grantees can apply for continuation grants again in each of the 4 years thereafter. According to ONDCP, it bases decisions on whether or not to continue a grant on the extent to which the coalition has (1) made satisfactory progress in its efforts to reduce youth substance abuse and (2) complied with all the terms and conditions of its award.
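The year-by-year grant cycle described above reduces to a simple mapping from funding year to grant type. The sketch below is based solely on the rules in the text (New grants in years 1 and 6, Continuation grants in years 2 through 5 and 7 through 10, 10-year maximum); the function name `dfc_grant_type` is illustrative, not part of the program:

```python
def dfc_grant_type(year: int) -> str:
    """Return the DFC grant type ("New" or "Continuation") for a funding year.

    Years 1 and 6 are openly competed New grants; years 2-5 and 7-10 are
    annual in-cycle Continuation grants; 10 years is the statutory maximum.
    """
    if not 1 <= year <= 10:
        raise ValueError("the maximum allowable DFC grant term is 10 years")
    return "New" if year in (1, 6) else "Continuation"

# Walk the full 10-year cycle.
for y in range(1, 11):
    print(y, dfc_grant_type(y))
```

For example, a coalition in its 6th year of funding competes openly for a New grant, while one in its 7th year applies for an in-cycle Continuation grant.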
To meet the statutory requirements of the DFC Support Program for initial eligibility—years 1 and 6—a coalition must:

submit an application to the ONDCP Administrator;

consist of one or more representatives from each of the 12 sectors—at least one representative per sector—as illustrated in figure 1;

demonstrate that the representatives of the coalition have worked together on substance abuse reduction initiatives for at least 6 months (prior to applying);

demonstrate substantial participation from volunteer leaders in the community;

have as its principal mission the reduction of substance abuse in a comprehensive and long-term manner, with a primary focus on youth in the community;

describe and document the nature and extent of the substance abuse problem in the community;

provide a description of the substance abuse prevention and treatment programs and activities underway at the time of the grant application and identify substance abuse programs and service gaps in the community;

develop a strategic plan to reduce substance abuse among youth and work to develop a consensus regarding the priorities of the community to combat substance abuse among youth;

establish a system to measure and report outcomes;

conduct an initial benchmark survey of drug use among youth and provide assurances that the entity conducting the evaluation has sufficient experience in gathering data related to substance abuse among youth or in evaluating the effectiveness of community anti-drug coalitions; and

demonstrate that the coalition is an “ongoing concern” by demonstrating that it has established itself as an appropriate legal entity or organization that receives financial support from non-federal sources and has a strategy to solicit substantial financial support from non-federal sources after the expiration of the grant term.

For additional information on the statutory requirements for the DFC Support Program, see table 2 in appendix I.
In addition to meeting statutory eligibility requirements, grantees must also comply with DFC Support Program terms and conditions. For example, the program requires that grantees develop a comprehensive 12-Month Action Plan that includes an appropriate strategy for each drug they will be addressing—as well as a mechanism for demonstrating their progress along the way. Further, two grantees may not serve the same zip code unless both have clearly demonstrated a plan for collaboration. For more information on the additional program requirements for the DFC Support Program, see table 3 in appendix I. In fiscal year 2015, the DFC Support Program’s appropriated budget was approximately $93.5 million, representing just under a quarter of ONDCP’s total budget of about $375 million. As table 1 shows, the total number of DFC Support Program grants increased each fiscal year from 2013 to 2015. According to ONDCP, since the passage of the Drug-Free Communities Act in 1997, the DFC Support Program has funded more than 2,000 coalitions and mobilized nearly 9,000 community volunteers across the country. ONDCP and SAMHSA have operated the grant program through an inter-agency agreement since 2005, which they update annually. Specifically, ONDCP oversees the strategic planning, semi-annual progress reporting, and funding of the DFC Support Program, while SAMHSA conducts day-to-day administration, such as interacting with grantees on a regular basis and reviewing their activities. SAMHSA, as directed by ONDCP, also awards a grant to the Community Anti-Drug Coalitions of America (CADCA), which provides technical assistance and training to grantees in order to enhance their capacity. For example, CADCA trains grantees in effective community problem-solving strategies and teaches them how to assess their local substance abuse-related problems and develop responsive action plans. The DFC Support Program operates on a yearly grant cycle, working through a given calendar year.
The DFC grant life cycle follows a typical federal grant life cycle, as shown in figure 2. ONDCP and SAMHSA require grantees to submit semi-annual progress reports through an ONDCP system called DFC Management and Evaluation (DFC Me). These reports contain descriptions of the activities the grantees conducted in supporting the program’s two broad goals, as well as their progress against the program’s four core measures: 1. Past 30-Day Prevalence of Use—youth who reported use of alcohol, tobacco, marijuana, or illicit use of prescription drugs at least once in the past 30 days. 2. Perception of Risk—youth who reported that the use of alcohol, tobacco, marijuana, or illicit use of prescription drugs is harmful. 3. Perception of Parental Disapproval—youth who reported their parents feel the regular use of alcohol, tobacco, and marijuana, or illicit use of prescription drugs is wrong or very wrong. 4. Perception of Peer Disapproval—youth who reported their friends thought it would be “wrong or very wrong” for them to drink alcohol, engage in any tobacco or marijuana use, or illicit prescription drug use. See appendix II for additional details on ONDCP’s core measures. As we have previously reported, conducting grant management processes like those illustrated above, in accordance with internal control standards, statutory requirements, and leading practices for collaborating agencies is essential for achieving program outcomes. ONDCP’s and SAMHSA’s efforts to jointly manage the DFC Support Program are consistent with relevant, key collaboration practices. Specifically, in our prior work, we have found that collaboration is enhanced when partners follow certain key practices, such as (1) defining a common outcome, (2) agreeing on roles and responsibilities, and (3) establishing compatible policies, procedures, and other means to operate across agency boundaries. 
We have also recognized in prior work that collaborating agencies should work together to define and agree on their respective roles and responsibilities, including deciding how the collaborative effort will be led, designating a lead body, establishing oversight for the initiative, and employing mechanisms to implement their efforts. DFC grantees have engaged in a range of activities, including drug abuse education campaigns and efforts to enhance enforcement, and they report on these activities in their semi-annual progress reports to SAMHSA. ONDCP, through its contractor, routinely reviews the nature and scope of these activities to ensure they fit within one of the Seven Strategies for Community Change. For example, according to ONDCP, one of these seven strategies is "providing information." As ONDCP reported, from February 2014 through July 2014, to execute this strategy, grantees held 7,338 face-to-face information sessions on topics such as the consequences of youth substance abuse and the importance of drug abstinence, reaching almost 138,000 adults and more than 156,000 youth. Figure 3 illustrates the DFC Support Program's overarching goal, its specific program goals, ONDCP's seven strategies for goal attainment, and examples of grantees' activities aligned with each. Our review of grant files from 30 grantees also revealed diversity in the specific activities conducted within the seven strategic categories. For example, grantees reported conducting alcohol prevention outreach activities and events for parents; implementing a county-wide marijuana prevention media campaign; and implementing prescription drug take-back, or collection, events. We also spoke with 10 of the 30 grantees to discuss, among other things, how they are using their funds to implement one or more of the seven program strategies. Some examples include the following.
Providing Support: To provide support to youth working on prevention and education efforts, a grantee sponsors a "Yearly Youth Summit," which is organized in part by its 20-member youth coalition. The youth coalition selects the topics and guest speakers for the summit and invites up to 75 peers, who discuss and brainstorm ideas for activities and ways to address a specific substance abuse problem at their school. Additionally, seating at tables is assigned to encourage communication outside an individual's immediate peer group.

Enhancing Skills: To enhance the skills of those in the community to be on alert for and vigilant against potential drug abuse, one grantee sponsored a session for local realtors on precautions to take when preparing for open houses, warning realtors that leftover prescription drugs in medicine cabinets present the potential for abuse among those walking through the home for sale.

Enhancing Access/Reducing Barriers: To reduce cultural barriers, one grantee developed signs emphasizing the legal purchase age for alcohol in the multiple languages spoken across its community, as seen in figure 4. The grantee then provided these signs to local business owners.

Providing Information: To provide greater information to parents on what drug prevention steps they could take, one grantee chose to address the challenges parents may face when hosting teenage parties at their home. Specifically, this grantee worked with its youth group to identify house parties as a concern in the community and provided parents with information on the consequences of providing alcohol and of youth alcohol consumption. The grantee used the slogan "Be my Parent, not my Bartender" to remind parents of their children's needs, which it told us parents found particularly compelling.
Changing Consequences: To change the consequences for adults who host underage drinking parties, another grantee launched an anonymous tip line called "QuikTip," which led to tips coming in daily to the local 911 call center. In one instance, a New Year's underage drinking party was reported through the QuikTip line, and law enforcement was dispatched to the party.

Modifying/Changing Policies: To modify or enforce policies among its local businesses, one grantee partnered with the District Attorney's office to create a task force that included business owners, local policy makers, and youth coalition members. This task force made it a priority to ensure that all clerks selling alcoholic beverages were taking mandatory beverage service training, and it also worked to gain buy-in among task force participants for an increase in alcohol policy compliance checks. The grantee drew upon its 18- to 20-year-old members to assist in testing store clerks' adherence to underage drinking laws.

Changing Physical Design: To change the appearance of alcoholic beverage packages, another coalition reported that its youth group created stickers as part of a "Keep It Legal" campaign. The group designed and helped place approximately 540 stickers, containing a message about the legal drinking age and the consequences of alcohol consumption, on alcoholic items sold throughout the community, as shown in figure 5.

ONDCP and SAMHSA have developed standard operating procedures to collect relevant information from new applicants and current grantees and to document grantees' compliance with eligibility requirements in governing statutes. Per the interagency agreement (IAA) between ONDCP and SAMHSA, SAMHSA is charged with collecting, analyzing, and reporting to ONDCP on the status of grantees' compliance. The agencies require substantial documentation from grantees in their initial and continuing grant applications.
Some examples include the following:

- A grant application, which includes a grantee's mission statement, a description of its projects, and a 12-month action plan outlining its focus areas and planned activities;
- Detailed budget narrative and budget sections that accompany the application, which, in part, identify the sources of non-federal matching funds that grantees receive to meet the DFC funds matching requirement;
- Resumes and job descriptions for key personnel, such as the program director and project coordinator;
- Coalition involvement agreements from representatives of all 12 sectors to indicate compliance with the statutory requirement;
- Two sets of coalition meeting minutes per year, with attendees listed, to demonstrate coalition membership involvement;
- Submission of documentation and verification of the organization's non-profit status, such as maintaining a 501(c)(3) status;
- Semi-annual progress reports through DFC Me to provide an accurate and meaningful statistical representation of youth surveyed on the core measures and of activities in each of the geographical areas served by the coalition;
- Quarterly and annual Federal Financial Reports that demonstrate compliance with the grant's purposes by accurately documenting its funds transactions;
- Sustainability plans, required for grantees in year 3 or year 7 of the grant life cycle, to show progress toward self-sustainment after completing the DFC Support Program; and
- A Letter of Mutual Cooperation between coalitions, identifying the zip codes served (if applicable) and outlining the grantees' collaborative efforts.

According to ONDCP and SAMHSA officials, their procedures require that all of this documentation be stored and safeguarded in SAMHSA's databases to preserve the record of the grant.
The procedures also require SAMHSA grant management officials to review these grantee files to continually monitor grantees' compliance with eligibility criteria, and require SAMHSA to report to ONDCP on any grantee compliance issues. According to the officials, collecting these documents and ensuring that files are complete and accurate also assists with monitoring grantees' progress in implementing their stated strategies and action steps. In addition to the documentation required to help ensure grantees' statutory compliance, ONDCP and SAMHSA developed specific DFC Support Program terms and conditions, and they have procedures in place to help ensure grantees are meeting these requirements. For example, one term and condition of the grant is that grantees must agree in writing to comply with federal grant requirements through the use of a checklist from the U.S. Department of Health and Human Services, SAMHSA's parent agency. ONDCP and SAMHSA then developed a process to track submission of this checklist, which likewise applies to the submission of all other required documents. Specifically, SAMHSA is to monitor grantees to ensure receipt of each document, and when a grantee has not submitted one, SAMHSA's grant management specialists are to reach out to request it again. If grantee noncompliance becomes an issue, SAMHSA is to report the concern to ONDCP during routine meetings and by documenting it in a tracking spreadsheet for ONDCP's review. In addition to the procedures in place to help monitor grantees' compliance with statutory provisions and other terms and conditions of the grant, the agencies established performance measures and procedures to obtain and analyze grantee performance data. These procedures include (1) a system to collect performance measure data from grantees, and (2) a contract with an evaluation agency to analyze and summarize those data.
Specifically, ONDCP, through its contractor, conducts and issues a national evaluation report based on data collected by grantees. The evaluation report, published annually, provides information on two sets of data: (1) the activities grantees reported, and (2) outcome data reflecting change on the DFC program's four core measures, using both qualitative and quantitative data collection and analysis. See appendix II for additional details on the four core measures and trends in grantees' outcomes. Based on our review and analysis of grant files encompassing the more than 20 types of required documents from 30 grantees, we found that SAMHSA does not consistently follow documentation and reporting procedures to ensure grantees' compliance with both the statutory provisions and established grant program terms and conditions. Specifically, we found that SAMHSA followed all processes for ensuring that initial applicants had submitted the required documentation before awarding these applicants their initial grant funding. However, SAMHSA was less consistent in its adherence to procedures for confirming documentation for grantees in later years of their grants. We also found that SAMHSA did not effectively track grantee compliance and, therefore, had not been accurately reporting to ONDCP on the status of grantees' required documentation. To conduct our analysis, we reviewed the official grant files for 30 randomly selected grantees that received DFC Support Program funding in fiscal year 2015. Since, in any given year, some grantees will be first-time grant recipients and others will be more tenured, we widened our review to include the files for fiscal years 2013 and 2014 for any selected grantee in continuation status. Such a mix of grantees in different years of funding allowed us to see variation in the content and depth of the grant documents that ONDCP and SAMHSA require to be on file.
For example, all grantees, regardless of their tenure in the program, are required to have progress reports on file. In contrast, only grantees who recently applied for year 3 or year 7 funding, for example, are required to have sustainability plans. We found that SAMHSA adhered inconsistently to its documentation and reporting procedures at different stages of grantees' tenure. During our file review, we found that the 22 grantees in our sample that should have had a complete initial application package in the official grantee file generally had one. However, we also found that while all 30 grantees in our sample were required to submit progress reports every 6 months, at least one report was missing for 26 grantees. Specifically, for the 30 grantees, a total of 128 semi-annual reports should have been submitted and filed in the official grantee files, but 83 were missing. According to SAMHSA officials, these reports were missing because grantees lacked access to the designated databases where progress reports are uploaded. Specifically, SAMHSA's database, the Coalition Online Management and Evaluation Tool (COMET), went offline in December 2014 and was replaced by DFC Me, which became operational in February 2016. SAMHSA officials said they did not request progress reports from grantees during the approximately 18-month transition period between COMET and DFC Me. However, they were aware that ONDCP's contractor had provided DFC grantees with an electronic template that allowed the grantees to collect data and document their progress as required, outside of the designated databases. In February 2016, ONDCP requested that all grantees input their progress report information into DFC Me, including information covering the transition period. Prior to that date, however, SAMHSA staff did not request this information from grantees and therefore did not have it to conduct their monitoring efforts.
SAMHSA officials acknowledged that their staff did not conduct the necessary follow-up to ensure that the files were stored in the updated system for record-keeping purposes. SAMHSA officials also acknowledged that they were not aware of the number of reports from grantees that were omitted from the record. Of the 26 grantees that should have had continuation application packages in their official grantee files, 9 were missing the required Federal Financial Report. The Federal Financial Report is a document in which the grantee details its expenditures, disbursements, and cash receipts. SAMHSA requires grantees to submit the Federal Financial Report within 90 days of the fiscal year's end. SAMHSA officials acknowledged that the Federal Financial Reports were likely missing because the grantees did not provide them and SAMHSA staff did not follow up to obtain them; when we discussed this finding with the officials, they were unaware of the omission. Of the 18 grantees that should have had sustainability plans in their official grantee files, 14 were missing the required plans. These sustainability plans are to outline how the grantee intends to maintain the resources necessary to achieve its long-term goals and continued progress after exiting the DFC Support Program. According to SAMHSA officials, 12 of the 14 grantees were missing sustainability plans because SAMHSA staff had not uploaded them to the system of record; the remaining 2 grantees never submitted them, and the grants management specialist did not follow up to obtain the documents. SAMHSA officials were unaware of these omissions. SAMHSA's policies and procedures require that the official award file contain the formal, complete record of the history of an award, including documents that support verification of statutory eligibility. SAMHSA officials also told us that staff are responsible for ensuring that all files are stored in a shared database system, in accordance with SAMHSA policy.
In addition, the IAA between ONDCP and SAMHSA outlines that SAMHSA is responsible for ensuring grantees have submitted all required documents, following up as needed, and reporting to ONDCP on grantees' status. According to ONDCP and SAMHSA officials, SAMHSA routinely relayed reports to ONDCP on grantees' status. However, SAMHSA officials acknowledged that they were unaware that files were missing, which calls into question the effectiveness of their program monitoring. It also calls into question the accuracy and validity of the grantee status reports that SAMHSA provided to ONDCP. In particular, ONDCP officials told us they were never made aware that any required grantee documents were not included in the official record. SAMHSA officials said that in March 2016, to strengthen their grants administration process, they instituted a new internal review process in which they randomly select 50 grant files per month from their various grant programs to assess the completeness and accuracy of grantees' documentation. As part of this internal review process, SAMHSA has also taken other steps, including enhancing training, developing policies and procedures, and implementing a step-by-step guide and training for grant managers on entering information into the official grant file. This review process did not initially include the DFC Support Program; however, in November 2016, as our audit work was nearing completion, officials said they were planning to expand its scope to incorporate all of SAMHSA's grant programs. While officials provided information on how they planned to address particular deficiencies on a case-by-case basis, they did not explain how they planned to ensure systemic remediation of any issue found repeatedly, or provide a time frame for implementing the changes.
According to Standards for Internal Control in the Federal Government, managers are to (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations; (2) determine the proper actions to take in response to these findings and recommendations; and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention. Ensuring this new review process is fully implemented in a sustainable manner will be critical for SAMHSA as it aims to strengthen DFC grants monitoring. Further, developing an action plan with time frames for addressing any deficiencies it finds through its reviews, and making systemic changes to mitigate deficiencies on a prospective basis, will also help its management of the DFC program. Standards for Internal Control in the Federal Government also states that all transactions and other significant events need to be clearly documented and should be readily available for examination. For the DFC Support Program, such transactions include SAMHSA's collection, storage, and review of grantee documentation, as well as SAMHSA's documentation and reporting of grantee status to ONDCP. Additionally, control activities, that is, policies, procedures, and mechanisms, should enforce management's directives and help ensure that actions are taken to address risks, such as the risk that DFC grantees fail to meet statutory and program requirements. Relatedly, internal controls for monitoring should generally be designed to assure that ongoing monitoring occurs in the course of normal operations, including regular management and supervisory activities, comparisons, and reconciliations. Deficiencies found during ongoing monitoring or through separate evaluations should be communicated to the individual responsible for the function, and serious matters should be reported to top management.
Control activities like these should be an integral part of an entity's accountability for stewardship of government resources. SAMHSA's lack of follow-up with grantees and its lack of visibility over omitted documents resulted in incomplete and inaccurate reporting of grantee status to ONDCP. Without a method to ensure that grantee status reporting to ONDCP is complete and accurate, SAMHSA cannot be certain that grantees are engaging in intended activities, that funds are being used in accordance with program requirements, and that all other statutory requirements and grant program terms and conditions have been met. Since 2008, ONDCP and SAMHSA have taken steps to improve the DFC Support Program by employing leading collaboration practices and funding a variety of drug prevention activities. However, SAMHSA's inconsistent adherence to some procedures, particularly with respect to grantees that are funded year after year, has resulted in persistently missing or incomplete documentation in the official grantee files, which has limited performance monitoring. Developing an action plan that includes time frames for addressing deficiencies found through its grant file reviews and making systemic changes based on its findings, as well as developing a method for ensuring complete and accurate grantee status reporting to ONDCP, would position SAMHSA officials to further strengthen monitoring efforts. As the number of youth who engage in illicit drug use remains a public health concern, the continued focus on funding grantees and monitoring them for both progress and compliance is vital.
To better ensure grantees' compliance with the Drug-Free Communities Support Program's statutory requirements and to strengthen monitoring of grantee activities, we recommend that SAMHSA take the following two actions: develop an action plan with time frames for addressing any deficiencies it finds through its reviews and for making systemic changes to mitigate deficiencies on a prospective basis, in order to strengthen the grant monitoring process; and develop and implement a method for ensuring that the grantee status reports it provides to ONDCP are complete and accurate. ONDCP and the Department of Health and Human Services (HHS) provided written comments on a draft of this report. In their comments, both agencies concurred with our recommendations. In HHS's written comments, reproduced in appendix III, HHS stated that SAMHSA will implement a targeted review focusing on the DFC files and strengthen its grants management processes to ensure that the reports it provides to ONDCP are timely and accurate. HHS also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Director of the Office of National Drug Control Policy, the Secretary of Health and Human Services, appropriate congressional committees and members, and other interested parties. In addition, this report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. The Office of National Drug Control Policy's (ONDCP) Drug-Free Communities (DFC) Support Program requires grantees to report on four core measures in semi-annual progress reports, as listed in table 4.
Grantees collect these core measure data, and ONDCP provides them to its contractor for national-level evaluation and reporting. For example, in the National Evaluation of Drug-Free Communities Support Program Summary of Findings through 2014, the percentage change in past 30-day drug use among middle school and high school youth is evaluated and reported bi-annually. Figure 6 shows the trend in drug use from 2002 through 2014. Since the program added prescription drugs to the core measures in 2012, figure 7 captures the prevalence of prescription drug use in comparison to alcohol, tobacco, and marijuana use in fiscal year 2013. In addition to the contact named above, Joy Booth (Assistant Director), Aditi Archer (Analyst-in-Charge), David Alexander, Lyle Brittain, Willie Commons III, Dominick Dale, Eric Hauswirth, Anna Maria Ortiz, Jeffrey Paulk, and Justin Snover made key contributions to this report.

In 2015, approximately 2.2 million adolescents aged 12 to 17 were current users of illicit drugs. The Drug-Free Communities Act of 1997 established the DFC Support Program—a federal grant program supporting drug abuse prevention efforts that engage schools, law enforcement, and other sectors of a community. The program targets reductions in the use of alcohol, tobacco, marijuana, and the illicit use of prescription drugs. The Office of National Drug Control Policy Reauthorization Act of 2006 includes a provision that GAO routinely assess ONDCP's programs and operations. This report addresses: (1) the extent to which ONDCP and SAMHSA use leading practices to coordinate program administration and the types of activities funded; and (2) the extent to which ONDCP's and SAMHSA's operating procedures both ensure DFC grantees comply with governing statutes and provide a basis for performance monitoring.
To conduct this work, GAO analyzed agency policies from 2013-2015 (the most recent available); interviewed agency officials; and analyzed coordination efforts against relevant key practices GAO identified previously. GAO reviewed files obtained from a non-generalizable random sample of 30 grantees and interviewed a random subset of 10. The Office of National Drug Control Policy (ONDCP) and the U.S. Department of Health and Human Services' Substance Abuse and Mental Health Services Administration (SAMHSA) employ leading collaboration practices to administer the Drug-Free Communities (DFC) Support Program and have funded a range of drug prevention activities. Both agencies have improved their collaboration since GAO last reported on the DFC program in 2008. Their current efforts to jointly manage the DFC Support Program are consistent with GAO's relevant key collaboration practices. For example, ONDCP and SAMHSA defined and agreed upon common outcomes, such as prioritizing efforts to increase participation from under-represented communities. The two agencies have also funded a range of DFC grantees' activities and report on these activities in their annual evaluation reports. For example, ONDCP reported that from February through July 2014, grantees educated more than 156,000 youth on topics related to the consequences of substance abuse. To illustrate, the 10 grantees GAO interviewed described their specific efforts, including the following: Enhancing Skills: To enhance the skills of those in the community, one grantee sponsored a session for local realtors on precautions to take when preparing for open houses—warning them that leftover prescriptions in medicine cabinets present the potential for abuse among those walking through the home for sale.
Enhancing Access/Reducing Barriers: To reduce cultural barriers, another grantee developed signs with text in the multiple languages spoken throughout the community that shopkeepers could display to emphasize the legal purchase age for alcohol. Providing Information: To provide greater information to parents on the consequences of providing alcohol in their homes, a grantee created a slogan to remind parents of their children's needs: "Be my Parent, not my Bartender." The agencies have operating procedures in place but could enhance grantee compliance and performance monitoring. In particular, SAMHSA does not consistently follow documentation and reporting procedures to ensure grantees' compliance with governing statutes. SAMHSA also has not been accurately reporting to ONDCP on grantee compliance. Specifically, for the files GAO reviewed, SAMHSA followed all processes for ensuring that initial applicants had submitted required documentation before awarding them initial grant funding. However, SAMHSA was less consistent in adhering to procedures for confirming documentation in later years of the program. For example, 14 of the 18 grantees that should have had sustainability plans in their files did not. These plans outline how a grantee intends to maintain the necessary resources to achieve long-term goals after exiting the program. Prior to GAO's review, ONDCP and SAMHSA officials were not aware of the missing documents in the grant files. Without close adherence to existing procedures, and without a mechanism to ensure that the documentation status it reports to ONDCP is accurate and complete, SAMHSA's performance monitoring capacity is limited, and it cannot be certain that grantees are engaging in intended activities and meeting long-term goals. GAO recommends that SAMHSA develop an action plan with time frames to strengthen DFC grant monitoring and ensure it sends complete and accurate information to ONDCP.
SAMHSA concurred with these recommendations and identified actions to address them.
CMS's monitoring and oversight of MA organizations' compliance with marketing requirements include maintaining regular communication with, and providing technical assistance to, MA organizations. In addition, CMS conducts surveillance activities and audits of MA organizations to collect information about potential problems and compliance with marketing requirements. CMS's surveillance activities include tracking and analyzing complaint rates by MA organization and category of complaint. In 2007, CMS initiated a variety of new surveillance activities focused on monitoring MA organizations' marketing of PFFS plans. Among other activities, CMS implemented a secret shopper program in which CMS representatives, with their identities concealed, attended PFFS marketing events and reported on the accuracy of marketing presentations and on agents' compliance with marketing requirements. Other surveillance activities included monthly reviews of PFFS enrollment packages and reviews of agent training test results. CMS also tracks other indicators under certain circumstances, such as verifying that beneficiaries willingly and knowingly chose certain plans. For example, for 2008, CMS required MA organizations to call beneficiaries newly enrolled in PFFS plans to verify that the beneficiaries wanted to enroll in the plan and understood its features. CMS subsequently tracked the proportion of verified calls as one of its marketing performance indicators. In conducting audits of MA organizations, CMS assesses whether organizations' operations are consistent with federal laws, regulations, and CMS policies and procedures in some or all of seven major categories, including marketing. Audits typically involve a combination of desk reviews of documents submitted by MA organizations and, at CMS's discretion, site visits. CMS uses a risk-based approach to identify MA organizations for audit.
While CMS may choose to audit only certain categories in any given year, since at least 2006 CMS has included marketing operations, and specifically those related to misleading marketing, in its audits. CMS also conducts focused, or out-of-cycle, audits of MA organizations to ensure that MA organizations have implemented new processes for previously identified areas of noncompliance and to investigate potential noncompliance issues that CMS identified outside of the audit cycle. In June 2008, CMS reorganized its internal structure for overseeing MA plans and established standard operating procedures (SOP) for the oversight of MA organizations. The 2008 SOPs outlined the agency's oversight approach and clarified, among other things, what actions CMS may take when MA organizations are found to be out of compliance with marketing and other requirements. According to CMS officials, the SOPs formalized many procedures that CMS was already using to oversee MA organizations and were intended to ensure that these procedures were being applied in a uniform manner nationwide. CMS's SOPs state that CMS is to consider the nature of each violation in determining the appropriate compliance or enforcement action. The 2008 SOPs include the following actions, from least to most severe:

Informal contact: phone call, e-mail, or meetings with MA organization officials to provide technical assistance.

Compliance: initial notice of noncompliance—an e-mail to the MA organization, usually through the MA organization's compliance officer. An initial notice of noncompliance is generally issued at the first finding of relatively minor noncompliance with federal laws, regulations, or CMS guidance, such as a single instance of inappropriate marketing activities. The notice informs an MA organization that it is out of compliance and directs it to reply to the e-mail to indicate how it will address the noncompliance.
Compliance: warning letter—a formal letter to the MA organization's compliance officer stating the concern or area of noncompliance that requires immediate remedy, used for a limited and quickly fixable situation. CMS also notifies MA organizations that continued noncompliance will lead to stricter actions by CMS, such as requiring the MA organization to develop a corrective action plan (CAP).

Compliance: CAP request letter—a formal letter to the MA organization's chief executive officer stating the concern(s) and requiring the organization to develop and implement a CAP for the specific violation(s). CMS can require CAPs from MA organizations when the agency identifies noncompliance that generally affects multiple beneficiaries and represents an ongoing or systemic inability to adhere to Medicare requirements. CMS's SOPs provide time frames for CMS and MA organizations to respond to, accept, and implement CAPs. CAPs are reported publicly on the CMS Web site.

Enforcement: sanctions provided for under federal law that CMS may impose on MA organizations for what CMS considers egregious or sustained noncompliance and for specific violations, including misrepresenting or falsifying information to CMS, beneficiaries, or potential beneficiaries, or substantially failing to carry out the terms of their contracts with CMS. Sanctions may include civil money penalties; the suspension of plans' marketing activities, enrollment, or Medicare payment; or the termination or nonrenewal of organizations' contracts with CMS. Suspensions of plans' marketing activities, enrollment, or Medicare payment are to remain in place until CMS is satisfied that the noncompliance that served as the basis for the suspension has been corrected and is not likely to recur.

For more serious violations, CMS may choose to forgo initial, less formal actions against an MA organization in favor of stricter actions, including later-stage compliance or enforcement actions.
However, the SOPs indicate that compliance matters will generally escalate through the compliance process in a step-by-step manner, starting with the initial notice of noncompliance up through the CAP stage. CMS has also chosen to negotiate voluntary suspensions with MA organizations rather than go through formal processes to impose involuntary sanctions. According to CMS officials, voluntary suspensions can result in a faster intervention. If CMS makes the determination that the MA organization has engaged in certain fraudulent activity, the agency is to refer the violation to the Department of Health and Human Services Office of Inspector General (HHS OIG) for review. From January 2006 through February 2009 CMS took a range of compliance and enforcement actions against at least 73 MA organizations for inappropriate marketing. While the number of MA organizations varied during the approximately 3-year period, 192 MA organizations offered MA plans as of March 2009. The exact number of MA organizations that were subject to an action could be higher. According to CMS, the agency did not begin tracking two types of action—initial notices of noncompliance and warning letters—until June 2008. From June 2008 through February 2009, CMS sent one initial notice of noncompliance and 76 warning letters to MA organizations. (See app. II for more information about the types of inappropriate marketing that resulted in initial notices of noncompliance and warning letters.) From January 2006 through February 2009, CMS required 37 CAPs from MA organizations and also took 5 enforcement actions for inappropriate marketing—3 marketing and enrollment suspensions and 2 civil money penalties. The 73 MA organizations against which CMS took compliance or enforcement actions enrolled approximately 7.4 million beneficiaries through February 2009. (See table 1.) These beneficiaries represented about 71 percent of all MA beneficiaries. 
CMS also negotiated voluntary suspensions of marketing and enrollment activities for PFFS plans with seven of these MA organizations effective June 2007. In some cases, during the period from January 2006 through February 2009, CMS took multiple types of actions against the same MA organizations because it determined that these organizations had more than one inappropriate marketing violation. (See app. III for information about the criteria CMS uses to make compliance and enforcement decisions.) Nineteen of the 73 MA organizations subject to compliance or enforcement actions had multiple types of actions taken against them. Fifteen of the 19 organizations received at least one warning letter or notice of noncompliance and a CAP. Two MA organizations received at least one warning letter or notice of noncompliance, were required to submit at least one CAP, and were subject to an enforcement action. One organization received at least one warning letter or notice of noncompliance and was subject to an enforcement action, and another was required to submit at least one CAP and was subject to an enforcement action. The time it took for MA plans to implement CAPs varied widely and changed over time. In May 2008, CMS revised its audit SOPs to generally require MA organizations to fully implement CAPs within 90 days from CMS’s acceptance. Consequently, the average time from when CAPs were requested to when corrective actions were fully implemented decreased for CAPs accepted after May 2008. Specifically, corrective actions for inappropriate marketing deficiencies were fully implemented an average of 218 days after CAPs were requested for the 22 CAPs accepted from January 2006 through April 2008 and an average of 174 days for the 13 CAPs accepted from May 2008 through February 2009.
The period of time from when CMS requested the CAP to when CMS accepted the CAP increased, on average, from 90 days for CAPs accepted prior to May 2008 to 145 days for CAPs accepted May 2008 through February 2009. However, the average time from CMS acceptance of the CAP to when corrective actions were fully implemented decreased from 128 days for CAPs accepted from January 2006 through April 2008 to 29 days for CAPs accepted from May 2008 through February 2009. (See fig. 1.) Overall, the period of time from when CMS requested CAPs to when MA organizations fully implemented them varied widely for CAPs accepted both for the period from January 2006 through April 2008 (from 61 to 410 days) and from May 2008 through February 2009 (from 68 to 345 days). For enforcement actions, the implementation time periods varied widely as well—from approximately 2 weeks to 2 years before CMS took an enforcement action after first identifying what it considered to be inappropriate marketing. For example, CMS suspended the marketing and enrollment activities of one MA organization based on results of CMS’s secret shopper activities that showed what CMS considered egregious inappropriate marketing approximately 2 weeks after the MA organization began marketing its MA plans. In contrast, in February 2009, CMS suspended the marketing and enrollment activities for another MA organization after 2 years of sustained noncompliance with marketing requirements. During the 2-year period, CMS determined that this MA organization employed agents who had engaged in activities that misled, confused, or misrepresented the organization or its MA plans to beneficiaries during three audits conducted between March 2007 and July 2008, for which CMS required CAPs. In addition, in July 2008, CMS sent the MA organization a notice of noncompliance based on beneficiary allegations of inappropriate marketing by agents selling its MA plans.
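The CAP timeline averages above decompose into two phases: the time from CMS’s request to its acceptance of the CAP, and the time from acceptance to full implementation. As a consistency check, the two phase averages should sum to the overall request-to-implementation averages (218 and 174 days). A minimal sketch of that check, in Python; the period labels are ours, not CMS’s:

```python
# Consistency check on the average CAP timelines reported above.
# Values are average numbers of days: (request-to-acceptance,
# acceptance-to-implementation, request-to-implementation).
cap_periods = {
    "Jan 2006-Apr 2008 (22 CAPs)": (90, 128, 218),
    "May 2008-Feb 2009 (13 CAPs)": (145, 29, 174),
}

for period, (to_acceptance, to_implementation, total) in cap_periods.items():
    # The two phases should account for the full request-to-implementation span.
    assert to_acceptance + to_implementation == total
    print(f"{period}: {to_acceptance} + {to_implementation} = {total} days")
```

The check confirms that the decline in overall implementation time after May 2008 was driven entirely by the acceptance-to-implementation phase (128 days down to 29), which more than offset the longer request-to-acceptance phase (90 days up to 145).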
(See table 2 for a summary of the five cases for which CMS took enforcement actions.) CMS assisted beneficiaries who experienced inappropriate marketing by helping them restore their previous health insurance coverage or enrolling them in another option. Some beneficiaries experienced financial liability or access-to-care problems as a result of being enrolled in an MA plan or stemming from their disenrollment and enrollment in prior or different coverage. CMS assisted MA beneficiaries who experienced inappropriate marketing by MA organizations by providing special election periods (SEP), which enable beneficiaries to disenroll from their MA plan outside of the regular enrollment periods and to enroll in prior coverage or another option, such as another MA plan, Medicare FFS, or a stand-alone Medicare prescription drug plan. CMS announced that it had established a special SEP for inappropriate marketing in a July 2007 memo to MA organizations. CMS officials described these SEPs as their primary attempt to make beneficiaries who experienced inappropriate marketing “whole.” According to CMS’s SOPs, MA beneficiaries qualify for the SEP if they call 1-800-Medicare or contact a CMS regional office and give reasonable assurance that they were subject to inappropriate marketing; CMS does not require the beneficiaries to provide evidence. According to CMS officials, they decide whether to provide SEPs before investigations into inappropriate marketing allegations are complete to ensure that beneficiaries who did experience misleading marketing can disenroll from the MA plan and enroll in other health coverage as quickly as possible. Consequently, some of the beneficiaries who were provided a SEP might not have been subject to inappropriate marketing. CMS offered MA beneficiaries either a prospective or retroactive SEP. 
Under a prospective SEP, beneficiaries could disenroll from the MA plan and return to their previous coverage or enroll in another option effective the first day of the next month. CMS officials told us that the customer service representatives at 1-800-Medicare generally processed prospective disenrollments and enrollment in other MA plans or Medicare FFS. Under the retroactive SEP, beneficiaries could disenroll from the MA plan and return to their previous coverage or enroll in other coverage effective as early as the date of their enrollment in the MA plan. Retroactive SEPs are more complicated because they require payment adjustments for any premiums paid and medical services received while the beneficiary was enrolled in the plan. Retroactive SEPs are processed by regional office staff. According to CMS’s SEP SOPs, 1-800-Medicare customer service representatives should ask beneficiaries who provide reasonable assurance that they were subject to inappropriate marketing whether they would like to prospectively disenroll from their plan and enroll in new coverage. If the beneficiary agrees, the disenrollment from the MA plan and enrollment in new coverage are handled by the customer service representative. If the beneficiary requests a retroactive SEP, the case is forwarded to a regional office. CMS’s SEP SOPs state that regional office officials are required to explain the consequences of a retroactive disenrollment or enrollment to the beneficiary before such a disenrollment is processed. If a beneficiary directly contacts a regional office with a complaint of inappropriate marketing, officials should offer beneficiaries a prospective SEP, although they may offer a retroactive SEP if the beneficiary insists. While CMS’s SOPs indicate a preference for offering beneficiaries a prospective SEP, CMS officials we interviewed said that whether CMS offers a prospective or retroactive SEP to a beneficiary depends on what would be in the beneficiary’s best interest.
The most suitable SEP for an MA beneficiary depended on the beneficiary’s circumstances. For example, if beneficiaries used services while enrolled in an MA plan, they could benefit from a prospective SEP if their cost sharing—the amount they paid out-of-pocket for a covered service—under the MA plan was lower than it would be under their restored or other coverage. If these beneficiaries chose a retroactive SEP, they would have to make up the difference between the lower cost-sharing amount under the MA plan and the higher amount under their restored or other coverage. Conversely, beneficiaries who chose restored or other coverage with cost sharing that was lower than the MA plan could benefit from disenrolling retroactively because they would be reimbursed for the difference between the out-of-pocket costs they incurred for services received under the MA plan and the lower costs under the restored or other coverage. Some DOI and SHIP officials we interviewed told us that MA disenrollments and enrollments resulting from inappropriate marketing generally appeared to go smoothly, but that some beneficiaries experienced problems. Officials from some DOIs and a SHIP we interviewed said some beneficiaries’ retroactive disenrollments took several months to process. Officials from one DOI said that some beneficiaries received bills from collection agencies because provider reimbursements associated with retroactive disenrollments were not timely. An official from another DOI said that it could take from 10 to 30 days to receive enrollment material, including a plan identification card, and this could cause access-to-care problems if beneficiaries needed health care services before the material arrived.
To help mitigate any access-to-care problems, CMS officials said that they instructed 1-800-Medicare customer service representatives to give beneficiaries their MA plans’ contact information when they enrolled so that beneficiaries could contact the plans directly for information about accessing services. Additionally, CMS officials stated that CMS regional office employees routinely worked with MA organizations to ensure that beneficiaries who received a retroactive SEP could access services prior to receiving plan identification cards. CMS officials told us that some of the problems encountered by MA beneficiaries receiving a retroactive inappropriate marketing SEP were unavoidable and inherent to the processing of a retroactive disenrollment. They noted that under a retroactive SEP, different premium amounts needed to be collected, provider bills and payments might need to be retracted and reprocessed by the new insurer, and different cost-sharing amounts applied. Therefore, a SEP could take time to be fully processed. These officials stated that the administrative actions and associated problems that beneficiaries may experience are preferable to keeping beneficiaries in an MA plan after they have stated that they experienced inappropriate marketing. Beneficiaries who stated they experienced inappropriate marketing may also have experienced problems that the SEP could not address. For example, these beneficiaries could have experienced financial or access-to-care issues prior to receiving the SEP. CMS, DOI, and SHIP officials described cases in which beneficiaries did not realize they had been switched to an MA plan until they tried to access services. These officials said some of the beneficiaries experienced disruption of their access to providers and medications because their providers did not participate in the MA plan.
DOIs and SHIPs also cited several other problems the inappropriate marketing SEP could not resolve because the problems were associated with private or state employee insurance plan provisions or involved other government agencies, and hence were outside CMS’s jurisdiction. DOIs and SHIPs provided information about specific types of cases that a SEP could not resolve:

A beneficiary had to pay higher premiums to obtain the same Medigap policies that she had dropped when she was enrolled in an MA plan.

Beneficiaries could not have coverage restored by their prior employer’s retiree health plan. DOI and SHIP officials said that employer retiree health plans are generally not required to restore coverage to MA beneficiaries who stated they experienced inappropriate marketing. CMS officials said that they were able to get retiree health coverage restored for some beneficiaries, but not for others.

Beneficiaries who had premiums withheld from their Social Security checks experienced delays in ending the withholding after they were disenrolled from their MA plan, which could have caused financial hardships.

The information CMS has on the number of beneficiaries affected by inappropriate marketing is limited for two reasons. First, some beneficiaries who experienced inappropriate marketing may have exercised their option—available during certain times of the year—to disenroll from their MA plan and might not have notified CMS of the marketing problems they encountered. Second, CMS did not directly track the number of beneficiaries who contacted the agency and were provided a SEP. CMS did estimate the number of SEPs it provided for inappropriate marketing, but its estimates were based on data that were unreliable. All MA beneficiaries, including those who had been affected by inappropriate marketing, may have elected to change their health plans during the annual coordinated election period or the annual open enrollment period.
CMS had the information to determine the number of beneficiaries who disenrolled during these regular enrollment periods, but during the time of our study, the agency did not collect information that would have allowed it to determine the extent to which beneficiaries disenrolled from health plans as a result of inappropriate marketing. Disenrollment rates varied considerably among plans and types of plans. For example, we previously reported on disenrollment rates in PFFS plans occurring during the regular enrollment periods for 2007. PFFS plans were considered by CMS and others to have high rates of inappropriate marketing. From January through April 2007, when the disenrollments took effect, about 169,000 beneficiaries in PFFS plans, or 21 percent of the total number of PFFS beneficiaries, disenrolled from the plan that they were enrolled in. This 4-month total was more than double the disenrollments from other plan types during that same time period. However, the number of these beneficiaries who changed plans because they were affected by inappropriate marketing was unknown because CMS did not have data on why these beneficiaries disenrolled. After 2005, CMS discontinued a survey on disenrollment reasons that provided information on the frequency of certain problems leading to disenrollment. From 2000 to 2005, CMS conducted an annual survey asking MA beneficiaries who disenrolled why they left their plan. Among the disenrollment reasons that beneficiaries could have chosen was: “Given incorrect or incomplete information at the time you joined the plan.” A 2005 analysis prepared by CMS contractors of survey results from 2000 through 2003 found that over this time period, the percent of beneficiaries who said they disenrolled because they were given incorrect or incomplete information at the time they joined their plan ranged from about 9 to about 11 percent. 
However, in each of the 3 years, less than one percent of beneficiaries who responded to the survey stated that this was the most important reason for their disenrollment. The survey did not collect information on the problems beneficiaries experienced as a result of the reasons that led to their disenrollment or the disenrollment itself. The analysis prepared by CMS contractors noted that the survey’s primary goals were to enhance CMS’s ability to monitor MA plan performance and assist plans in identifying areas where they might focus their quality improvement efforts. CMS officials said that they plan to reinstitute a survey on disenrollment reasons in late summer 2010. CMS officials plan to collect data over a 9- to 12-month period, so final results should be available sometime in 2011. After the survey ends and results are analyzed, CMS will determine whether to conduct additional surveys on disenrollment reasons. CMS did not directly track the number of SEPs it provided, but instead estimated the number based on information collected in its complaint tracking module. Complaints from beneficiaries who stated they experienced inappropriate marketing and wanted to disenroll from their MA plans were classified into one of two categories in CMS’s complaint tracking module. One category was for inappropriate marketing cases that required regional office action to complete beneficiaries’ disenrollment and enrollment in another plan. According to complaint tracking module data provided by CMS, during the 17-month period from June 2007 through October 2008, CMS received 18,331 such complaints. CMS officials said that most of these cases were retroactive SEP requests. The second category was for inappropriate marketing cases that did not require regional office action to complete beneficiaries’ disenrollment and enrollment in another plan. 
During the 7-month period from April 2008 through October 2008, CMS received 1,689 inappropriate marketing complaints that the agency determined did not need regional office action. According to CMS officials, cases included in the second category were primarily prospective SEP requests. CMS officials told us that most of the complaints in these two categories resulted in an inappropriate marketing SEP but that the total included some beneficiaries who made such statements but did not disenroll. However, the complaint data were not a reliable source of information on the number of beneficiaries who received the SEP. A study conducted by a CMS regional office of a sample of about 170 complaints lodged between August 2007 and January 2008 highlighted inaccurate and incomplete documentation, as well as a portion of inappropriate marketing complaints that had been miscategorized:

About 33 percent of cases were resolved or closed inappropriately or involved duplicate cases. For example, some cases were closed prior to final resolution. In one of these cases, the MA organization indicated in case notes that a beneficiary was experiencing a problem with reimbursement and that it was doing additional research on the issue. However, the MA organization closed the case prior to completing its research on whether the beneficiary was due reimbursement and, if so, whether the beneficiary received it. Four of the 54 cases in this category were for instances in which multiple cases were open for the same member and issue.

About 28 percent of case resolutions were poorly documented. Most frequently, these cases contained notes in the complaint tracking module that did not indicate when disenrollments took effect, or the notes addressed part of the complaint but did not reflect that all aspects of the complaint had been resolved.

About 20 percent of cases were incorrectly categorized.
The majority of miscategorized cases were inappropriate marketing complaints that were coded in categories other than inappropriate marketing.

About 12 percent of cases lacked specific information about at least one issue involved in the complaint, such as details about refunds or payments owed to the beneficiary.

Officials from another CMS regional office conducted a more informal examination of complaint tracking module cases in 2008 and found similar problems. The officials told us that the regional office did a spot check of 50 inappropriate MA marketing complaints by calling the beneficiaries and determined that the notes in the complaint tracking module often did not match the description of the complaint that the beneficiary provided during the follow-up call. The officials also said that CMS staff examined complaints against one MA organization and found cases of alleged inappropriate marketing that were not categorized as such. The officials from this regional office estimated that they had recategorized 60 percent of all complaint tracking module cases within this region. However, other CMS regional offices said the percentage of cases that needed to be recategorized was small. The reason for this disparity is unclear, but it may be due to how regional offices determined whether cases needed to be recategorized. CMS officials told us that they used the results of internal studies of the complaint tracking module to improve their ability to categorize complaints. However, it was beyond the scope of our study to determine the effectiveness of these changes.

Inappropriate marketing can adversely affect MA beneficiaries, causing financial hardship and difficulty in accessing needed care. While CMS has used SEPs to assist beneficiaries, the agency was unable to prevent some of them from experiencing negative consequences. Currently, CMS has limited information on the extent of inappropriate marketing and the number of beneficiaries affected.
The agency intends to conduct a survey of beneficiaries who disenrolled from MA plans and ask about their reasons for disenrollment. Depending on the specific questions included, such a survey could provide information about the number of beneficiaries who experience inappropriate marketing and identify plans, plan types, and geographic locations where inappropriate marketing problems are most prevalent. CMS’s information about the extent of inappropriate marketing is also limited because the agency has not gathered reliable information about the number of prospective and retroactive SEPs provided for this reason. Without an investigation into individual cases, CMS cannot determine whether all of the problems reported by beneficiaries represent inappropriate marketing. Nonetheless, gathering information on the reasons beneficiaries disenroll from their MA plans and tracking the number of the SEPs that the agency provides would enable the construction of useful indicators of the potential scope and location of the marketing problems. Because of the potentially serious implications for beneficiaries as a result of inappropriate marketing, it is important for CMS to have information that can inform the agency’s oversight efforts and help it to appropriately target interventions when necessary. To improve CMS’s oversight of MA organizations and its ability to appropriately target interventions, we recommend that the Administrator of CMS gather more information on the extent of inappropriate marketing and the types of problems beneficiaries experienced as a result of inappropriate marketing. As part of this effort, CMS should directly track retroactive and prospective SEPs provided for inappropriate marketing. We provided a draft of this report for comment to HHS, the department under which CMS resides. 
Responding for HHS, CMS stated that it concurred with our recommendation and that it would assess the costs and benefits of alternative systems that could be used to collect information on the extent of inappropriate marketing and the types of problems beneficiaries experience as a result. CMS also stated that while it did not directly track the number of retroactive and prospective SEPs provided for inappropriate marketing, it used data from its complaint tracking module and considered that data a reasonable proxy of the total number of SEPs requested for inappropriate marketing. As our report notes, findings from a formal study and an informal examination conducted by two CMS regional offices demonstrated that data from the complaint tracking module were not a reliable source of information on the number of beneficiaries who received a SEP. However, CMS officials told us in an interview that they used the results of these studies to improve their ability to categorize complaints. It was beyond the scope of our study to determine the effectiveness of these changes. CMS also stated in its comments that it had taken additional steps in 2009 to protect beneficiaries from deceptive marketing practices conducted by agents, including establishing stronger rules for governing the commissions that can be paid to independent sales agents, disseminating new marketing guidelines about how MA plans identify themselves to beneficiaries, and expanding its secret shopper program. (CMS’s comments are reprinted in app. IV.) CMS provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the Administrator and interested congressional committees. We will also make copies available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This appendix describes in detail the scope and methodology we used to address the report objectives. We briefly summarize the methodologies by objective and then discuss for all objectives (1) our review of relevant federal laws, regulations, and guidance from the Centers for Medicare & Medicaid Services (CMS), including policies and procedures; (2) interviews with CMS officials and other stakeholders; and (3) CMS data. To determine the extent to which CMS has taken compliance and enforcement actions against Medicare Advantage (MA) organizations for inappropriate marketing, we analyzed CMS data on the number and types of corrective and enforcement actions taken against MA organizations for inappropriate marketing. We conducted an analysis of noncompliance and warning letters from CMS notifying MA organizations about marketing violations, such as providing inappropriate information to beneficiaries at marketing events, or agent-related operational violations, such as those related to agent compensation. We excluded from our analysis letters for non-agent-related violations such as incorrect information on an MA organization’s Web site. We analyzed corrective action plans (CAP) that CMS required if the agency determined that the MA organization had engaged in activities that materially misled, confused, or misrepresented the MA organization to beneficiaries. 
To determine how CMS helped MA beneficiaries affected by inappropriate marketing and the types of problems beneficiaries encountered, we reviewed relevant agency documentation for the period January 2006 through February 2009 and interviewed officials at CMS’s central office and all 10 regional offices, 6 state departments of insurance (DOI), and 6 state health insurance assistance programs (SHIP). We also conducted site visits at the Dallas, Kansas City, and New York City CMS regional offices to interview officials from these regions more extensively. To determine what information CMS had on the number of beneficiaries affected by inappropriate marketing, we analyzed CMS’s complaint data to quantify the number of beneficiaries who complained about inappropriate marketing and requested to disenroll from their plan outside of the annual coordinated election and open enrollment periods, when beneficiaries can join, switch, or drop MA plans. We reviewed one CMS regional office’s study of CMS’s complaint tracking module conducted in February 2008. We also interviewed CMS officials about the agency’s plan to obtain information about reasons for disenrollment during the annual coordinated election and open enrollment periods. Unless otherwise noted, we limited our analysis of inappropriate marketing in this report to instances of agent-related noncompliance with marketing requirements. We reviewed relevant federal laws, regulations, and CMS guidance for the provisions related to inappropriate marketing, compliance and enforcement actions, and the time frames for which the provisions were in effect. We interviewed CMS officials about agency guidance related to oversight of the MA program, including policies and procedures, for the period of January 2006 through February 2009. We interviewed officials from CMS, state DOIs, SHIPs, and MA organizations and reviewed any documentation referenced during our interviews. 
In our interviews with state DOIs and SHIPs, we asked both specific and open-ended questions about the problems beneficiaries encountered and, on some occasions, interviewed officials from a state’s DOI and SHIP concurrently. Because of this, the frequency of our interviewees’ responses is not comparable. Therefore, we report these responses without reporting the total number of state DOIs or SHIPs associated with each response. In addition, state DOIs, SHIPs, and MA organizations we interviewed may not be representative of all state DOIs, SHIPs, and MA organizations, and thus the information is not generalizable to these entities. We interviewed officials from CMS’s central office and its 10 regional offices. For three regional offices (Dallas, Kansas City, New York City), we conducted our interviews during site visits. We chose the Dallas regional office because it has a high concentration of enrollment in private fee-for-service plans, a type of MA plan for which there has been a high percentage of allegations of inappropriate marketing. We chose the Kansas City regional office because it conducts detailed analyses of complaint data for the other regional offices. We chose the New York City regional office because it houses the division that coordinates CMS’s regional office MA monitoring and oversight activities. We interviewed officials from six state DOIs. We chose the Texas, Missouri, and New York DOIs because they were located in the states where we conducted site visits to CMS regional offices. We chose the Oklahoma DOI because it took enforcement action against at least one MA organization that was related to inappropriate marketing by agents. In addition, we interviewed officials from the Florida and Ohio DOIs because officials from these DOIs have testified before the National Association of Insurance Commissioners (NAIC) on inappropriate MA marketing and sales practices.
We interviewed officials from six SHIPs, which we chose because they were located in the same states as the DOIs whose officials we interviewed. We interviewed officials from five MA organizations that CMS regional officials we spoke with identified as having had major performance problems and that had been subject to one or more CMS compliance and enforcement actions, or that had voluntarily suspended marketing and enrollment because of inappropriate marketing and noncompliance with agent-related marketing requirements. The five MA organizations varied in enrollment size, ranging from fewer than 230,000 beneficiaries to more than 1 million beneficiaries. As of March 1, 2009, these MA organizations provided Medicare coverage for approximately 26 percent of all MA beneficiaries. We reported results from a February 2008 study performed by one CMS regional office, and we analyzed data from CMS's complaint tracking module and on the compliance and enforcement actions taken by the agency. The February 2008 study examined how complaint cases entered into the complaint tracking module were resolved by staff in the regional office and MA organizations, and whether staff followed agency guidelines when resolving cases. The study findings were based on a content analysis, performed by CMS officials, of about 170 randomly selected cases that were closed in the region between August 2007 and January 2008. Because the study only reviewed complaints received by the one regional office, its results are not generalizable to other CMS regional offices. In addition, we did not independently assess the accuracy of the study results. Based on our review of the study's methodology, we concluded that the study results were sufficiently reliable for our purposes. We also reviewed data from the complaint tracking module that serve as CMS's best estimate of the number of beneficiaries who received a SEP.
The SEP estimates come from two categories in the complaint tracking module: inappropriate marketing complaints that required regional office action and inappropriate marketing complaints that did not require regional office action. We analyzed data on complaints that required regional office action from June 2007 through October 2008 and complaints that did not require regional office action from April 2008 through October 2008. During our interviews with CMS officials, we identified several limitations associated with the complaint tracking module data. While CMS officials told us that the agency had made improvements to its complaint tracking module, it was beyond the scope of our report to evaluate the effectiveness of these changes. On the basis of our review of the data and interviews with CMS officials, we determined that the complaint tracking module data had significant limitations. As a result, we include totals for the two complaint categories in our finding, but do not provide any additional analyses of the data. We also include a discussion of the data limitations in our finding. We analyzed data for the number and type of compliance and enforcement actions that CMS took against MA organizations from January 2006 through February 2009 for violations related to inappropriate marketing. CMS provided us with marketing-related notices of noncompliance and warning letters issued during the agency’s review of compliance letters issued from June 2008 through February 2009. We conducted a content analysis of CMS’s marketing-related compliance letters to identify those related to inappropriate marketing and noncompliance with agent-related marketing requirements. 
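The two-category totaling described above amounts to filtering complaint records by category-specific date windows and summing the matches. A minimal sketch in Python, with invented field names, category labels, and complaint records (not actual CMS complaint tracking module data):

```python
# Sketch of the SEP-estimate totaling described above: complaints that
# required regional office action are counted over June 2007-Oct. 2008,
# and complaints that did not over April 2008-Oct. 2008.
# Field names, labels, and records are illustrative, not CMS's layout.
from datetime import date

WINDOWS = {
    "action_required": (date(2007, 6, 1), date(2008, 10, 31)),
    "no_action_required": (date(2008, 4, 1), date(2008, 10, 31)),
}

def total_by_category(complaints):
    """Count complaints in each category's analysis window."""
    totals = {cat: 0 for cat in WINDOWS}
    for c in complaints:
        start, end = WINDOWS[c["category"]]
        if start <= c["received"] <= end:
            totals[c["category"]] += 1
    return totals

complaints = [
    {"category": "action_required", "received": date(2007, 7, 15)},
    {"category": "no_action_required", "received": date(2008, 3, 1)},  # before window
    {"category": "no_action_required", "received": date(2008, 5, 2)},
]
print(total_by_category(complaints))
# prints {'action_required': 1, 'no_action_required': 1}
```

Note that the two windows differ, so the two totals cover different lengths of time and are not directly comparable to each other.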
We included in our count letters that CMS sent to MA organizations for agents providing inappropriate information to beneficiaries at marketing events, engaging in prohibited activities such as providing meals to beneficiaries, and for high rates of beneficiary complaints of inappropriate marketing for which CMS considered the organizations to be outliers. We also included compliance letters for operational violations that are related to agent oversight. We excluded compliance letters for operational violations that were not agent-related such as incorrect information on MA organizations’ web sites, security breaches, and failure to issue beneficiary notices about plan changes in a timely manner. For our CAP analysis, we included those CAPs that CMS required if the agency determined that the MA organization engaged in activities that materially misled, confused, or misrepresented the MA organization to beneficiaries. We confirmed with CMS officials that violations in this category were related to inappropriate marketing by agents and that most deficiencies associated with inappropriate marketing that CMS identified fell under this audit category. This category can include deficiencies related to MA organizations’ internal operations related to agent oversight, such as agent training programs and processes for monitoring agent behavior, and deficiencies related to agent-related noncompliance with marketing requirements. However, it is possible that some instances of inappropriate marketing may have been included in violations identified in other categories. We note this limitation in the report. When MA organizations requested CAPs from multiple contracts under the same audit ID, we counted these as one CAP. Similarly, when there were multiple audits for the same parent organization that had the same date for the CAP request, we counted these as one CAP. 
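The CAP counting rules just described (CAPs from multiple contracts under one audit ID count once, as do multiple audits of the same parent organization sharing a CAP request date) amount to a simple deduplication. A hedged sketch, with hypothetical record fields and invented data rather than CMS's actual audit records:

```python
# Sketch of the CAP deduplication rules described above.
# Field names ("audit_id", "parent_org", "request_date") and the sample
# records are hypothetical, not CMS's actual data layout.

def count_caps(records):
    """Count unique CAPs, collapsing duplicates by audit ID and by
    (parent organization, CAP request date)."""
    seen_audit_ids = set()
    seen_org_dates = set()
    count = 0
    for rec in records:
        org_date = (rec["parent_org"], rec["request_date"])
        if rec["audit_id"] in seen_audit_ids or org_date in seen_org_dates:
            continue  # already counted as one CAP
        seen_audit_ids.add(rec["audit_id"])
        seen_org_dates.add(org_date)
        count += 1
    return count

caps = [
    # two contracts under the same audit ID -> one CAP
    {"audit_id": "A1", "parent_org": "OrgX", "request_date": "2008-07-01"},
    {"audit_id": "A1", "parent_org": "OrgX", "request_date": "2008-07-01"},
    # two audits of the same parent org, same request date -> one CAP
    {"audit_id": "A2", "parent_org": "OrgY", "request_date": "2008-08-15"},
    {"audit_id": "A3", "parent_org": "OrgY", "request_date": "2008-08-15"},
    # a distinct CAP
    {"audit_id": "A4", "parent_org": "OrgZ", "request_date": "2008-09-01"},
]
print(count_caps(caps))  # prints 3
```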
In calculating the length of time CAPs remained open, we used the dates for when CMS accepted and closed the corrective action element for the specific inappropriate marketing deficiency, rather than the dates for which the entire CAP was accepted and closed. CMS requests CAPs to address multiple violations of agency requirements in addition to marketing; CMS may accept and close individual corrective actions for specific deficiencies under a CAP at different times. We identified enforcement actions related to inappropriate marketing and noncompliance with agent-related marketing requirements by performing a content analysis of the enforcement actions listed on CMS's Web site. We reviewed the sanction letters CMS sent to these organizations or interviewed agency officials to determine that the enforcement actions we had identified were related to inappropriate marketing. Based on our review of the data and interviews with CMS officials, we concluded that the compliance and enforcement action data were sufficiently reliable for our purposes. In June 2008, CMS began tracking the number of compliance letters—initial notices of noncompliance and warning letters—sent to MA organizations. For the period of study, CMS issued one initial notice of noncompliance and 76 warning letters for various instances or areas of noncompliance. (See table 3.) During the 2008 annual election period, CMS conducted multiple surveillance activities: analyzing rates of beneficiary allegations of agent-related noncompliance in the agency's complaint tracking module, and secret shopping of MA organizations' marketing events and customer call centers. CMS also reviewed MA organizations' compliance with required agent commission limits. CMS sent the compliance letters as a result of its findings from these surveillance activities and its review of compliance with required agent commission limits.
CMS’s MA organization account management SOPs provide general guidelines for when a compliance action may be appropriate: the MA organization has engaged in an activity that is egregious in nature, the MA organization has demonstrated sustained poor performance over a period of time, the issue involves a large number of MA beneficiaries, or the issue raises significant compliance concerns, such as the MA organization not meeting certain contractual requirements. CMS has implemented its general guidelines for compliance actions such that the agency has taken such actions based on specific oversight activities for which it has explicit criteria. Specifically, the agency issued the majority of notices of noncompliance and warning letters based on MA organizations’ noncompliance with required agent commission limits or the results of surveillance activities (such as analysis of inappropriate marketing complaints and secret shopper activities). CMS required the majority of CAPs based on the results of audits of MA organizations. Required agent commission limits, surveillance activities, and audits all have explicit criteria for assessing compliance with agency requirements. For example, of the 76 warning letters CMS sent to MA organizations for inappropriate marketing practices, 44 were based on noncompliance with required agent commission limits and 30 were based on the results of surveillance activities. CMS regulations published in September 2008 require MA organizations to establish reasonable agent commission limits; CMS sent warning letters to those MA organizations that had commission limits that the agency determined were unreasonable. As part of the surveillance activities, during the 2008 annual election period, CMS analyzed the rates of inappropriate marketing complaints and sent warning letters to those MA organizations that met the criteria of having more than 15 complaints per 1,000 beneficiaries. 
During the same period, CMS sent warning letters to those MA organizations that met the criteria of committing one or more violations at secret shopper events, based on specific marketing guidelines that the agency developed. Similarly, 36 of 37 CAPs for deficiencies associated with inappropriate marketing were required by CMS based on audit findings. For its audit activities, CMS's audit SOPs contain specific criteria for assessing compliance with requirements in defined areas that trigger the agency to require MA organizations to develop and implement CAPs for identified deficiencies. CMS's MA organization account management SOPs and its audit SOPs provide guidelines for when an enforcement action may be appropriate: all compliance actions have been exhausted, the MA organization has a repeat deficiency, an area of noncompliance could result in harm to one or more Medicare beneficiaries, or an area of noncompliance is deemed a "substantial failure" of Medicare requirements. Unlike compliance actions, criteria for enforcement actions are derived from federal statute and regulations. Enforcement actions are expressly provided for in federal statute as a remedy for certain violations. CMS officials told us that the agency initiates enforcement decisions when particular instances of noncompliance or failures to correct deficiencies warrant a higher level of intervention. According to CMS officials, they review various sources of evidence in determining whether to initiate an enforcement action, including beneficiary complaints; results of surveillance activities, such as secret shopper observations; problems self-reported by the MA organization; data from audits; reporting requirements; and information from DOIs and SHIPs.
In addition, when making enforcement decisions, CMS officials said that they consider the nature, scope, and severity of the particular non-compliance, how many beneficiaries have been or potentially could be adversely affected by the noncompliance, whether the MA organization has failed to address a serious compliance deficiency for which it has received prior notice and opportunity to correct, whether the compliance deficiency has been previously corrected but recurred, and CMS precedent in taking enforcement actions in similar circumstances. However, CMS officials said they prefer to resolve cases through lower levels of intervention given the resources required to investigate cases and the potential disruption to beneficiaries. Other contributors to this report include Christine Brudevold, Assistant Director; Shana Deitch; Gregory Giusto; Kevin Milne; Elizabeth T. Morrison; Michael Rose; and Hemi Tewarson.

Members of Congress and state agencies have raised questions about complaints that some Medicare Advantage (MA) organizations and their agents inappropriately marketed their health plans to Medicare beneficiaries. Inappropriate marketing may include activities such as providing inaccurate information about covered benefits and conducting prohibited marketing practices. The Centers for Medicare & Medicaid Services (CMS) is responsible for oversight of MA organizations and their plans. The Government Accountability Office (GAO) was asked to examine (1) the extent to which CMS has taken compliance and enforcement actions, (2) how CMS has helped beneficiaries affected by inappropriate marketing and the problems beneficiaries have encountered, and (3) information CMS has about the extent of inappropriate marketing. To do this work, GAO reviewed relevant laws and policies; analyzed Medicare data on beneficiary complaints, compliance actions and enforcement actions; and interviewed officials from CMS and selected state departments of insurance, state health insurance assistance programs, and MA organizations. CMS took compliance and enforcement actions for inappropriate marketing against at least 73 organizations that sponsored MA plans from January 2006 through February 2009.
While the number of MA organizations varied during that time period, 192 MA organizations offered MA plans as of March 2009. Actions taken ranged from initial notices of noncompliance and warning letters to more punitive measures, such as civil money penalties and suspensions of marketing and enrollment. Nineteen of the 73 MA organizations had multiple types of actions taken against them. CMS helped beneficiaries who experienced inappropriate marketing by providing special election periods (SEP) through which beneficiaries could disenroll from their MA plan and enroll in new coverage without waiting for the twice yearly regular enrollment periods. However, some beneficiaries experienced financial or access-to-care problems as a result of inappropriate marketing that could not be addressed by a SEP. Financial hardships occurred, for example, when beneficiaries disenrolled from their MA plans and the withholding of premiums from Social Security for their former MA plan was not stopped promptly. In other cases, beneficiaries did not realize they had been enrolled in an MA plan until they tried to access services. Some of these beneficiaries experienced disruption of their access to providers and medications because their providers did not participate in the MA plan. CMS has limited information about the number of beneficiaries who experienced inappropriate marketing. Some beneficiaries who experienced inappropriate marketing may have exercised their option to disenroll from their MA plans during regular enrollment periods and might not have notified CMS of the marketing problems they encountered. For example, about 21 percent of beneficiaries disenrolled during the regular enrollment periods in 2007 from one type of MA plan that CMS officials acknowledged had a high incidence of inappropriate marketing. 
However, CMS discontinued a survey after 2005 that collected information on reasons for disenrollment and could have provided important information about the extent to which the disenrollments were the result of inappropriate marketing. CMS officials said that they plan to reinstitute a survey on disenrollment reasons in late summer 2010. CMS also has limited information about the number of beneficiaries who experienced inappropriate marketing because it did not directly track the number of SEP disenrollments. CMS did estimate the number of SEPs it provided for inappropriate marketing, but its estimates were based on data that were unreliable. |
This section provides an overview of USPTO’s patent examination process, patent infringement litigation and challenges, and USPTO’s Enhanced Patent Quality Initiative. A list of our prior work related to patents and intellectual property is included at the end of this report. When USPTO receives a patent application, the agency assigns it to a division of patent examiners with relevant technology expertise called a technology center. There are 11 technology centers focusing on everything from biotechnology to mechanical inventions. After the application is assigned to a technology center, it is then assigned to an individual examiner who is responsible for the examination, or prosecution, of the application. Figure 1 shows the key steps in the patent prosecution process. The focus of patent examination is determining whether the invention in a patent application satisfies the statutory requirements for a patent, including that the invention be novel, useful, not obvious, and clearly described. Generally, prior patents, patent applications, or publications describing an invention, among other things, are known as prior art. During patent examination, the examiner, among other things, compares an application to the prior art to determine whether the invention is novel and not obvious. Finding prior art is the most time consuming part of patent examination, according to our report on prior art. Applicants are not required to search for prior art before submitting their application, although they are required to notify examiners of material prior art they know about. 
A patent application includes the "specification" and at least one "claim." By statute, the specification is to contain "a written description of the invention, and of the manner and process of making it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains to make and use the invention." The law requires further that the specification "shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor regards as the invention." A patent application's claims define the legal boundaries of the invention for which patent protection is sought. USPTO's data show that, as of April 2016, the average time between filing an application and an examiner's initial decision on the application was about 16 months, and that it takes an average of 26 months after an application is submitted for USPTO to complete the examination of an application. Due to the current inventory of applications awaiting examination, USPTO is not able to begin examining patent applications upon receiving them—as of April 2016, USPTO had a backlog of about 550,000 unexamined applications. Through the initial decision, or first action on the merits, examiners initially notify applicants about the patentability of their inventions. To avoid lengthy back-and-forth exchanges, the examiner is encouraged to identify all of the problems with the application under the patent statutes and rules during the first "office action"—a USPTO policy called "compact prosecution." The applicant is able to respond to the examiner after the first office action, and interviews are often used so that the examiner and applicant can clarify their respective positions, as well as the scope of the claimed invention. Ultimately, and sometimes after many months, the examiner then issues a final rejection or allows the claims in the application.
If applicants receive a final rejection, they may file a request for continued examination, which requires a new submission and the payment of additional fees, and the examiner will continue examining the application. There is no limit to the number of times an applicant may request continued examination for an application, but USPTO officials say it is rare to have three or more requests for continuing examination filed for a single application. Examination ends with an issued patent, or when the applicant abandons the application—there are no terminal rejections. There are often a dozen or more claims per patent, and they can often be difficult for a layperson to understand, according to legal researchers. For example, one claim for a cardboard coffee cup insulator begins by referring to “a recyclable, insulating beverage container holder, comprising a corrugated tubular member comprising cellulosic material and at least a first opening therein for receiving and retaining a beverage container.” A patent application’s claims can be written broadly or be more narrowly defined, according to legal researchers, and applicants can change the wording of claims—which can affect their scope—based on examiner feedback during examination. Examiners can suggest that applicants amend their claims by adding words to their claims to address statutory impediments to issuance. For example, adding the word “corrugated” to modify “tubular member” in the example claim above narrows the scope of the claimed invention. Companies often prefer broader patent claims that make it less likely a competitor would be able to make a small change to its invention to avoid infringement. Patents are a property right and their claims define their boundaries. In some cases, patent claims define the scope of the invention by encompassing an entire function––like sending an e-mail––rather than the specific means of performing that function. 
While "functional claiming" is permitted by statute, we reported in 2013 that patents that include functional claiming language were more likely to be unclear and to be disputed in court. For example, if the pencil were patented as a "mechanism for writing," the owner of the patent could theoretically sue manufacturers of different technologies for infringement, including pens and markers. As of May 2015, USPTO had nearly 8,300 patent examiners across the eight technology centers that we reviewed. The agency uses the General Schedule (GS) classification system for patent examiners, whose levels range from GS-5 to GS-15. Examiners at the GS-14 level or above (44 percent of the examiners in the technology centers we reviewed) are referred to as "primary examiners" and may accept or reject a patent application without additional review. This level of authority is in contrast to junior examiners—most examiners below the GS-14 level—whose work must first be reviewed by a supervisory or primary patent examiner before it can be sent to the applicant. Examiners are rated based on their production, or the number of examination tasks they perform, among other factors. The number of examination tasks that an examiner is expected to perform is set based on the examiner's technology area and experience level. USPTO allots more time to review applications to examiners who deal with more complex technologies. For example, examiners working on artificial intelligence patent applications are given an average of about 31 hours to complete an examination, while those working on patent applications for exercise devices are given an average of about 17 hours. As examiners are promoted on the GS scale, the average number of hours they are allotted to work on each application declines, as does the level of review from their supervisors.
Primary examiners have the least amount of time to examine patent applications, and the applications they review undergo the least amount of supervisory review, in part because their experience allows them to work more efficiently and effectively, according to USPTO officials. As examiners rise from junior to primary status, their examination time is roughly cut in half. According to USPTO officials, examiners spend about 22 hours on average on each application from start to final determination, with a low of about 11 hours for some primary examiners in the least complex technologies and a high of about 60 hours for an entry-level junior examiner in more complex technologies. A few studies have shown that there are differences in issued patents depending on how much time examiners are given to complete their examinations. For example, an academic paper from 2012 found that more experienced examiners cite less prior art, are more likely to grant patents rather than reject them, and are more likely to grant patents without any preliminary rejections. Similarly, other researchers have found that examiners put less effort into searching for prior art when they are given less time to review an application. For example, the National Bureau of Economic Research published a paper in 2014 that found that when examiners were allotted less time to conduct a patent examination, they were less likely to make time-intensive prior art rejections and more likely to grant a patent. When a patent right is not clearly defined, it can lead to boundary disputes, often in the form of infringement lawsuits. Although litigated patents are a small percentage of issued patents, low-quality patents are more likely to be asserted in patent infringement lawsuits because, according to some economists, the less clear the claim boundaries are, the more likely it is that others will infringe the patent or will continue to infringe when confronted by the patent owner.
Patent owners can bring infringement lawsuits against anyone who uses, makes, sells, offers to sell, or imports the patented invention without authorization. If the court finds that infringement has occurred, it must award the patent owner damages adequate to compensate for the infringement. During an infringement case, the accused infringer may seek to have the infringement lawsuit dismissed by showing, by clear and convincing evidence, that the patent at issue is invalid. However, because most of the roughly 4,000 patent infringement lawsuits filed each year settle before the court ever makes a determination of patent validity, some patent owners asserting low-quality patents may not be concerned with the risk that the courts will invalidate their patents. When the courts do rule on validity, they generally invalidate almost half of the patents that are challenged, according to academic research. Accused infringers can also challenge a patent's validity outside of an infringement lawsuit in administrative proceedings at USPTO's Patent Trial and Appeal Board (PTAB). Patent challenges at PTAB are often initiated by individuals or firms that have been sued for infringing a particular patent, so the patents that appear in these proceedings are often the same ones that appear in patent infringement suits. The challengers in these proceedings seek to present evidence showing that the patent claims should not have been granted because they failed to meet a statutory patentability requirement. The PTAB proceedings are a lower-cost alternative to the federal courts, where infringement suits are often very expensive. As of March 2016, there had been around 4,700 patent challenges filed with the PTAB since its inception in 2012, and about 60 percent of these challenged patents are related to computers and software.
About 30 percent of the PTAB proceedings have reached a final decision, and nearly 75 percent of those final decisions have resulted in all of the challenged claims being held unpatentable. In February 2015, USPTO launched an Enhanced Patent Quality Initiative designed to improve the quality of patents. According to USPTO, it started its quality initiative because the agency had successfully reduced its backlog of patent applications, and had the financial resources to consider longer-term improvements to patent quality due to fee setting authority provided by the America Invents Act. As part of its initiative, the agency has taken a number of actions, including the following:

- Creating a new leadership role and Patent Quality Office: The agency created a new senior position for overseeing patent quality—the Deputy Commissioner for Patent Quality—in January 2015 to provide a dedicated focus on the agency's patent quality efforts.

- Patent Quality Summit: USPTO held its first ever Patent Quality Summit in March 2015. The Summit was designed for the public, including internal and external stakeholders, to provide input to USPTO about patent quality, specifically how the agency could guarantee the most efficient process to review applications and ensure the issuance of the highest quality patents, according to the USPTO Summit website. According to USPTO, the agency received 1,206 ideas for improving patent quality from all sources, including the Patent Quality Summit and examiner forums.

- Evolving programs of the Enhanced Patent Quality Initiative: USPTO established 11 initiatives based on feedback from internal and external stakeholders to help achieve its goals to enhance patent quality, according to its Enhanced Patent Quality Initiative website. These initiatives are in various stages, ranging from early development to completed, according to a senior USPTO official.
Some of the evolving programs include the following:

- Clarity of the Record Pilot Program: USPTO began this pilot in February 2016 with about 130 examiners, and the pilot is expected to end in August 2016. The pilot seeks to develop best practices for examiners to enhance the clarity of all aspects of the prosecution record, and to study the effect of implementing these best practices during examination.

- Clarity and Correctness Data Capture (Master Review Form): This effort is expected to replace USPTO's current quality assurance and supervisory approaches to reviewing examiners' work and will allow the agency, for the first time, to collect consistent data across all the reviews. Quality assurance officials also told us that using the form would allow their office to have data on 50 percent more reviews this year than in the past, resulting in a total of about 12,000 reviews of examiners' work this year, compared with about 8,000 reviews per year in the past. USPTO expects to implement the form agency-wide in fiscal year 2017.

In addition to the actions taken under its Enhanced Patent Quality Initiative, USPTO's Office of Patent Training develops and implements training for examiners on a variety of topics, with a focus on legal and policy matters. For example, USPTO recently provided examiners with training on functional claiming. The number of federal district court filings of new patent infringement lawsuits generally increased between 2007 and 2015, from more than 2,000 suits in 2007 to more than 5,000 suits in 2015 (see fig. 2). Because lawsuits can include multiple defendants, we also analyzed data on federal district court filings at the defendant level to account for lawsuits in which a patent was asserted against multiple defendants.
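The suit-level versus defendant-level counting can be sketched as follows: a suit naming several defendants counts once at the suit level but once per named defendant at the defendant level. The filing records below are invented for illustration, not actual district court data:

```python
# Sketch of suit-level vs. defendant-level counting described above.
# A suit naming several defendants counts once per suit but once per
# defendant at the defendant level. Filings are invented examples.
from collections import Counter

filings = [
    {"year": 2007, "defendants": ["A Corp", "B Inc"]},
    {"year": 2007, "defendants": ["C LLC"]},
    {"year": 2015, "defendants": ["D Co", "E Ltd", "F Inc"]},
]

# One count per suit, keyed by filing year.
suits_per_year = Counter(f["year"] for f in filings)

# One count per named defendant, keyed by filing year.
defendants_per_year = Counter()
for f in filings:
    defendants_per_year[f["year"]] += len(f["defendants"])

print(suits_per_year[2007], defendants_per_year[2007])  # prints 2 3
```

Counting at the defendant level matters because a single suit asserting one patent against many companies looks like one event at the suit level but affects many accused infringers.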
Looking at the data in this way, we found that the number of defendants in new patent infringement suits filed in federal district courts increased from about 5,000 defendants in 2007 to more than 8,000 defendants in 2015, as shown in figure 2. According to some stakeholders with whom we spoke, the decreases in litigation that occurred in 2014, both in the number of suits and defendants, were likely due to a key Supreme Court decision in that year, Alice Corp. Pty. Ltd. v. CLS Bank Int'l. The Court in the Alice decision held that where a patent claim is based on an abstract idea, which is not patentable, merely using generic computer implementation does not transform that idea into a patent-eligible invention. This decision has limited the validity of what some stakeholders considered to be overly broad and low-quality patents, thus preventing them from being used to file infringement lawsuits. In addition, the ability of potential defendants in patent infringement suits to file inter partes challenges to the validity of a patent at the PTAB beginning in 2012 may have made some patent owners reluctant to bring infringement suits, which could have contributed to the decline in the number of suits, according to some stakeholders we spoke with.

Most patent infringement suits are filed in just a few of the 94 federal district courts, and these suits may generally be brought in any district in the country where the allegedly infringing products are sold. Patent infringement suits are increasingly being filed in the predominantly rural Eastern District of Texas (see fig. 3). In 2007, about 20 percent of all patent infringement defendants were named in cases filed in the Eastern District of Texas, and this percentage increased to almost 50 percent in 2015.
Historically, the Eastern District of Texas has been attractive to patent owners filing infringement lawsuits because of the speed at which suits moved to trial and the perception of plaintiff-friendly juries, as we found in 2013. In addition, according to one published paper we reviewed, judges in the Eastern District of Texas have implemented a number of court rules and practices to attract patent infringement suits to their district. For example, according to the paper and to some stakeholders we interviewed, judges in the Eastern District of Texas have not been suspending infringement lawsuits pending patent validity challenges at the PTAB, as some had anticipated. In addition, most recently, according to some stakeholders, some judges in Eastern Texas have been reluctant to dismiss infringement allegations at early stages of the litigation based on the Alice decision, which calls into question the validity of certain types of software-related patents. Our analysis shows a sharp increase from 2014 to 2015 in the number of defendants accused of infringing software-related patents in suits filed in Eastern Texas; such defendants have averaged about 85 percent of the defendants in that district each year between 2007 and 2015. None of the other top six district courts saw any large increase in patent infringement suits in 2015, whether or not the suits involved software-related patents. This indicates that the increase in suits in Eastern Texas drove the overall increase in suits filed nationwide in 2015.

The distribution of defendants in patent infringement litigation across technology areas was relatively stable between 2007 and 2015, with the exception of lawsuits involving patents for computers and communications technologies (see fig. 4). This area includes technologies related to computer hardware and software. The percentage of defendants in lawsuits involving patents in this technology area increased from 38 percent in 2007 to 62 percent in 2015.
Patents related to computer and communications technologies are easier to infringe unintentionally because they are more likely to be unclear and overly broad, according to some stakeholders we interviewed and some of the published research we reviewed. In addition, according to a 2003 Federal Trade Commission study, in industries such as computers and communications, firms need to avoid infringing dozens, hundreds, or even thousands of patents to produce just one commercial product. Therefore, it is particularly challenging to develop innovative new products where there are thousands of interrelated patents covering similar technologies, making it nearly impossible to avoid infringement, according to some stakeholders we interviewed. These stakeholders also noted that unclear and overly broad patents—an indicator of lower quality—can harm innovation regardless of whether the patent owner files an infringement lawsuit because just the threat of an infringement suit can deter the development of new products. Further, the majority of defendants in patent infringement suits were involved in suits with software-related patents each year from 2009 through 2015 (see fig. 5). The number of defendants involved in infringement lawsuits where software-related patents were asserted generally increased through 2013, then dropped in 2014. As with computer and communications technologies, some stakeholders told us that software-related patents are easier to infringe because they also often have overly broad claims, an indicator of low-quality patents.

USPTO has taken actions to address patent quality—most notably through its Enhanced Patent Quality Initiative—but there are additional opportunities for USPTO to improve patent quality. Specifically, USPTO does not have a consistent definition of patent quality and has not fully developed specific performance measures to assess whether its efforts are affecting patent quality.
Further, USPTO has not fully assessed whether the time it allots for examination and the monetary incentives it gives examiners for completing patent examinations faster affect patent quality. In addition, USPTO has limited data available on PTAB decisions. Finally, USPTO has not fully evaluated the effects of other policies and procedures on patent quality.

USPTO does not currently have a consistent definition for patent quality, which may limit its ability to assess the effects of its examination policies and review processes—as well as its Enhanced Patent Quality Initiative—on patent quality. Several high-level USPTO officials and the four supervisory patent examiners that we interviewed told us there is no consistent definition of patent quality at USPTO to guide the agency in its overall operations. One examiner wrote in response to our open-ended survey question that USPTO appears to have no definition of patent quality and that without a working definition, management's focus on patent quality is meaningless. Most of the stakeholders that we spoke with—including former high-ranking USPTO officials, academics, and non-governmental organizations—said that it is important for USPTO to develop a consistent definition of patent quality. According to USPTO officials, one challenge in developing a consistent definition is that the patent community holds varying definitions of patent quality. For example, patent attorneys who defend companies against patent infringement generally tend to favor clearly defined patents that are easily understood in patent disputes, while patent owners tend to prefer less well defined patents that may offer broader coverage for their invention. USPTO officials offered a variety of definitions of patent quality. Several officials focused on validity, or patentability (i.e., meeting statutory requirements), and clarity, although we could not find this in agency guidance or documents as USPTO's definition.
Some officials also included in their definition aspects of patent examination, such as whether an application requires a limited amount of time for review by examiners. While USPTO has found it difficult to clearly define patent quality, most of the stakeholders we spoke with told us that they would define patent quality as patent validity—that is, a quality patent would meet all the statutory requirements for patentability and would be upheld if challenged in a lawsuit or PTAB proceeding. According to federal standards for internal control, one important internal control activity is that management establish clear, consistent agency objectives that allow the agency to assess the risks it faces from external and internal sources. However, USPTO's objectives may not be clear because it does not have a consistent definition of patent quality. Four supervisory patent examiners we interviewed told us that without a consistent definition of patent quality, USPTO is unable to standardize practices to improve patent quality. As a result, it is hard for USPTO to define, measure, and work toward quality goals, according to these supervisors. Moreover, without a definition of patent quality, USPTO is at risk of having agency officials work inconsistently or at cross-purposes in their attempts to fulfill the call to improve patent quality, based on individual understandings of what patent quality means to each staff person.

Further, although USPTO is taking steps to improve patent quality metrics as part of the Enhanced Patent Quality Initiative, it has not established specific goals or performance measures related to its strategic goal to optimize patent quality and timeliness, which may limit its ability to assess potential effects of its efforts on patent quality.
USPTO's 2014-2018 strategic plan includes the goal to "optimize patent quality and timeliness," but the patent quality objective does not include specific performance measures that fully assess progress toward the goal. For example, USPTO names seven objectives to achieve this goal, but six of the seven objectives focus on timeliness, customer service, and process or production goals rather than patent quality. For the one patent quality objective, USPTO cites improving patent quality data and maximizing the use of such data as two of its four performance measures (see fig. 6). The Government Performance and Results Modernization Act of 2010 (GPRAMA) requires, among other things, that agencies establish objective, quantifiable, and measurable performance goals, and establish performance indicators to measure progress toward each performance goal. Although USPTO is not required to comply with GPRAMA, we have previously reported that the practices established by the law, such as establishing goals and performance indicators, can serve as leading practices for organizational performance management at lower levels within federal agencies, such as individual programs or initiatives. However, USPTO has not established such goals and indicators: the office uses general terms to describe these two performance measures and does not specify, in measurable and quantifiable terms, how the agency plans to define or collect patent quality data or how, specifically, it will maximize the use of such data. Without such goals and indicators, USPTO cannot determine whether it is meeting its goal of enhancing patent quality.

USPTO has not fully assessed the time allotted for patent examinations or its monetary incentive system, which may be at odds with patent quality. Patent examiners are rated annually on their production and docket management, among other elements.
USPTO provides examiners with monetary incentives, or bonuses, for timeliness and production, but does not offer a bonus for producing high-quality work. Three USPTO officials told us that there are trade-offs between timeliness and patent quality, explaining that examiners cannot examine patents quickly and, at the same time, grant patents that are of the highest quality. One of these officials told us that the office's focus on timeliness currently trumps high-quality work at the agency, potentially increasing the tension between the goals of timeliness and quality. Time allotments and incentives can lead to pressure for examiners to complete their work quickly. Most of the stakeholders we spoke with told us that examiners' time pressures are one of the central challenges for patent quality. The results of our survey of patent examiners confirm that examiners are experiencing time pressures. Specifically, on the basis of our survey, we estimate that, given a typical workload, about 70 percent of examiners have less time than needed to complete a thorough examination. In addition, we estimate that more than 70 percent of examiners worked voluntary or uncompensated overtime in the past 6 months to meet their minimum production goals (see fig. 7). Further, incentives appear to motivate examiners to complete their work quickly. For example, on the basis of our survey, we estimate that for 93 percent of examiners, receiving bonuses for achieving production goals motivates them to go above and beyond their base level of performance. Additionally, on the basis of our survey, we estimate that nearly 70 percent of examiners experience pressure to avoid time-consuming office actions. In addition, a few examiners we interviewed said that the system as currently designed incentivizes an examiner to issue a patent instead of issuing a final rejection, suggesting that when pressed for time, examiners tend toward granting patents.
For example, USPTO rules and procedures do not restrict the number of requests for continued examination an applicant may file, so issuing the patent is the only means USPTO has to end the examination process. This creates an environment where patents may be granted that do not fully meet patentability standards. In a 2015 study, USPTO economists found that patents examined by primary examiners had 26 percent better odds of appearing in a patent infringement lawsuit compared with similar patents examined by junior examiners, which suggests that primary examiners, despite their experience, may not have adequate time to ensure that the patents they issue are always high quality. USPTO officials acknowledged that this difference was likely due to the additional time and supervisory review that junior examiners receive. The precise effects of the time allotted for examinations and incentives on quality are unclear because USPTO has not fully analyzed the effects of current time allotments or incentives on an examiner's ability to perform a thorough examination. USPTO created the time allotments in the 1970s; its most recent adjustment came between fiscal years 2010 and 2012, when it gave all patent examiners a total of 2.5 additional hours per application. According to federal standards for internal control, agencies should provide staff with the right structure, incentives, and responsibilities to make operational success possible. The right incentives and structure allow staff to be aligned with the agency's objectives. Without analyzing the time and incentives needed for examiners to complete thorough examinations, USPTO cannot be assured that its current time allotments and incentives support the agency's goal to optimize patent quality.

The PTAB has been in operation since 2012 with hundreds of decisions made to date on challenges to existing patents; however, statistics kept by PTAB staff about the results of PTAB decisions are limited.
For example, the data available as of March 2016 did not specify which precise claims in a patent were found to be unpatentable and why, or which sources of prior art were used in the proceeding, both of which are key data fields for potential analysis. Moreover, the data have not been widely shared within USPTO. According to a PTAB official familiar with the data, only a few USPTO staff in the Office of Patent Legal Administration have asked for and received the data that PTAB staff have compiled to date. Information on the results of the PTAB proceedings is not regularly provided to USPTO managers and supervisory patent examiners in the Technology Centers, according to PTAB staff. According to federal standards for internal control, information from internal and external sources should be obtained and provided to management as a part of the agency's reporting on operational performance relative to established objectives. In addition, pertinent information should be identified, captured, and distributed in a form and time frame that permits people to perform their duties efficiently. However, USPTO managers and staff in the Technology Centers do not have data on trends in the outcomes of PTAB's post-grant proceedings because USPTO has not systematically pulled information from PTAB decisions, has not widely shared PTAB information within the agency, and has not analyzed the data for trends in patent invalidations to identify whether additional training, guidance, or other actions are needed to address issues related to quality. Without a process for USPTO managers and staff to readily access the reasoning underlying PTAB's invalidation of patents or information on possible trends that may exist with those problem patents, USPTO may be overlooking critical information on problems the PTAB has found with patents issued by the agency.
As a result, USPTO may be missing opportunities for using this information to make decisions that could help to improve patent quality, such as improving guidance or providing additional training on potential patent quality issues, and USPTO's examiners are unable to learn lessons from the PTAB decisions.

USPTO has not fully evaluated the effects of other policies and procedures on patent quality, which may also affect its ability to issue high-quality patents. Through interviews with USPTO officials and stakeholders and our examiner survey, we identified several policies and procedures that could affect patent quality:

Compact prosecution: This USPTO policy encourages examiners to complete an examination within two office actions and to address all statutory issues of an application in the first office action. One examiner commented in our survey that compact prosecution compels examiners to guess what unclear claims mean in order to search for prior art. Further, based on our survey, we estimate that a change in policy that would not require examiners to address every issue in the first office action would help about half of USPTO examiners perform examinations more effectively. In addition, some stakeholders raised concerns about the effect of compact prosecution on patent quality, with one stakeholder emphasizing that the policy does not work well in an environment where patent applications are increasingly complex.

Unlimited number of patent claims: Applicants are allowed to include any number of claims in a patent application. According to USPTO officials, applicants' ability to file unlimited claims can have a negative effect on quality, because it is more difficult for examiners to fully review an application with numerous claims in the time allotted; this was also supported by our survey results.
The 2015 study by USPTO economists also supported this finding; it found that patents that had more claims had better odds of appearing in patent infringement lawsuits.

Requests for continued examination: Applicants are currently allowed to file an unlimited number of requests for continued examination, which is a request by an applicant to reopen examination of the patent application after the prosecution of the application has been closed. Applicants request continued examination most often after final rejection of an application, according to USPTO officials. Such requests provide applicants with virtually unlimited attempts to secure a patent, which is problematic for patent quality, according to some stakeholders. Some stakeholders also told us that such unlimited requests can wear down examiners, making them more likely to eventually grant the patent. While USPTO has reached out to patent applicants to learn why they were using such continued examinations, the number of requests for continued examination continues to pose a burden for the agency's examiners.

Federal standards for internal control direct agencies to comprehensively identify risks and consider all significant interactions between the entity and other parties. Once these risks have been identified, they should be evaluated for their potential effects, including the significance of the risks and the likelihood of their occurrence. USPTO officials acknowledged that some of the agency policies discussed above could affect patent quality; however, officials did not know the extent of the effects because USPTO has not done an evaluation to determine the potential effects. Without evaluating the effects of these policies on patent quality, USPTO is at risk of continuing practices that may adversely affect patent quality. Additionally, USPTO policies and procedures generally require clarity in issued patents.
On the basis of our survey, we estimate that additional claim clarity requirements for applicants would help more than 80 percent of examiners do their jobs more effectively. We identified two areas where USPTO has encountered challenges with claim clarity in patent applications that can affect patent quality: (1) unclear terms, and (2) unclear or broadly worded claims, including the use of functional claim language. First, patent applications that include unclear terms contribute to challenges with patent quality. On the basis of our survey, we estimate that 45 percent of examiners always or often encounter terms that are not well defined in the patent applications' specifications. Most of the stakeholders we interviewed (including legal scholars and former high-ranking USPTO officials), as well as four supervisory patent examiners and the majority of examiners responding to our survey, indicated that requiring applicants to provide a glossary and define their terms would help to improve patent quality. Further, USPTO officials said that examiners and applicants who participated in USPTO's Glossary Pilot Program generally indicated benefits to including a glossary and that the glossary improved claim clarity. Second, patent applications that include unclear or broadly worded claims, including those that use functional claim language, contribute to challenges with patent quality. For example, on the basis of our survey, we estimate that nearly 90 percent of examiners always or often encounter broadly worded claims in applications they review, and for nearly two-thirds of examiners, applications with broadly worded claims make completing a thorough examination more difficult. In addition, dealing with functional claims, especially for software-related patents, can be time-consuming and difficult, particularly if examiners are not aware of the applicant's intent to use functional claiming language under section 112, according to a few examiners we interviewed.
On the basis of our survey, we estimate that more than 40 percent of examiners experience pressure to avoid making rejections that relate to claim clarity (section 112). Also, on the basis of our survey data, we estimate that a smaller percentage of patent examiners (41 percent) considered correctly applying the patentability standard for functional claims (section 112(f)) to be very important than considered adhering to the novelty and nonobviousness standards (sections 102 and 103) to be very important (81 percent). In a 2015 study, USPTO economists found that patents with broader claims—patents with fewer words in their claims—had better odds of appearing in patent infringement lawsuits than similar patents. Further, that analysis found that patents that contained functional claim language had 40 percent better odds of appearing in patent infringement lawsuits than similar patents that did not contain this language. Some stakeholders we interviewed said requiring "claim charts" in patent applications could help improve patent quality. Claim charts use one column to present the claim and another column to present the limitations and boundaries of that claim (see table 1). These charts, which are commonly used in the federal courts and in PTAB's post-grant proceedings, provide additional information on the boundaries of a patent claim. Similarly, according to some stakeholders, having applicants include a functional claim check box to indicate whether they were using functional claim language under section 112(f) could help to improve patent quality. If the box is checked, it indicates that the examiner should make sure that the functional claim is supported in the patent specification. By statute, an application for a patent must particularly point out and distinctly claim the subject matter of the invention.
USPTO regulations require that the application include a description of the process of making and using the invention in such full, clear, concise, and exact terms as to enable a person skilled in the art to make and use the invention. However, examiners continue to encounter problems with patent application clarity because USPTO does not specifically require patent applicants to clearly define the terms used in their applications, provide additional means to clearly describe claim boundaries, or clearly identify when they are using functional claiming language. Without making use of tools to improve the clarity of patent applications, such as having applicants include a glossary to define the terms used in the application, provide a claim chart, or indicate the use of functional claims through a check box, the agency is at risk of issuing unclear patents that may not comply with statutory requirements.

The U.S. patent system plays a vital role in our nation's economy by promoting innovation and supporting millions of jobs in innovation-rich sectors. However, the quality of patents issued by the USPTO has come under scrutiny in recent years, and our analysis suggests that some agency policies and practices may negatively affect the quality of the patents USPTO awards. USPTO has taken several actions to elevate the importance of patent quality within the agency, but additional opportunities exist to improve the effectiveness of the agency's patent quality efforts. USPTO does not have a consistent definition of patent quality that is clearly articulated in agency guidance, or fully developed measurable goals and performance indicators to guide and evaluate work toward the agency's quality goals. Without a consistent definition of patent quality, USPTO is at risk of having its staff work at cross-purposes to improve patent quality based on their individual definitions of patent quality.
Further, without improvements to measurable goals and performance indicators, USPTO is at risk of not being able to fully measure and capture key performance data on whether the agency is meeting its strategic goal to optimize patent quality. USPTO's policies regarding the time allotted to complete patent application reviews, and its monetary incentives, which are based on the quantity of work examiners complete rather than its quality, may negatively affect the quality of issued patents. However, USPTO has not analyzed its policies regarding examiner performance incentives and has assessed the time allotted for patent examination only minimally since the 1970s. Without analyzing whether time allotments are sufficient for examiners to complete a thorough examination, USPTO is at risk of issuing lower-quality patents due to examiners' not having enough time to complete their work. Our report on prior art searching recommended that USPTO reassess the time allotted to perform a thorough prior art search. Without analyzing the current incentive structure, USPTO cannot ensure that its incentives are aligned with high-quality work. Further, USPTO has limited data available on decisions from the PTAB to assess its patent quality initiatives. The PTAB has reviewed patents and invalidated some, or all, of the claims for hundreds of patents issued by the USPTO. However, the PTAB data and analysis of these decisions are limited in specificity as well as in distribution. Without identifying all of the data fields that would be useful to track, establishing a means for USPTO managers and examiners to easily access information—including information such as why claims were found to be unpatentable—and analyzing the data to identify potential trends, USPTO officials may be overlooking key information that could help them provide additional training or guidance or take other actions to address recurring issues.
Further, other USPTO policies and procedures—namely, compact prosecution, unlimited numbers of patent claims, and requests for continued examination—may affect patent quality, but the agency has not evaluated the effects of these policies. Without evaluating the possible effects of these policies on patent quality, USPTO is at risk of continuing practices that could potentially affect patent quality. Finally, without requiring greater clarity in applications—through the use of glossaries, check boxes to signal functional claiming language, or claim charts—USPTO is at risk of issuing patents that are overly broad, not clearly worded, and potentially noncompliant with statutory requirements, thereby increasing the likelihood that the patent becomes the subject of litigation.

We recommend that the Secretary of Commerce direct the Director of the USPTO to take the following seven actions to help improve patent quality:

Develop a consistent definition of patent quality, and clearly articulate this definition in agency documents and other guidance.

Further develop measurable, quantifiable goals and performance indicators related to patent quality as part of the agency's strategic plan.

Analyze the time examiners need to perform a thorough patent examination. This action could be taken in conjunction with the recommendation in our report on USPTO's prior art search capabilities (GAO-16-479).

Analyze how current performance incentives affect the extent to which examiners perform thorough examinations of patent applications.

Establish a process to provide data on the results of the PTAB proceedings to managers and staff in USPTO's Technology Centers, and analyze PTAB data for trends in patent quality issues to identify whether additional training, guidance, or other actions are needed to address trends.

Evaluate the effects of compact prosecution and other agency application and examination policies on patent quality.
In doing so, USPTO should determine if any changes are needed to ensure that the policies are not adversely affecting patent quality.

Consider whether to require patent applicants to include claim clarity tools—such as a glossary of terms, a check box to signal functional claim language, or claim charts—in each patent application.

We provided a copy of our draft report to USPTO for review and comment. In its written comments, which are reproduced in appendix III, USPTO generally agreed with our findings, concurred with our recommendations, and provided information on steps officials plan to take to implement the recommendations. USPTO also provided additional technical comments, which we incorporated as appropriate. In its response to the first recommendation, USPTO stated that it already has a consistent definition for patent quality, specifically that a quality patent is one that is correctly issued in compliance with all of the requirements of Title 35 as well as relevant case law at the time of issuance, which is consistent with how we define the term for this report. However, in our audit work, we did not find evidence that this definition was clearly articulated in agency documents and guidance or used in its performance indicators and goals. We revised this recommendation to clarify that USPTO should not only define the term, but also make this definition clear in relevant documents. In response to our second recommendation, USPTO said that it has taken some steps to update and improve its performance indicators and goals related to patent quality. In its technical comments, USPTO suggested that we revise the report to recommend that it further develop its goals and performance indicators. We agree with this suggestion and made the change. As USPTO further develops its goals and performance indicators, we encourage the agency to more clearly link these goals and indicators to its definition of patent quality.
In response to our seventh recommendation, USPTO said that, contrary to the draft report’s findings, its initial conclusion was that a glossary did not make a meaningful difference in quality during the prosecution of an application, though USPTO is still analyzing whether the use of a glossary has a long-term impact on a patent. In response to this comment, we revised the statement in the report to more closely align with the information that USPTO officials presented at its March 8, 2016, Patent Quality Chat on the issue. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Commerce, the Director of the USPTO, the Commissioner for Patents, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

To examine recent trends in patent infringement litigation, we obtained patent infringement litigation data from two companies—RPX Inc. and Lex Machina—that included all of the patent infringement lawsuits filed in all 94 federal district courts between 2007 and 2015. These data included information on the patents asserted in each suit, the defendants involved, and the federal district court where the suit was filed.
We conducted data quality testing on the data from RPX and Lex Machina to look for missing or out-of-range values, interviewed relevant officials, and reviewed relevant documentation for the data; we found the data to be sufficiently reliable for determining recent trends in patent infringement lawsuits. We also conducted 11 semi-structured interviews with stakeholders from technology companies, venture capital investors, and others knowledgeable about recent patent infringement litigation. In addition, we reviewed published literature on the patent system, including reports from patent researchers, the Federal Trade Commission, and the Congressional Budget Office. To examine what additional opportunities exist, if any, to improve patent quality, we reviewed relevant laws and USPTO documents and interviewed USPTO officials and representatives of the examiners’ union—the Patent Office Professional Association. We interviewed four supervisory patent examiners and six patent examiners from a variety of technology areas. We also conducted 11 semi-structured interviews with patent stakeholders who were knowledgeable about patent quality and USPTO, including legal scholars, former high-ranking USPTO officials, and representatives from public interest non-governmental organizations. Two of the 11 stakeholders we interviewed are currently serving in leadership roles as board members of the American Intellectual Property Law Association. In assessing USPTO’s efforts, we identified criteria in the federal standards for internal control and USPTO’s strategic plan. In addition, we conducted a web-based survey of a stratified random sample of 3,336 eligible USPTO patent examiners from across 8 of the 11 technology-based subject matter groups (referred to as technology centers) into which USPTO examiners are divided.
Fielded between August and November 2015, the survey was designed to collect information on USPTO’s approach to patent quality and how USPTO might improve its patent quality efforts. To identify our survey population, we obtained from USPTO a list of patent examiners as of May 2015. We excluded examiners from three technology centers, as follows:

- We excluded the Designs technology center because these examiners work on design patents instead of utility patents; design patents are outside the scope of this engagement and have different statutory and administrative requirements than utility patents.
- We excluded examiners who perform “re-examination” work and not initial patent examination.
- We excluded examiners in the patent training academy because these examiners are recent hires who are in a 12-month training program.

We also excluded examiners employed at USPTO for less than one year. We then defined nine strata by technology center, with one technology center separated into two strata, as described in table 2. Specifically, the Transportation, Construction, Electronic Commerce, Agriculture, National Security and License & Review technology center includes a diverse set of technologies, including transportation, construction, agriculture, and business methods. In our review, we separated the art units—subunits of a technology center—focused on electronic commerce and business methods (collectively referred to as business methods) in light of recent legislation and court decisions related to business methods. This resulted in nine strata with a target survey population totaling 7,825 eligible examiners. From this list, we drew our stratified random sample of 3,336 eligible USPTO patent examiners. We received responses from 2,669 eligible examiners, for an 80 percent response rate. Because we used a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn.
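The stratified draw described above can be sketched as follows. The stratum names, population counts, and sample sizes here are hypothetical stand-ins (the report’s actual nine strata and counts appear in its table 2); this is an illustrative sketch of the general technique, not GAO’s procedure.

```python
import random

# Hypothetical strata: {name: (population size N_h, sample size n_h)}.
# The report's actual per-stratum figures are in its table 2.
strata = {
    "Biotechnology": (900, 390),
    "Chemical": (850, 360),
    "Business methods": (600, 255),
}

def draw_stratified_sample(strata, seed=1):
    """Draw a simple random sample within each stratum.

    Each sampled unit carries a base sampling weight equal to the
    inverse of its selection probability, N_h / n_h.
    """
    rng = random.Random(seed)
    sample = {}
    for name, (pop, n) in strata.items():
        sample[name] = {
            "units": rng.sample(range(pop), n),  # SRS within the stratum
            "weight": pop / n,                   # base sampling weight
        }
    return sample

sample = draw_stratified_sample(strata)
```

Because selection is independent within each stratum, the weighted-up sample reproduces each stratum’s population total, which is what makes stratum-level estimates possible.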
Since each sample could have provided different estimates, we quantified the sampling error and expressed our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We designed our sample to provide percentage estimates with 95 percent confidence intervals that are within upper and lower bounds of 5 percentage points, within each stratum. We oversampled based on an expected response rate of 70 percent; however, because we achieved a higher-than-expected response rate, the upper and lower bounds for survey results within each stratum are generally less than 5 percentage points. The only estimates for which the upper and lower bounds exceed 5 percentage points are certain results for the business methods stratum. In these instances, the upper and lower bounds are between 5 and 6 percentage points. In this report, our figures containing survey results show the upper and lower bounds for results at the 95 percent confidence interval. For other estimates in the report, we have not provided the upper and lower bounds in the text or tables; however, those details for all survey results are available in the e-supplement related to this report, GAO-16-478SP. The quality of survey data can also be affected by nonsampling error, which includes, for example, variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, nonresponse errors, and data collection and processing errors. To minimize nonsampling error, we took several steps in developing the survey and in collecting and analyzing survey data. Specifically, in developing the survey, we worked with our survey professionals to, among other things, draft questions that were clear and unbiased.
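As a rough illustration of the sampling-error calculation discussed above, the sketch below computes a stratified proportion estimate and its 95 percent confidence interval using the standard stratified-proportion variance formula with a finite population correction. All per-stratum figures are invented for illustration; GAO’s actual estimates came from survey software, not this hand calculation.

```python
import math

# Invented per-stratum figures: population N, sample size n, and the
# sample proportion p of respondents giving a particular answer.
strata = [
    {"N": 900, "n": 390, "p": 0.72},
    {"N": 850, "n": 360, "p": 0.68},
    {"N": 600, "n": 255, "p": 0.75},
]

N_total = sum(s["N"] for s in strata)

# Stratified estimate: population-share-weighted mean of stratum proportions.
p_hat = sum((s["N"] / N_total) * s["p"] for s in strata)

# Variance of the stratified estimator, with a finite population
# correction (1 - n/N) applied within each stratum.
variance = sum(
    (s["N"] / N_total) ** 2
    * (1 - s["n"] / s["N"])
    * s["p"] * (1 - s["p"]) / (s["n"] - 1)
    for s in strata
)

# Normal-approximation 95 percent confidence interval.
half_width = 1.96 * math.sqrt(variance)
ci = (p_hat - half_width, p_hat + half_width)
```

Because the sampling fractions within each stratum are fairly large here, the finite population correction noticeably tightens the interval, which is consistent with the report’s note that its realized bounds were generally narrower than the 5-percentage-point design target.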
We pre-tested the survey in person with five USPTO staff: three examiners who are also representatives to the examiners’ union, a supervisory patent examiner, and a quality assurance specialist. We used these pre-tests to check that the questions were clear and unambiguous, used correct terminology, requested information that could be feasibly obtained, and were comprehensive and unbiased. We also obtained comments on the survey from USPTO management and leadership from the examiners’ union. In addition, we obtained a quality review by a separate GAO survey methodologist. Based on these activities, we made changes to the survey before administering it. Further, using a web-based survey and allowing examiners to enter their responses into an electronic instrument created an automatic record for each respondent. This eliminated the potential for errors that could have resulted if we had used a manual process to enter respondents’ data from paper surveys. In addition, to account for the complex sample design, we used survey software in our analyses to produce appropriate estimates and confidence intervals, and the programs we used to process and analyze the survey data were independently verified to ensure the accuracy of this work. To minimize nonresponse error, we made a variety of contacts with the sample of examiners during the survey, including follow-up e-mails to encourage responses. In addition, between October 20 and 23, 2015, we attempted to follow up via telephone calls to all 1,102 examiners who had neither completed the survey nor told us that they were no longer examiners. We also analyzed nonresponse bias to (1) assess whether any factors were associated with examiners’ propensity to respond and (2) allow our analysis of respondents to properly reflect the sampling universe of eligible examiners.
To adjust the sampling weight for potential nonresponse bias, we used standard weighting class adjustments based on the sampling strata and the examiners’ years of experience at USPTO. In this report and in the related e-supplement at GAO-16-478SP, we present the survey results using the nonresponse adjusted weights, which are generalizable to the eligible population of examiners. We analyzed the responses to the survey for all examiners, as well as responses by technology center and by the General Schedule (GS) level of the examiners. We selected three categories of GS levels—less than GS-13, GS-13, and greater than GS-13—because examiners at these levels have different responsibilities and authorities when examining patent applications. Specifically, examiners above the GS-13 level may grant a patent or reject a patent application without additional review; examiners below the GS-13 level must have their actions reviewed and signed by a more senior examiner; and some GS-13 examiners are transitioning from one GS level to the other. For some other survey questions, we also reviewed examiners’ open-ended responses on selected topics. We selected those topics based on our interviews with experts and USPTO officials as well as our analysis of closed-ended survey responses. We selected the questions for which examiners’ responses most frequently included keywords we identified for each topic. An analyst conducted a keyword search of all responses to the selected open-ended questions and coded responses containing the keywords. A second analyst verified the initial analyst’s coding. Our report provides some examples of examiners’ comments based on this review. Examiners’ responses to open-ended questions are not generalizable to other examiners. In addition, because we did not conduct a systematic review of all open-ended responses to our survey, we do not report the exact number of examiners who provided responses on the topics we reviewed.
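The weighting class adjustment described above can be sketched as follows. The class labels, counts, and base weights are hypothetical (GAO formed its actual classes from the sampling strata crossed with years of experience), so this illustrates the standard technique rather than GAO’s computation.

```python
# Hypothetical weighting classes (e.g., stratum crossed with an
# experience band), each with the number of eligible examiners sampled,
# the number who responded, and the class's base sampling weight.
classes = {
    "stratum_A_under_5_yrs": {"sampled": 120, "responded": 90,  "base_weight": 2.5},
    "stratum_A_5_plus_yrs":  {"sampled": 200, "responded": 170, "base_weight": 2.5},
    "stratum_B_under_5_yrs": {"sampled": 150, "responded": 105, "base_weight": 3.0},
}

def nonresponse_adjusted_weights(classes):
    """Inflate each class's base weight by sampled / responded.

    After adjustment, respondents in a class also stand in for the
    sampled nonrespondents in that class, so the weighted respondent
    total still equals the class's weighted sample total.
    """
    return {
        name: c["base_weight"] * c["sampled"] / c["responded"]
        for name, c in classes.items()
    }

weights = nonresponse_adjusted_weights(classes)
```

The key property is conservation within each class: the respondents, once reweighted, represent the same share of the population that the full sample in that class was drawn to represent.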
In addition, we conducted statistical tests of association on the results of certain survey questions; all tests were independently verified to ensure their accuracy. All tests of association were carried out at the 5 percent level of significance and were Cochran-Mantel-Haenszel (CMH) Chi-square tests of general association. The testing was carried out in SUDAAN, which is statistical software appropriate for the analysis of survey data. The null hypothesis was that there is no association between the two tested variables. When the association between two variables, conditional on a third variable, is of interest, this relationship is referred to as the stratum-adjusted CMH test. The test statistic is Wald Chi-Square. We conducted this performance audit from November 2014 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Patent Examination Process at the U.S. Patent and Trademark Office (USPTO) (Corresponds to Fig. 1)

This appendix provides details on steps in the patent examination process, including rollover information, depicted in figure 1.

In addition to the contact named above, the following individuals made contributions to this report: Hilary Benedict (Assistant Director), Krista Breen Anderson, Richard Burkard, John Delicath, Cindy Gilbert, Shilpa Grover, Rob Letzler, Rebecca Makar, Rob Marek, Chris Murray, Eleni Orphanides, Shep Ryen, Kelly Rubin, Ardith Spence, Sara Sullivan, and Sonya Vartivarian.

Intellectual Property: Assessing Factors That Affect Patent Infringement Litigation Could Help Improve Patent Quality. GAO-13-465. Washington, D.C.: Aug. 22, 2013.

U.S. Patent and Trademark Office: Performance Management Processes. GAO-10-946R. Washington, D.C.: Sept. 24, 2010.

Intellectual Property: Enhanced Planning by U.S. Personnel Overseas Could Strengthen Efforts. GAO-09-863. Washington, D.C.: Sept. 30, 2009.

U.S. Patent and Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog. GAO-08-527T. Washington, D.C.: Feb. 27, 2008.

U.S. Patent and Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog. GAO-07-1102. Washington, D.C.: Sept. 4, 2007.

Intellectual Property: Improvements Needed to Better Manage Patent Office Automation and Address Workforce Challenges. GAO-05-1008T. Washington, D.C.: Sept. 8, 2005.

Intellectual Property: Key Processes for Managing Patent Automation Strategy Need Strengthening. GAO-05-336. Washington, D.C.: June 17, 2005.

Intellectual Property: USPTO Has Made Progress in Hiring Examiners, but Challenges to Retention Remain. GAO-05-720. Washington, D.C.: June 17, 2005.

Resolving disputes over patent infringement and validity in court often costs millions of dollars. Legal scholars and economists have raised concerns about an increase in the numbers of low quality patents—such as those that are unclear and overly broad—which may lead to an increase in patent infringement suits and can hinder innovation by blocking new ideas from entering the marketplace. GAO was asked to review issues related to patent quality. GAO examined (1) recent trends in patent infringement litigation and (2) what additional opportunities exist, if any, to improve patent quality. GAO reviewed relevant laws and agency documents; analyzed patent infringement litigation data from 2007 through 2015; conducted a survey of a generalizable sample of USPTO examiners; and interviewed officials from USPTO and knowledgeable stakeholders, including legal scholars, technology companies, and patent attorneys, among others.
GAO found that district court filings of new patent infringement lawsuits increased from about 2,000 in 2007 to more than 5,000 in 2015, while the number of defendants named in these lawsuits increased from 5,000 to 8,000 over the same period. In 2007, about 20 percent of all defendants named in new patent infringement lawsuits were sued in the Eastern District of Texas, and by 2015 this had risen to almost 50 percent. According to stakeholders, patent infringement suits are increasingly being tried in the predominantly rural Eastern District of Texas, likely due to recent practices in that district that are favorable to the patent owners who bring these infringement suits. GAO also found that most patent suits involve software-related patents and computer and communications technologies. Several stakeholders told GAO that it is easy to unintentionally infringe on patents associated with these technologies because the patents can be unclear and overly broad, which several stakeholders believe is a characteristic of low patent quality. The U.S. Patent and Trademark Office (USPTO) has taken actions to address patent quality, most notably through its Enhanced Patent Quality Initiative, but there are additional opportunities for the agency to improve patent quality. For example, USPTO does not currently have a consistent definition for patent quality articulated in agency documents and guidance, which would be in line with federal internal-control standards and best practices for organizational performance. Most stakeholders GAO interviewed said they would define a quality patent as one that would meet the statutory requirements for novelty and clarity, among others, and would be upheld if challenged in a lawsuit or other proceeding. Without a consistent definition, USPTO is unable to fully measure progress toward meeting its patent quality goals. 
Additionally, USPTO has not fully assessed the effects of the time allotted for application examinations or monetary incentives for examiners on patent quality. Specifically, most stakeholders GAO interviewed said that time pressures on examiners are a central challenge for patent quality. Based on GAO's survey of patent examiners, GAO estimates that 70 percent of the population of examiners say they do not have enough time to complete a thorough examination given a typical workload. According to federal standards for internal control, agencies should provide staff with the right structure, incentives, and responsibilities to make operational success possible. Without assessing the effects of current incentives for examiners or the time allotted for examination, USPTO cannot be assured that its time allotments and incentives support the agency's patent quality goals. Finally, USPTO does not currently require applicants to define key terms or make use of additional tools to ensure patent clarity. Federal statutes require that patent applications use clear, concise, and exact terms. Based on a survey of patent examiners, GAO estimates that nearly 90 percent of examiners always or often encountered broadly worded patent applications, and nearly two-thirds of examiners said that this made it difficult to complete a thorough examination. Without making use of additional tools, such as a glossary of key terms, to improve the clarity of patent applications, USPTO is at risk of issuing patents that do not meet statutory requirements. GAO makes seven recommendations, including that USPTO more consistently define patent quality and articulate that definition in agency documents and guidance, reassess the time allotted for examination, analyze the effects of incentives on patent quality, and consider requiring applicants to use additional clarity tools. 
USPTO generally agreed with GAO's findings, concurred with the recommendations, and provided information on steps officials plan to take to implement the recommendations.
The same speed and accessibility that create the enormous benefits of the computer age can, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with computer operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. In recent years, the sophistication and effectiveness of cyberattacks have steadily advanced. Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, such as criminals, terrorists, and nation-states. As we reported in June 2007, cybercrime has significant economic impacts and threatens U.S. national security interests. Various studies and experts estimate the direct economic impact from cybercrime to be in the billions of dollars annually. In addition, there is continued concern about the threat that our adversaries, including nation-states and terrorists, pose to our national security. For example, intelligence officials have stated that nation-states and terrorists could conduct a coordinated cyber attack to seriously disrupt electric power distribution, air traffic control, and financial sectors. In May 2007, Estonia was the reported target of a denial-of-service cyber attack with national consequences. The coordinated attack created mass outages of its government and commercial Web sites. To address threats posed against the nation’s computer-reliant infrastructures, federal law and policy establish DHS as the focal point for cyber critical infrastructure protection (CIP). For example, within DHS, the Assistant Secretary of Cyber Security and Communications is responsible for serving as the focal point for national cyber CIP efforts.
Under the Assistant Secretary is the National Cyber Security Division (NCSD), which interacts on a day-to-day basis with federal and nonfederal agencies and organizations (e.g., state and local governments, private-sector companies) regarding, among other things, cyber-related analysis, warning, information sharing, major incident response, and national-level recovery efforts. Consequently, DHS has multiple cybersecurity-related roles and responsibilities. In May 2005, we identified and reported on 13 key cybersecurity responsibilities called for in law and policy. These responsibilities are described in appendix I. Since then, we have performed detailed work and made recommendations on DHS’s progress in fulfilling specific aspects of the responsibilities, as discussed in more detail later in this statement. In addition to DHS efforts to fulfill its cybersecurity responsibilities, the President in January 2008 issued HSPD 23—also referred to as National Security Presidential Directive 54 and the President’s “Cyber Initiative”—to improve DHS and the other federal agencies’ cybersecurity efforts, including protecting against intrusion attempts and better anticipating future threats. While the directive has not been made public, DHS officials stated that the initiative includes steps to enhance cyber analysis-related efforts, such as requiring federal agencies to implement a centralized network monitoring tool and reduce the number of connections to the Internet.
In July 2008, we identified that cyber analysis and warning capabilities included (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to the threat. These four capabilities comprise 15 key attributes, which are detailed in appendix II. We concluded that while US-CERT demonstrated aspects of each of the key attributes, it did not fully incorporate all of them. For example, as part of its monitoring, US-CERT obtained information from numerous external information sources; however, it had not established a baseline of our nation’s critical network assets and operations. In addition, while it investigated whether identified anomalies constituted actual cyber threats or attacks as part of its analysis, it did not integrate its work into predictive analyses of broader implications or potential future attacks, nor did it have the analytical or technical resources to analyze multiple, simultaneous cyber incidents. The organization also provided warnings by developing and distributing a wide array of attack and other notifications; however, these notifications were not consistently actionable or timely—providing the right information to the right persons or groups as early as possible to give them time to take appropriate action. Further, while it responded to a limited number of affected entities in their efforts to contain and mitigate an attack, recover from damages, and remediate vulnerabilities, the organization did not possess the resources to handle multiple events across the nation. We also concluded that without the key attributes, US-CERT did not have the full complement of cyber analysis and warning capabilities essential to effectively perform its national mission.
As a result, we made 10 recommendations to the department to address shortfalls associated with the 15 attributes in order to fully establish a national cyber analysis and warning capability. DHS concurred with 9 of our 10 recommendations. In June 2008, we reported on the status of DHS’s efforts to establish an integrated operations center that it agreed to adopt per recommendations from a DHS-commissioned expert task force. The two operations centers that were to be integrated were within the department’s National Communication System and National Cyber Security Division. We determined that DHS had taken the first of three steps towards integrating the operations centers—called the National Coordination Center Watch and US-CERT—it uses to plan for and monitor voice and data network disruptions. While DHS completed the first integration step by locating the two centers in adjacent space, it had yet to implement the remaining two steps. Specifically, although called for in the task force’s recommendations, the department had not organizationally merged the two centers or involved key private sector critical infrastructure officials in the planning, monitoring, and other activities of the proposed joint operations center. In addition, the department lacked a strategic plan and related guidance that provides overall direction in this area and has not developed specific tasks and milestones for achieving the two remaining integration steps. We concluded that until the two centers were fully integrated, DHS was at risk of being unable to efficiently plan for and respond to disruptions to communications infrastructure and the data and applications that travel on this infrastructure, increasing the probability that communications will be unavailable or limited in times of need. 
As a result, we recommended that the department complete its strategic plan and define tasks and milestones for completing the remaining integration steps so that the department is better prepared to provide an integrated response to disruptions to the communications infrastructure. DHS concurred with our first recommendation and stated that it would address the second recommendation as part of finalizing its strategic plan. DHS has recently made organizational changes to bolster its cybersecurity focus. For example, in response to the President’s January 2008 Cyber Initiative, the department established a National Cybersecurity Center to ensure coordination among cyber-related efforts across the federal government. DHS placed the center at a higher organizational level than the Assistant Secretary of Cyber Security and Communications. As we previously reported, this placement raises questions about, and may in fact diminish, the Assistant Secretary’s authority as the focal point for the federal government’s cyber CIP efforts. It also raises similar questions about NCSD’s role as the primary federal cyber analysis and warning organization. In September 2008, we reported on a 2006 major DHS-coordinated cyber attack exercise, called Cyber Storm, that included large-scale simulations of multiple concurrent attacks involving the federal government, states, foreign governments, and private industry. We determined that DHS had identified eight lessons learned from this exercise, such as the need to improve interagency coordination groups and the exercise program. We also concluded that while DHS had demonstrated progress in addressing the lessons learned, more needed to be done. Specifically, while the department completed 42 of the 66 activities identified to address the lessons learned, it identified 16 activities as ongoing and 7 as planned for the future. In addition, DHS provided no timetable for the completion dates of the ongoing activities.
We noted that until DHS scheduled and completed its remaining activities, it was at risk of conducting subsequent exercises that repeated the lessons learned during the first exercise. Consequently, we recommended that DHS schedule and complete the identified corrective activities so that its cyber exercises can help both public and private sector participants coordinate their responses to significant cyber incidents. DHS agreed with the recommendation. In 2007, we reported and testified on the cybersecurity aspects of CIP plans for 17 critical infrastructure sectors, referred to as sector-specific plans. Specifically, we found that none of the plans fully addressed the 30 key cybersecurity-related criteria described in DHS guidance. We also determined that while several sectors’ plans fully addressed many of the criteria, others were less comprehensive. In addition to the variations in the extent to which the plans covered aspects of cybersecurity, there was also variance among the plans in the extent to which certain criteria were addressed. For example, fewer than half of the plans fully addressed describing (1) a process to identify potential consequences of cyber attack or (2) any incentives used to encourage voluntary performance of risk assessments. We noted that without complete and comprehensive plans, stakeholders within the infrastructure sectors may not adequately identify, prioritize, and protect their critical assets. Consequently, we recommended that DHS request that the lead federal agencies, referred to as sector-specific agencies, that are responsible for the development of CIP plans for their sectors fully address all cyber-related criteria by September 2008 so that stakeholders within the infrastructure sectors will effectively identify, prioritize, and protect the cyber aspects of their CIP efforts. The updated plans are due this month.
In a September 2007 report and October 2007 testimony, we identified that federal agencies had initiated efforts to improve the security of critical infrastructure control systems—computer-based systems that monitor and control sensitive processes and physical functions. For example, DHS was sponsoring multiple control systems security initiatives, including efforts to (1) improve control systems cybersecurity using vulnerability evaluation and response tools and (2) build relationships with control systems vendors and infrastructure asset owners. However, the department had not established a strategy to coordinate the various control systems activities across federal agencies and the private sector. Further, it lacked processes needed to address specific weaknesses in sharing information on control system vulnerabilities. We concluded that until public and private sector security efforts are coordinated by an overarching strategy and specific information sharing shortfalls are addressed, there was an increased risk that multiple organizations would conduct duplicative work and miss opportunities to fulfill their critical missions. Consequently, we recommended that DHS develop a strategy to guide efforts for securing control systems and establish a rapid and secure process for sharing sensitive control system vulnerability information to improve federal government efforts to secure control systems governing critical infrastructure. In response, DHS officials took our recommendations under advisement and more recently have begun developing a Federal Coordinating Strategy to Secure Control Systems, which is still a work in progress. In addition, while DHS began developing a process to share sensitive information, it has not provided any evidence that the process has been implemented or that it is an effective information sharing mechanism.
We reported and later testified in 2006 that the department had begun a variety of initiatives to fulfill its responsibility for developing an integrated public/private plan for Internet recovery. However, we determined that these efforts were not comprehensive or complete. As such, we recommended that DHS implement nine actions to improve the department’s ability to facilitate public/private efforts to recover the Internet in case of a major disruption. In October 2007, we testified that the department had made progress in implementing our recommendations; however, seven of the nine recommendations had not been completed. For example, the department had revised key plans in coordination with private industry infrastructure stakeholders, coordinated various Internet recovery-related activities, and addressed key challenges to Internet recovery planning. However, it had not, among other things, finalized recovery plans and defined the interdependencies among DHS’s various working groups and initiatives. In other words, it has not completed an integrated private/public plan for Internet recovery. As a result, we concluded that the nation lacked direction from the department on how to respond in such a contingency. We also noted that these incomplete efforts indicated DHS and the nation were not fully prepared to respond to a major Internet disruption. In summary, DHS has developed and implemented capabilities to satisfy aspects of key cybersecurity responsibilities. However, it still needs to take further action to fulfill all of these responsibilities. In particular, it needs to fully address the key areas identified in our recent reports.
Specifically, it will have to:
- bolster cyber analysis and warning capabilities;
- address organizational inefficiencies by integrating voice and data operations centers;
- enhance cyber exercises by completing the identified activities associated with the lessons learned;
- ensure that cyber-related sector-specific critical infrastructure plans are completed;
- improve efforts to address the cybersecurity of infrastructure control systems by completing a comprehensive strategy and ensuring adequate mechanisms for sharing sensitive information; and
- strengthen its ability to help recover from Internet disruptions by finalizing recovery plans and defining interdependencies.

Until these steps are taken, our nation’s computer-reliant critical infrastructure remains at unnecessary risk of significant cyber incidents. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286, or by e-mail at pownerd@gao.gov. Other key contributors to this testimony include Camille Chaires, Michael Gilmore, Rebecca LaPaze, Kush Malhotra, and Gary Mountjoy.

DHS's key cybersecurity responsibilities include:
- Developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States, including information technology and telecommunications systems (including satellites) and the physical and technological assets that support such systems. This plan is to outline national strategies, activities, and milestones for protecting critical infrastructures.
- Fostering and developing public/private partnerships with and among other federal agencies, state and local governments, the private sector, and others. 
DHS is to serve as the “focal point for the security of cyberspace.”
- Improving and enhancing information sharing with and among other federal agencies, state and local governments, the private sector, and others through improved partnerships and collaboration, including encouraging information sharing and analysis mechanisms. DHS is to improve sharing of information on cyber attacks, threats, and vulnerabilities.
- Providing cyber analysis and warnings, enhancing analytical capabilities, and developing a national indications and warnings architecture to identify precursors to attacks.
- Providing crisis management in response to threats to or attacks on critical information systems. This entails coordinating efforts for incident response, recovery planning, exercising cybersecurity continuity plans for federal systems, planning for recovery of Internet functions, and assisting infrastructure stakeholders with cyber-related emergency recovery plans.
- Leading efforts by the public and private sector to conduct a national cyber threat assessment, to conduct or facilitate vulnerability assessments of sectors, and to identify cross-sector interdependencies.
- Leading and supporting efforts by the public and private sector to reduce threats and vulnerabilities. Threat reduction involves working with the law enforcement community to investigate and prosecute cyberspace threats. Vulnerability reduction involves identifying and remediating vulnerabilities in existing software and systems.
- Collaborating and coordinating with members of academia, industry, and government to optimize cybersecurity-related research and development efforts to reduce vulnerabilities through the adoption of more secure technologies.
- Establishing a comprehensive national awareness program to promote efforts to strengthen cybersecurity throughout government and the private sector, including the home user.
- Improving cybersecurity-related education, training, and certification opportunities. 
- Partnering with federal, state, and local governments in efforts to strengthen the cybersecurity of the nation’s critical information infrastructure to assist in the deterrence, prevention, preemption of, and response to terrorist attacks against the United States.
- Working in conjunction with other federal agencies, international organizations, and industry in efforts to promote strengthened cybersecurity on a global basis.
- Coordinating and integrating applicable national preparedness goals with its National Infrastructure Protection Plan.

Key attributes of effective cyber analysis and warning capabilities:
- Establish a baseline understanding of network assets and normal network traffic volume and flow
- Assess risks to network assets
- Obtain internal information on network operations via technical tools and user reports
- Obtain external information on threats, vulnerabilities, and incidents through various relationships, alerts, and other sources
- Detect anomalous activities
- Verify that an anomaly is an incident (threat of attack or actual attack)
- Investigate the incident to identify the type of cyber attack, estimate impact, and collect evidence
- Identify possible actions to mitigate the impact of the incident
- Integrate results into predictive analysis of broader implications or potential future attack
- Develop attack and other notifications that are targeted and actionable
- Provide notifications in a timely manner
- Distribute notifications using appropriate communications methods
- Contain and mitigate the incident
- Recover from damages and remediate vulnerabilities
- Evaluate actions and incorporate lessons learned

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
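Several of the attributes above, in particular establishing a baseline of normal network traffic and detecting anomalous activities against it, can be sketched in a few lines. The traffic figures and the three-sigma rule below are illustrative assumptions, not US-CERT's actual method:

```python
import statistics

def detect_anomalies(baseline, observed, sigmas=3.0):
    """Flag observations that deviate from the baseline by more than
    `sigmas` standard deviations (a simple, common anomaly rule)."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) > sigmas * stdev]

# Hypothetical hourly packet counts establishing "normal" volume.
baseline = [980, 1010, 995, 1005, 990, 1000, 1015, 985]
# New observations: one hour shows a suspicious spike.
observed = [1002, 998, 5400, 1007]

print(detect_anomalies(baseline, observed))  # [5400]
```

Verifying that a flagged anomaly is an actual incident, investigating it, and issuing a targeted notification (the later attributes in the list) remain analyst-driven steps layered on top of detection logic like this.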
Recent cyber attacks demonstrate the potentially devastating impact such attacks can have on our nation's computer systems and on the federal operations and critical infrastructures that they support. They also highlight the need to be vigilant against individuals and groups with malicious intent, such as criminals, terrorists, and nation-states perpetrating these attacks. Federal law and policy established the Department of Homeland Security (DHS) as the focal point for coordinating cybersecurity, including making it responsible for protecting systems that support critical infrastructures, a practice commonly referred to as cyber critical infrastructure protection. Since 2005, GAO has reported on the responsibilities and progress DHS has made in its cybersecurity efforts. GAO was asked to summarize its key reports and their associated recommendations aimed at securing our nation's cyber critical infrastructure. To do so, GAO relied on previous reports, as well as two reports being released today, and analyzed information about the status of recommendations. GAO has reported over the last several years that DHS has yet to fully satisfy its cybersecurity responsibilities. To address these shortfalls, GAO has made about 30 recommendations in key areas. Examples of what GAO reported and recommended are as follows: (1) Cyber analysis and warning--In July 2008, GAO reported that DHS's United States Computer Emergency Readiness Team (US-CERT) did not fully address 15 key cyber analysis and warning attributes. For example, US-CERT provided warnings by developing and distributing a wide array of notifications; however, these notifications were not consistently actionable or timely. Consequently, GAO recommended that DHS address these attribute shortfalls. (2) Cyber exercises--In September 2008, GAO reported that since conducting a cyber attack exercise in 2006, DHS had demonstrated progress in addressing eight lessons it learned from this effort. 
However, its actions to address the lessons had not been fully implemented. GAO recommended that the department schedule and complete all identified corrective activities. (3) Control systems--In a September 2007 report and October 2007 testimony, GAO identified that DHS was sponsoring multiple efforts to improve control system cybersecurity using vulnerability evaluation and response tools. However, the department had not established a strategy to coordinate this and other efforts across federal agencies and the private sector, and it did not effectively share control system vulnerabilities with others. Accordingly, GAO recommended that DHS develop a strategy to guide efforts for securing such systems and establish a process for sharing vulnerability information. While DHS has developed and implemented capabilities to address aspects of these areas, it still has not fully satisfied any of them. Until these and other areas are effectively addressed, our nation's cyber critical infrastructure is at risk of increasing threats posed by terrorists, nation-states, and others.
The United States essentially relies on a two-step process to prevent inadmissible people from entering the country. The Bureau of Consular Affairs in the State Department is responsible for issuing international travel documents, such as passports to United States citizens and visas to citizens of other countries. On March 1, 2003, the Bureau of Customs and Border Protection in the Department of Homeland Security assumed responsibility for inspecting travelers at and between ports of entry. Inspectors from the Immigration and Naturalization Service (INS), the U.S. Customs Service, and the Animal and Plant Health Inspection Service (APHIS) were brought together in this new bureau. In fiscal year 2002, there were about 440 million border crossings into the United States at over 300 designated ports of entry (see table 1). Of the more than 358 million border crossers who entered through land ports of entry, almost 50 million entered as pedestrians. The rest entered in more than 131 million vehicles, including cars, trucks, buses, and trains. Further, the State Department processed about 8.4 million nonimmigrant visa applications and issued about 7 million passports. The term biometrics covers a wide range of technologies that can be used to verify a person’s identity by measuring and analyzing his or her physiological characteristics, based on data derived from measuring a part of the body directly. For example, technologies have been developed to measure a person’s finger, hand, face, retina, and iris. Biometric systems are essentially pattern recognition systems. They use electronic or optical sensors such as cameras and scanning devices to capture images, recordings, or measurements of a person’s characteristics and computer hardware and software to extract, encode, store, and compare these characteristics. 
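The capture, extract, and compare loop described above can be illustrated with a minimal sketch. Everything here is invented for illustration (the feature values, distance metric, and threshold are assumptions, not any agency's actual matching algorithm), but it shows the basic 1:1 verification flow: enroll a template, then decide whether a live probe falls within a distance threshold of it.

```python
import math

def euclidean(a, b):
    # Distance between two fixed-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class BiometricVerifier:
    """Toy 1:1 verification: match a live probe against an enrolled template."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold   # smaller = stricter matching
        self.templates = {}          # identity -> enrolled feature vector

    def enroll(self, identity, features):
        self.templates[identity] = features

    def verify(self, claimed_identity, probe_features):
        # Accept only if the probe is close enough to the template
        # stored for the claimed identity.
        template = self.templates.get(claimed_identity)
        if template is None:
            return False  # exception-processing path: not enrolled
        return euclidean(template, probe_features) <= self.threshold

verifier = BiometricVerifier(threshold=0.5)
verifier.enroll("traveler-001", [0.12, 0.80, 0.33, 0.57])

# A probe from the same person varies slightly between captures.
print(verifier.verify("traveler-001", [0.13, 0.78, 0.35, 0.55]))  # True
# An impostor's features fall outside the threshold.
print(verifier.verify("traveler-001", [0.90, 0.10, 0.75, 0.20]))  # False
```

In this framing, a false match is an impostor probe landing inside the threshold and a false non-match is a genuine probe landing outside it; tuning the threshold trades one error off against the other.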
Using biometrics as identifiers for border security purposes appears to be appealing because they can help tightly bind a traveler to his or her identity by using physiological characteristics. Unlike other identification methods, such as identification cards or passwords, biometrics are less easily lost, stolen, or guessed. The binding is dependent on the quality of the identification document presented by the traveler to enroll in the biometric system. If the identification document does not specify the traveler’s true identity, the biometric data will be linked to a false identity. In our work last year, we examined several different biometric technologies and found four to be suitable for border control systems: fingerprint recognition, facial recognition, iris recognition, and hand geometry. Other biometric technologies were determined to be impractical in a border control application because of accuracy or user acceptance issues. For example, speaker recognition systems do not perform well in noisy environments and do not appear to be sufficiently distinctive to permit identification of an individual within a large database of identities. We defined four different scenarios in which biometric technologies could be used to support border control operations. Two scenarios use a biometric watch list to identify travelers who are inadmissible to the United States (1) before issuing travel documents and (2) before travelers enter the country. The other two scenarios help bind the claimed identity of travelers to their travel documents by incorporating biometrics into (1) U.S. visas or (2) U.S. passports. Linking an individual’s identity to a U.S. travel document could help reduce the use of counterfeit documents and imposters’ fraudulent use of legitimate documents. Biometrics have been used in border control environments for several years. 
For example, the INS Passenger Accelerated Service System (INSPASS), a hand geometry system first installed in 1993, has been used in seven U.S. and two Canadian airports to reduce inspection time for trusted travelers. Since April 1998, border crossing cards, also called laser visas, have been issued to Mexican citizens that include their photograph and prints of the two index fingers. The Automated Biometric Fingerprint Identification System (IDENT) is used by DHS to identify aliens who are repeatedly apprehended trying to enter the United States illegally. IDENT is also being used as part of the National Security Entry-Exit Registration System (NSEERS) that was implemented last year. Laws passed in the last 2 years require a more extensive use of biometrics for border control. The Attorney General and the Secretary of State jointly, through the National Institute of Standards and Technology (NIST), are to develop a technology standard, including biometric identifier standards. When developed, this standard is to be used to verify the identity of persons applying for a U.S. visa for the purpose of conducting a background check, confirming identity, and ensuring that a person has not received a visa under a different name. By October 26, 2004, the Departments of State and Justice are to issue to aliens only machine-readable, tamper-resistant visas and other travel and entry documents that use biometric identifiers. At the same time, Justice is to install at all ports of entry equipment and software that allow the biometric comparison and authentication of all U.S. visas and other travel and entry documents issued to aliens and machine-readable passports. While biometric technology is currently available and used in a variety of applications, questions remain regarding the technical and operational effectiveness of biometric technologies in applications as large as border control. 
In addition, before implementing any biometric border control system, a number of other issues would have to be considered, including:

- The system’s effect on existing border control procedures and people. Technology is only part of an overall security solution and only as effective as the procedures within which it operates.
- The costs and benefits of the system, including secondary costs resulting from changes in processes or personnel to accommodate the biometrics.
- The system’s effect on privacy, convenience, and the economy.

The successful implementation of any technology depends not only on the performance of the technology but also on the operational processes that employ the technology and the people who execute them. The implementation of biometrics in border security is no exception. Further, the use of technology alone is not a panacea for the border security problem. Instead, biometric technology is just a piece of the overall decision support system that helps determine whether to allow a person into the United States. The first decision is whether to issue travelers a U.S. travel document. The second decision, made at the ports of entry, is whether to admit travelers into the country. Biometrics can play a role in both decisions. Sorting the admissible travelers from the inadmissible ones is currently conducted by using information systems for checking names against watch lists and by using manual human recognition capabilities to see if the photograph on a travel document matches the person who seeks entry to the United States. When enabled with biometrics, automated systems can verify the identity of the traveler and assist inspectors in their decision making. However, a key factor that must be considered is the performance of the biometric technology. 
For example, if the biometric technology that is used to perform watch list checks before visas are issued has a high rate of false matches, the visa processing workload could increase at the embassies and consulates. If the same biometric solution were used at the ports of entry, it could lead to increased delays in the inspection process and an increase in the number of secondary inspections. Exception processing will also have to be carefully considered. Exceptions would include people who fail to enroll in the biometric visa system or are not correctly matched by it. Exception processing that is not as good as biometric-based primary processing could be exploited as a security hole. Failure of equipment must also be considered and planned for. Further, to issue visas with biometrics, an appropriate transition strategy must be devised to simultaneously handle both visas with biometrics and the current visa that could remain valid without biometrics for up to the next 10 years. Before any significant project investment is made, the benefit and cost information of the project alternatives should be analyzed and assessed in detail. A clear statement of the high-level system goals should drive the overall concept of a U.S. border control system. System goals address the system’s expected outcomes and are usually based on business or public policy needs, which for a border control system could include items such as binding a biometric feature to a person’s identity on a travel document, identifying undesirable persons on a watch list, checking for duplicate enrollments in the system, verifying identities at the borders, ensuring the security of the biometric data, and ensuring the adequacy of privacy protections. The benefits gained from a biometric border control system should be based on how well the system achieves the high-level goals. A concept of operations should be developed that embodies the people, process, and technologies required to achieve the goals. 
To put together the concept of operations, a number of inputs have to be considered, including legal requirements, existing processes and infrastructure used, and known technology limitations. Performance requirements should also be included in the concept of operations, such as processing times. Business process reengineering, such as new processes to conduct inspections of passengers in vehicles or to maintain a database of biometric data, would also be addressed in the concept of operations. As we have noted, the desired benefit is the prevention of the entry of travelers who are inadmissible to the United States. More specifically, the use of a biometric watch list can provide an additional check to name-based checks and can help detect travelers who have successfully established separate names and identities and are trying to evade detection. The use of visas with biometrics can help positively identify travelers as they enter the United States and can limit the use of fraudulent documents, including counterfeit and modified documents, and impostors’ use of legitimate documents. However, the benefits gained by using biometrics have several limitations. First, the benefit achieved is directly related to the performance of the biometric technology. The performance of facial, fingerprint, and iris recognition is unknown for systems as large as a biometric visa system, which would require storage and comparison against 100 million to 240 million records. The largest facial, fingerprint, and iris recognition systems contain 60 million, 40 million, and 30,000 records, respectively. The population of the biometric watch list is critical to its effectiveness. Policies and procedures would need to be developed for adding and maintaining records in the watch list database. Key questions that have to be answered include who is added to the watch list, how someone is removed from the watch list, and how errors could be corrected. 
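The scale concern can be made concrete with simple arithmetic: in one-to-many identification, a probe is compared against every record, so the expected number of false matches per probe grows linearly with database size. The false match rate below is an illustrative assumption, not a measured figure for any actual technology:

```python
def expected_false_matches(false_match_rate, database_size):
    # Expected number of records that falsely match a single probe
    # when it is compared against every record in the database.
    return false_match_rate * database_size

# Illustrative per-comparison false match rate of 1 in 100,000.
fmr = 1e-5
for n in (30_000, 40_000_000, 240_000_000):
    hits = expected_false_matches(fmr, n)
    print(f"{n:>11,} records -> ~{hits:,.1f} false matches per probe")
```

At watch-list or visa-database scale, a rate that sounds negligible per comparison can still generate hundreds or thousands of false hits per traveler, each needing manual resolution; this is the workload effect on embassies, consulates, and ports of entry discussed above.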
Successfully identifying people on the biometric watch list is also dependent on the effectiveness of the law enforcement and intelligence communities in identifying individuals who should be placed on the watch list. Issuing visas with biometrics will only assist in identifying those currently required to obtain visas to enter this country. For example, Canadians, Mexicans with border crossing cards, and foreign nationals participating in the visa waiver program do not have to have a visa to enter the United States. The issuance of visas with biometrics is also dependent on establishing the correct identity during enrollment. This process typically depends on the presentation of identification documents. If the documents do not specify the applicant’s true identity, then the travel document will be linked to a false identity. Further, biometric technology is not a solution to all border security problems. Biometric technology can address only problems associated with identifying travelers at official locations such as embassies and ports of entry. While the technology can help reduce the number of illegal immigrants who cross with fraudulent documents, it cannot help with illegal immigrants who cross between the ports of entry. INS has previously estimated that up to 60 percent of the 275,000 new illegal immigrants a year do not present themselves at a port of entry to enter the United States. In addition, biometrics cannot help to identify foreign nationals who enter through ports of entry and are properly admitted by an inspector but may overstay their visit. The costs of any proposed system must be considered. Both initial costs and recurring costs need to be estimated. Initial costs need to account for the engineering efforts to design, develop, test, and implement the system; training of personnel; hardware and software costs; network infrastructure improvements; and additional facilities required to enroll people into the biometric system. 
Recurring cost elements include program management costs, hardware and software maintenance, hardware replacement costs, training of personnel, additional personnel to enroll or verify the identities of travelers in the biometric system, and possibly the issuance of token cards for the storage of biometrics collected for issuing visas. While specific cost estimates depend on the detailed assumptions made for the concept of operations, the costs are significant. The Privacy Act of 1974 limits federal agencies’ collection, use, and disclosure of personal information, such as fingerprints and photographs. Accordingly, the Privacy Act generally covers federal agency use of personal biometric information. However, as a practical matter, the act is likely to have a more limited application for border security. First, the act applies only to U.S. citizens and lawfully admitted permanent residents. Second, the act includes exemptions for law enforcement and national security purposes. Representatives of civil liberties groups and privacy experts have expressed concerns regarding (1) the adequacy of protections for security, data sharing, identity theft, and other identified uses of biometric data and (2) secondary uses and “function creep.” These concerns relate to the adequacy of protections under current law for the large-scale data handling in a biometric system. Besides information security, concern was voiced about an absence of clear criteria for governing data sharing. The broad exemptions of the Privacy Act, for example, provide no guidance on the extent of the appropriate uses law enforcement may make of biometric information. Because there is no general agreement on the appropriate balance of security and privacy to build into a system using biometrics, further policy decisions are required. The range of unresolved policy issues suggests that questions surrounding the use of biometric technology center as much on management policies as on technical issues. 
The use of biometric technologies could potentially impact the length of the inspection process. Any lengthening in the process of obtaining travel documents or entering the United States could affect travelers significantly. At some consular posts, visas are issued the day applications are received. Even without biometrics, the busiest ports of entry regularly have delays of 2 to 3 hours. Increases in inspection times could compound these delays. Delays inconvenience travelers and could result in fewer visits to the United States or lost business to the nation. Further studies will be necessary to measure what the potential effect could be on the American economy and, in particular, on the border communities. These communities depend on trade with Canada and Mexico, which totaled $653 billion in 2000. The use of biometrics in a border control system in the United States could affect the number of international visitors and how other countries treat visitors from the United States. Much visa issuance policy is based on reciprocity—that is, the process for allowing a country’s citizens to enter the United States would be similar to the process followed by that country when U.S. citizens travel there. If the United States requires biometric identifiers when citizens of other countries apply for a visa, those countries may require U.S. citizens to submit a biometric when applying for a visa to visit their countries. Similarly, if the United States requires other countries to collect biometrics from their citizens and store the data with their passport for verification when they travel here, they may require the United States to place a biometric in its passports as well. As more countries require the use of biometrics to cross their borders, there is a potential for different biometrics to be required for entering different countries or for the growth of multiple databases of biometrics. 
Unless all countries agree on standard biometrics and standard document formats, a host of biometric scanners might be required at U.S. and other ports of entry. The International Civil Aviation Organization plans to standardize biometric technology for machine-readable travel documents, but biometric data-sharing arrangements between the United States and other countries would also be required. In January 2003, as required by the USA PATRIOT Act and the Enhanced Border Security and Visa Entry Reform Act, the Attorney General, the Secretary of State, and NIST jointly submitted a report that focuses on specific legislative requirements related to interoperable databases, biometric identifiers, and travel document authentication for entry only. The report discusses the current border control process and the need for a new approach, and identifies several issues that need to be addressed to make more extensive use of biometrics in automated border control systems. As a part of this report, NIST developed technical standards for biometric identifiers and tamper-resistance for travel documents. NIST reported that facial recognition and fingerprint recognition are the only biometric technologies with sufficiently large operational databases for testing at this time. NIST concluded that while iris recognition is a promising candidate, it requires collection of a large test database to test the uniqueness of iris data for large samples. NIST recommended that 10 fingerprints be used for background identification and noted that a dual biometric system using 2 fingerprint images and a face image may be needed to meet projected system requirements for verification. For tamper-resistance, NIST recommended the use of a public key infrastructure to authenticate the source of travel documents. 
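NIST's tamper-resistance recommendation comes down to the issuing authority signing a hash of the document data with its private key and inspection equipment verifying that signature with the corresponding public key: an altered document or a forged signature fails the check. The sketch below uses a deliberately tiny textbook RSA key pair (insecure, and invented purely for illustration; real document signing uses full-size keys and standardized formats) to show the flow:

```python
import hashlib

# Toy RSA key pair (insecure, illustration only): n = 61 * 53.
n, e, d = 3233, 17, 2753   # public key (n, e); private exponent d

def digest(document: bytes) -> int:
    # Reduce the document hash into the toy key's range.
    return int(hashlib.sha256(document).hexdigest(), 16) % n

def sign(document: bytes) -> int:
    # Issuing authority signs the digest with its private key.
    return pow(digest(document), d, n)

def verify(document: bytes, signature: int) -> bool:
    # Inspection equipment checks the signature with the public key.
    return pow(signature, e, n) == digest(document)

visa = b"name=DOE, JOHN; visa=B1/B2; expires=2013-10-26"
sig = sign(visa)

print(verify(visa, sig))            # True: signature checks out
print(verify(visa, (sig + 1) % n))  # False: a forged signature is rejected
```

In an actual PKI, trust also depends on distributing and validating the issuers' public-key certificates, which is where the cross-border data-sharing arrangements mentioned above come in.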
According to the report, the Attorney General and the Secretary of State have agreed to use a live-capture digital photograph and fingerprints for identity enrollment, background checks, and identity verification. However, the exact number of fingerprints required at enrollment has not been finalized. The report identifies several issues and considerations that need to be further evaluated and resolved. The resolution of these issues will have significant operational, technical, and cost implications. According to the report, if the various stakeholders of this cross-agency effort do not work out these details before major investments are made, the estimated cost and expected results of the investment will be at risk. Further, the report states that due to the size and complexity of the effort, the deployment schedule will need to be delayed at least 1 year from the October 26, 2004, target date established in the legislation. Many of the issues identified in the report are consistent with the challenges we identified in our work last year. For example, the report discusses the need to change the end-to-end business process to incorporate the enrollment and verification of biometric information from travelers. Further, the report cites the need to improve border security without a major adverse effect on tourism, commerce, and border traffic flow. Privacy issues and the effect on international relations are also addressed. Exception processing is discussed. According to the report, approximately 2 percent of the population cannot provide good fingerprint images. As a result, an alternate enrollment and identification procedure will be required for these people. To develop the biometric border control system, the report estimates it would cost about $3.8 billion including initial and recurring costs over a six-year period. 
The report cites a number of steps that need to be taken by a cross-agency project team to clarify the scope, costs, benefits, and schedule required to implement the legislative requirement. For example, the report cites the need to develop a cross-agency concept of operations for the entire end-to- end process that would guide the scoping, requirements definition, and trade-off analyses required to develop and deploy the system. The concept of operations would also help determine how the proposed solution can balance identity verification and efficient traffic flow objectives at land borders. The report also discusses the need to update the overall costs and benefits of the solution to confirm that the effort will achieve the benefits desired at an acceptable cost. Steps will also need to be taken to align U.S. biometric standards with those of other countries, particularly visa-waiver countries, in a manner consistent with the concept of operations. Finally, the report cites the need to define and establish a cross-agency program management and governance structure to drive the business change and deployment associated with this effort. As the Department of Homeland Security and other agencies consider a biometrics-based border security concept of operations, they may need to address current challenges that we have observed during our ongoing work at land ports of entry. At a minimum, these challenges represent potential implementation issues that could affect the security benefits intended by the new border security system. These challenges include: Integrity of the Inspections Process. The need to balance the dual objectives of identifying those who should not be permitted entry into the country and keeping traffic and trade flowing through the ports creates potential weaknesses in the process that biometrics can help resolve but not entirely. 
For example, we recently reported on our ability to enter the country at ports of entry with erroneous answers to inspector questions and counterfeit identification. Also, at land ports of entry, computer checks are made on the vehicle that travelers arrive in but not on the driver and passengers unless inspectors suspect wrongdoing. Moreover, we observed that new security procedures aimed at increasing process integrity were not consistently followed. With respect to alternative inspection programs, various trusted traveler programs, intended to process large numbers of pre-screened travelers quickly so that inspectors can devote more time to travelers whose risk is unknown, can be strengthened through wider use of biometrics. Some current programs are not attractive to many travelers because the cost of participation does not ensure time savings when crossing the border. Providing Technology and Equipment to Inspectors. Some current border operations are time-consuming because inspectors must separately log on and off of several lookout databases that need to be checked when more intensive, or secondary, inspections are required. This could increase the risk that an inspector might overlook valuable information. Further, inspectors still perform many routine administrative processes by hand, although some ports of entry have successfully automated some of these manual processes. Once the concept of operations for a new border security system is adopted, extensive introduction of new equipment and automated processes will require extensive training and reinforcement. Access to Intelligence Information. The amount of intelligence information border inspectors currently receive in a single day can be overwhelming, and inspectors report that they do not have enough time to read it. Further, because of the need to staff inspection lanes, some ports of entry reported not having time to conduct daily intelligence and safety briefings, as required. 
Ensuring that intelligence information is relevant, and that inspectors have sufficient time to review and absorb it, will present a significant challenge for a new border security system. Adequate and Consistent Inspector Training. Merging INS and Customs inspectors into a single shared inspection force will be a significant challenge because INS and Customs train their inspectors at two separate academies using two different curricula with little time devoted to learning each other's laws and regulations. In addition, training, particularly of new inspectors, is a continuing need after deployment of inspectors, but the pressures of inspection itself have taken precedence over both on-the-job training and formal training at some ports. In conclusion, biometric technologies are available today that can be used for border security. However, it is important to bear in mind that effective security cannot be achieved by relying on technology alone. Technology and people must work together as part of an overall security process. As we have pointed out, weaknesses in any of these areas, such as those we identified at land ports of entry, diminish the effectiveness of the security process. We have found that three key considerations need to be addressed before a decision is made to design, develop, and implement biometrics into a border control system: 1. Decisions must be made on how the technology will be used. 2. A detailed cost-benefit analysis must be conducted to determine that the benefits gained from a system outweigh the costs. 3. A trade-off analysis must be conducted between the increased security, which the use of biometrics would provide, and the effect on areas such as privacy and the economy. A report recently issued jointly by the Attorney General, Secretary of State, and NIST agrees with these considerations.
As DHS and other agencies consider the development of a border security system with biometrics, they need to define what the high-level goals of this system will be and develop the concept of operations that will embody the people, process, and technologies required to achieve these goals. With these answers, the proper role of biometric technologies in border security can be determined. If these details are not resolved, the estimated cost and performance of the resulting system will be at risk. Mr. Chairmen, this concludes my statement. I would be pleased to answer any questions that you or members of the subcommittees may have. For further information, please contact Nancy Kingsbury, Managing Director, Applied Research and Methods, at (202) 512-2700, or Richard Stana, Director, Homeland Security and Justice, at (202) 512-8777. Individuals making key contributions to this testimony include Yvette Banks, Naba Barkakati, Michael Dino, Barbara Guffy, Richard Hung, Rosa Lin, and Lori Weiss. Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003. Homeland Security: Challenges Facing the Coast Guard as it Transitions to the New Department. GAO-03-467T. Washington, D.C.: February 12, 2003. Weaknesses In Screening Entrants Into The United States. GAO-03-438T. Washington, D.C.: January 30, 2003. Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 2003. Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002. Homeland Security: Information Technology Funding and Associated Management Issues. GAO-03-250. Washington, D.C.: December 13, 2002. Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. Washington, D.C.: November 22, 2002. Homeland Security: INS Cannot Locate Many Aliens Because It Lacks Reliable Address Information. GAO-03-188. 
Washington, D.C.: November 21, 2002. Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. New York, NY: November 18, 2002. Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: November 15, 2002. Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions. GAO-03-155. Washington, D.C.: November 12, 2002. Border Security: Visa Process Should Be Strengthened as an Antiterrorism Tool. GAO-03-132NI. Washington, D.C.: October 21, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002.

One of the primary missions of the new Department of Homeland Security (DHS) focuses on border control--preventing the illegal entry of people and goods into the United States. Part of this mission is controlling the passage of travelers through official ports of entry into the United States. Facilitating the flow of people while preventing the illegal entry of travelers requires an effective and efficient process that authenticates a traveler's identity. Generally, identifying travelers at the ports of entry is performed by inspecting their travel documents, such as passports and visas, and asking them questions. Technologies called biometrics can automate the identification of individual travelers by one or more of their distinct physiological characteristics. Biometrics have been suggested as a way of improving the nation's ability to determine whether travelers are admissible to the United States. GAO found that biometric technologies are available today that can be used for border control. However, questions remain regarding the technical and operational effectiveness of biometric technologies in applications as large as border control.
Before implementing any biometric border control system, a number of other issues would have to be considered, including the system's effect on existing border control procedures and people, the costs and benefits of the system, and the system's effect on privacy, convenience, and the economy. Furthermore, technology is only part of the solution. Effective security requires technology and people to work together to implement policies, processes, and procedures. At land border ports of entry, DHS faces several challenges, including ensuring that the inspections process has sufficient integrity to enable inspectors to intercept those who should not enter our country, while still facilitating the entry of lawful travelers; ensuring that inspectors have the necessary technology, equipment, and training to do their job efficiently and effectively; and providing inspectors access to necessary intelligence information.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. No. 104-193) (PRWORA) made sweeping changes to national welfare policy. Principally, these reforms gave states the flexibility to design their own programs and the strategies necessary for achieving program goals, including how to move welfare recipients into the workforce. But because the act also changed the way in which federal funds for welfare programs flow to the states, most of the program's fiscal risks also shifted to the states. PRWORA created the TANF block grant, a fixed federal funding stream that replaced the AFDC and related programs in which federal funding matched state spending and increased automatically with caseload. Under AFDC, which entitled eligible families to aid, the federal funding was largely open-ended so that if a state experienced caseload and related cost increases, federal funds would increase with state funds to cover expenditures for the entire caseload. This open-ended federal commitment provided that financing for every dollar spent on these programs was shared by the federal government and the states, thereby limiting the states' exposure to escalating costs. In contrast, the TANF block grant eliminated the federal entitlement to aid. The federal government provides a fixed amount of funds regardless of any changes in state spending or the number of people the programs serve. While the states must also provide a fixed level of funds from their own resources—their maintenance of effort (MOE)—they are now responsible for meeting most of the costs associated with any increase in caseload on their own. How they plan to manage this fiscal risk is what I refer to in this testimony as contingency planning. In this new welfare partnership, it is tempting to suggest that since welfare reform devolved decisions regarding eligibility and program services to the states, the potential volatility of the caseload is no longer a federal concern.
However, in light of both federal requirements and their own fiscal limitations, states will be challenged during a downturn to maintain or increase state funds for benefits when they are most needed. States’ decisions regarding who to serve, for how long, and with what services will surely depend on how much flexibility they have with the resources— state and federal—that are available to finance their welfare programs. Although considerable uncertainties exist about the impacts of downturns, the potential cyclical nature of program costs as well as the fiscal constraints states face in responding to hard times heightens the importance of fiscal planning. Helping states maintain their programs was indeed recognized as a federal interest by Congress when it included the Contingency Fund and Loan Fund—mechanisms for states to gain access to additional federal funds—in TANF. It is unclear what impact a major economic downturn or recession will have on welfare participation given the significant reforms in national welfare policy. Recent studies have tried to establish a link between caseload trends and certain macroeconomic indicators in part to determine how sensitive welfare programs might be to changes in the economy. While the research literature generally suggests that caseloads may very well increase in an economic downturn, there is substantial uncertainty regarding the extent of the impact. These studies point to the variety of other factors affecting caseload levels, particularly with the advent of welfare reform. For example, a 1999 Council of Economic Advisors (CEA) report suggests that a 1 percent increase in the unemployment rate could produce a 5 to 7 percent increase in welfare caseloads. However, this same study noted that changes in family structure and welfare policies can significantly mitigate the impact of an economic downturn on caseloads. 
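As an illustration, the CEA rule of thumb reduces to simple arithmetic. The sketch below (written in Python purely for illustration) uses the report's 5 to 7 percent range; the caseload size and unemployment change are hypothetical:

```python
def projected_caseload(current_caseload, unemployment_rise_pts,
                       elasticity_low=5.0, elasticity_high=7.0):
    """Apply the CEA rule of thumb: each 1-point rise in the unemployment
    rate implies roughly a 5 to 7 percent increase in welfare caseloads."""
    low = current_caseload * (1 + elasticity_low / 100 * unemployment_rise_pts)
    high = current_caseload * (1 + elasticity_high / 100 * unemployment_rise_pts)
    return low, high

# Hypothetical state: 100,000 cases, unemployment rises 2 percentage points
low, high = projected_caseload(100_000, 2)
```

Under these hypothetical figures the rule implies roughly 110,000 to 114,000 cases, although, as the CEA study cautions, changes in family structure and welfare policy can substantially alter this relationship.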
In fact, the study attributes about one-third of the caseload reduction from 1996 through 1998 to the reforms ushered in by TANF, independent of the strong economy. Just as the reforms may have prompted reduced caseloads during times of economic expansion, the greater emphasis on work implies a tighter link between caseloads and the economy, making TANF more sensitive to an economic downturn than AFDC. On the other hand, the reforms may pose significant disincentives for people to return to the welfare rolls or to apply even if they are eligible during downturns. For example, PRWORA imposes a 5-year lifetime limit on federal assistance for individuals receiving ongoing assistance; many may try other options first before returning to the welfare rolls. In addition, many states now offer a variety of work supports such as child care, transportation subsidies, and an earned income tax credit (EITC) to families not receiving cash assistance. These supports may be enough to allow earnings from even a part-time job to support a family without returning to the cash assistance rolls. Budgetary stress caused by caseload volatility may be compounded by the limitations placed on most states by constitutional or statutory requirements to balance their general fund budgets. During a fiscal crisis, state policymakers face difficult choices regarding whom to serve, for how long, and with what services. But more important to the discussion today is that each of these "hard choices" must be financed in the context of fiscal limitations—including legislative restrictions, constitutional balanced budget mandates, or conditions imposed by the bond market—on states' ability to increase spending, especially in times of fiscal stress. For example, revenues may come in lower than expected during an economic downturn, and a state's enacted budget can fall into deficit.
State balanced budget requirements often motivate states to both reallocate resources within their budgets and cut program spending or increase taxes during recessions. Such difficulties, I am sure, come as no surprise to many of the members of this Subcommittee who have had to make many of these difficult choices while serving in state legislative bodies. For these reasons prudent fiscal planning, especially contingency budgeting for a fiscal “rainy day,” becomes particularly important. In a fiscal crisis, a state’s need to cut spending or increase revenues can be alleviated if it has accumulated surplus balances in rainy day funds—these surpluses may be used to cover a given year’s deficit. However, unless there are reserves specifically earmarked for low-income families, welfare programs will have to compete with other state priorities for any of the rainy day funds. Finding the right balance between saving and investing resources in programs that help people make the transition from welfare to work continues to be one of the main challenges for states as they develop strategies to address the needs of low-income families. To set aside reserves for future welfare costs, states have two options: they can save federal TANF funds and/or they can save their own funds. However, states noted significant disincentives to save associated with both of these options. State officials told us that there is concern that accumulating unspent TANF balances might signal that the funds are not needed and that they have been under considerable pressure to spend their TANF balances more quickly to avoid the accumulation of large unspent balances in the U.S. Treasury. States have accumulated a portion of their own funds in general purpose rainy day funds, but welfare would have to compete with other claims for these dollars when these dollars are released from state treasuries. 
Under TANF, the amount of each state's block grant was based on the amount of federal AFDC funds spent by the state when caseloads and spending were at historic highs. Because caseloads have fallen so dramatically, states generally have been able to reap the fiscal benefits of welfare reform by parlaying abundant federal resources into new programs and savings. Any federal funds they choose to reserve must remain at the U.S. Treasury until the states need them for low-income families. As of September 30, 2000, states reported leaving $9 billion in unspent TANF funds at the U.S. Treasury; this amounts to 14.5 percent of the total TANF funds awarded since 1996. Although many might view these balances as a de facto rainy day fund for future welfare costs, in fact there is probably less here than meets the eye. First, as we will discuss in more detail, the data reported by the states are misleading. Second, the reported balances themselves vary greatly among the states, suggesting that some states may not be as prepared to address the fiscal effects an economic downturn may have on their welfare programs without additional federal assistance, while others may have saved substantially more than they might need. For example, some states report spending all their federal funds—essentially holding nothing in reserve—while others report accumulated reserves totaling more than their annual block grants. Wyoming, for instance, reports that nearly 70 percent of the TANF funds it has been awarded since 1997 remain unspent, whereas Connecticut reports spending all of its TANF funds. States do not report unspent balances in a consistent manner, making it difficult to ascertain how much of these balances is truly uncommitted and available for future contingencies. Therefore, federal policymakers lack reliable information to help assess states' plans for economic contingencies, whether the levels of available funds are adequate, and whether all states have access to these funds.
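A back-of-the-envelope check on these figures is straightforward. The $9 billion balance and the 14.5 percent share come from state reports; the derived total is our arithmetic inference, sketched here in Python purely for illustration:

```python
# Reported figures as of September 30, 2000
unspent_balance = 9.0e9     # unspent TANF funds left at the U.S. Treasury
share_of_awards = 0.145     # reported share of total TANF funds awarded

# Implied total TANF funds awarded since 1996: roughly $62 billion
total_awarded = unspent_balance / share_of_awards

# State-level variation in the share of awarded funds left unspent
wyoming_share = 0.70        # nearly 70 percent unspent since 1997
connecticut_share = 0.0     # reports spending all of its TANF funds
```

The wide spread between the Wyoming and Connecticut figures is what makes the aggregate $9 billion balance a poor proxy for national preparedness.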
Department of Health and Human Services’ (HHS) regulations require that if a state has allocated a portion of its TANF grant to a rainy day fund, the state should report these balances as unobligated. But, state rainy day funds for welfare programs represent only a portion of the total reported unobligated balances. These balances can represent funds the state has saved for a rainy day, funds for which the state has made no spending plans, or funds the state has committed for activities in future years. For example, in developing a budget for a new child care program, officials in Wisconsin assumed that once the program was fully subscribed it would require all available resources—including any unobligated TANF funds from previous fiscal years. State officials said that even though at the end of federal fiscal year 2000 the state reported $40 million TANF funds as unobligated, the state has programmed these funds to pay child care subsidies to low-income families in future reporting periods. This is a case where a reported unobligated balance provides very little information about whether these funds are committed or simply unbudgeted. States also report unspent TANF funds as unliquidated obligations, which means that, to varying degrees, an underlying commitment exists for the funds either through a contract for services for eligible clients or to a county for expenses it will incur in operating a county-administered welfare program. But it is unclear how much of what is currently obligated is committed for future needs. For example, both California and Colorado have county-administered welfare systems. These states pass most of their annual block grant directly to the counties. As caseloads have continued to decline in both states, the budgets over-estimated expenditures leaving considerable balances unspent. Although these funds remain in the U.S. Treasury until a county needs to spend them, they remain as unliquidated obligations committed to the counties. 
California reports that it has over $1.6 billion in unliquidated TANF obligations. But the state reports no unobligated balances, implying that all these funds are earmarked. Recently, California amended its state statute to allow the state to deobligate some of these funds, if necessary, and make them available to other counties. Likewise, as of September 30, 2000, Colorado reports about $95 million in unliquidated obligations, but passes virtually all TANF resources to the counties. As of June 30, 2000, the state estimated that counties hold about $67 million in reserves—or about 70 percent of the total unliquidated obligations—for future contingencies. As highlighted in the above examples, the difference between unobligated balances and unliquidated obligations is often unclear and varies by state. Significant portions of California's and Colorado's unspent funds are not yet actually committed for specific expenditures, but these facts cannot be determined based on the aggregate data, in part because of the way HHS requires states to report funds. Reporting a significant share of their unspent balances as unliquidated obligations implies that there is an underlying commitment on these funds when, in fact, these funds are no more committed than the funds Wisconsin must report in its unobligated balances but which are budgeted for expected outlays in Wisconsin's child care subsidy program. Even though some states might consider their unobligated balances for TANF to be rainy day funds, it does not appear that the amounts reserved were based on any kind of contingency planning or analysis. For example, 5 of the 10 states we studied told us that they consider a portion of the funds left at the U.S. Treasury to be rainy day funds for unanticipated program needs. But the levels of the reserves established in those five states were not determined through a fiscal planning process that reflects budgetary assumptions about projected future needs.
Instead, these states’ statutes merely designate all TANF funds not already appropriated by the state legislature for other purposes as constituting the state’s welfare rainy day fund, a method that clearly is not based on anticipated needs or contingencies. The lack of transparency regarding states’ plans for their unspent TANF funds prompted us, in 1998, to recommend that HHS and the states work together to explore options for enhancing the information available regarding these balances. Although HHS, the National Governor’s Association (NGA), and the National Conference of State Legislatures (NCSL) all agreed with us that more information regarding unspent TANF balances would be useful, little progress has been made implementing this recommendation and HHS’ final regulations, issued on April 12, 1999 did not address this issue. States were already concerned that the TANF reporting requirements would pose a substantial burden on state program administration and argued that adding another reporting requirement to allow states to signal their intentions for their unspent balances would only add to those burdens. However, the lack of useful information on these balances continues to weaken the effectiveness of congressional oversight over TANF funding issues, including how well prepared states may be to address a fiscal downturn. Our 1998 recommendation proposed a strategy that state and federal officials had tried before and found to be successful. In 1981, a number of categorical grants were block granted to states to provide maximum flexibility in developing and managing programs, along the same lines that TANF was designed in 1996. However, due to variations in the way states reported information to the federal government on activities funded by some of these block grants, Congress had no national picture of the grants’ impact. 
States and some national organizations recognized that these aggregate data were important and developed their own strategies to collect the data. We found that a cooperative data collection approach was easier to implement when (1) there was federal funding to support data collection activities, (2) national-level staff worked with state officials, and (3) state officials helped in systems design. We continue to believe that better information on the status of these unspent balances is crucial to effective oversight and could even enhance states' incentives to save some of their TANF funds. Absent credible information on balances, there may be a greater risk that Congress could take action to recoup TANF funds—a prospect that has prompted some states to draw down and spend their TANF funds rather than leave them in the Treasury. Although many states have healthy general rainy day funds for which all programs would compete during times of fiscal stress, only one of the states in our review, Maryland, has earmarked state funds in a reserve specifically for contingencies in its welfare program. Setting aside state funds in reserve for welfare requires tradeoffs for state decisionmakers among competing needs for the funds during a downturn. In addition, any funds a state sets aside for future welfare contingencies cannot count toward a state's maintenance of effort in the year they are reserved—in order to qualify as MOE, the funds must be spent. Therefore, it is a very expensive proposition indeed for a state to budget both for a welfare reserve and to meet its MOE because it then would have far fewer resources available to finance other state priorities. Maryland found a way to transfer the costs of saving state funds to the federal government.
In state fiscal year 2001, the state identified nine program accounts with annual expenditures of state funds totaling about $30 million that, under the broad and flexible rules governing TANF expenditures, could be funded with federal funds. In developing the budget, the state replaced these state funds with federal funds. Instead of using the "freed-up" state funds for nonwelfare activities, the state used them to establish a dedicated reserve for its welfare program. While the ability to carry forward TANF balances is likely viewed as the principal mechanism by which states can prepare for a rainy day, PRWORA also created two safety-net mechanisms for states to access additional federal resources in the event of a recession or other emergency—the $2 billion Contingency Fund for State Welfare Programs (Contingency Fund) and the $1.7 billion Federal Loan Fund for State Welfare Programs (Loan Fund). The Contingency Fund is authorized through 2001, at which time it expires. The President's fiscal year 2002 budget proposal did not include a request to reauthorize the Contingency Fund. Because of a provision in the Adoption and Safe Families Act of 1997 that reduced the TANF Contingency Fund by $40 million, the current balance in the Contingency Fund is $1.96 billion. States are deemed "needy" and eligible to receive funds from the Contingency Fund if they trigger one of two criteria: (1) the state's unemployment rate exceeds 6.5 percent for 3 months and is equal to at least 110 percent of its rate in the same period of the previous year, or (2) its average monthly food stamp caseload for the most recent 3-month period is equal to at least 110 percent of the average monthly caseload from the same 3-month period in fiscal year 1994 or 1995. Once eligible, a state must certify that it has increased its own current spending to prewelfare reform levels before it can gain access to the fund.
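The two "needy state" triggers can be expressed as a simple decision rule. The sketch below (Python, for illustration only) captures the thresholds as stated above; the statute's actual measurement periods and data definitions are more detailed than shown:

```python
def needy_state(unemp_rate_3mo, unemp_rate_prior_year,
                food_stamp_caseload_3mo, caseload_base_fy94_95):
    """Return True if a state triggers either Contingency Fund criterion.

    Criterion 1: the 3-month unemployment rate exceeds 6.5 percent and is
    at least 110 percent of the rate in the same period a year earlier.
    Criterion 2: the average monthly food stamp caseload is at least
    110 percent of the corresponding fiscal year 1994 or 1995 base.
    """
    unemployment_trigger = (unemp_rate_3mo > 6.5 and
                            unemp_rate_3mo >= 1.10 * unemp_rate_prior_year)
    food_stamp_trigger = (food_stamp_caseload_3mo >=
                          1.10 * caseload_base_fy94_95)
    return unemployment_trigger or food_stamp_trigger
```

For example, a state whose unemployment rate rose from 6.0 to 7.0 percent would trigger the first criterion, while a state with flat unemployment and a shrinking food stamp caseload would trigger neither.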
Requiring states to increase their own financial stake in their welfare programs before giving them additional federal funds is, in principle, a reasonable approach that seeks to balance both the federal government's interest in ensuring that states in trouble have access to additional funds and its interest in ensuring that states have done everything possible to address the shortfalls before turning to the federal treasury. Not only does the statute require states to bring their spending up to the prewelfare reform levels at a time when states are experiencing fiscal stress, but PRWORA establishes a different and more challenging base for the Contingency Fund's MOE. While a state's MOE requirement under the basic TANF program can include state funds expended under certain state programs and child care expenditures, the MOE requirement for the Contingency Fund does not include these items. Because states spend a significant share of their MOE funds on activities that do not qualify as Contingency Fund MOE expenditures, state budget officials told us that, rather than shifting their spending priorities to meet the Contingency Fund MOE, they would find other ways to manage deficits in their TANF budgets before they could consider turning to the Contingency Fund. In 1997, eight states qualified for contingency funds. However, only two states requested and were awarded contingency funds—North Carolina and New Mexico. In the end, only New Mexico complied with the Fund's requirements and accepted $2 million. No state has used the Fund since 1997. Equally important as the requirement that states raise their own financial commitment in order to gain access to additional federal funds is a requirement that states share in all additional program costs—even beyond the MOE requirements. Requiring a match encourages states to be more cost-conscious than if the costs of an expanding caseload were covered only with federal dollars.
While the Contingency Fund requires states to match all federal dollars at the states' federal medical assistance percentage (FMAP) rate, the statute goes a step further. The statute limits the monthly draws to one-twelfth of 20 percent of a state's annual block grant. This limitation requires a complex annual reconciliation process to certify not only that the state meets its matching requirement but also that it did not receive more than its monthly proportional share of contingency funds (see figure 1). Prorating a state's draws from the Contingency Fund—especially if the state qualifies for a period that spans two federal fiscal years—reduces the share of federal funds to which it is entitled. This effectively increases the matching requirement (even higher than required under AFDC), thus raising the state's costs for gaining access to the funds. Unlike the Contingency Fund, the Loan Fund does not have triggers. Instead, states that have not incurred penalties for improper use of TANF funds are eligible for loans from the Loan Fund. Such loans are to have a maturity of no more than 3 years at an interest rate comparable to the current average market yield on outstanding marketable obligations of the U.S. Treasury with comparable maturities. Some state officials told us that they are eligible for better financing terms in the tax-exempt municipal bond market. More important, officials in some states indicated that borrowing specifically for social welfare programs in times of fiscal stress would not receive popular support. In summary, neither the Contingency Fund—as currently designed—nor the Loan Fund is likely to be used by states in a fiscal crisis to obtain more resources for their welfare programs. The Loan Fund is most likely the wrong mechanism to provide assistance to states in a fiscal crisis.
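The monthly payment limitation on Contingency Fund draws amounts to a simple cap, sketched below with hypothetical figures in Python (the actual reconciliation process, including proration across fiscal years, is more involved than this):

```python
def monthly_contingency_cap(annual_block_grant):
    """Monthly draw limit: one-twelfth of 20 percent of the state's
    annual TANF block grant."""
    return annual_block_grant * 0.20 / 12

def federal_contingency_share(extra_spending, fmap):
    """Federal contingency dollars are matched at the state's FMAP rate,
    so the federal share of each additional program dollar is fmap."""
    return extra_spending * fmap

# Hypothetical state: $300 million annual block grant, 60 percent FMAP
cap = monthly_contingency_cap(300e6)               # $5 million per month
fed_share = federal_contingency_share(10e6, 0.60)  # federal share of $10 million
```

Under these hypothetical figures, the state could draw no more than $5 million in any month regardless of need, which illustrates why proration during a prolonged downturn effectively raises the state's own share of program costs.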
However, if the Contingency Fund is reauthorized, Congress could also contemplate improvements to enhance its usefulness in addressing budgetary shortfalls in states’ welfare programs that, at the same time, could provide stronger incentives for states to save for a rainy day. Although PRWORA struck a new fiscal balance between the federal government and the states in terms of welfare spending, both the states and the federal government have a significant interest in preparing the program to meet challenges in times of fiscal distress. Contingency planning is about being prepared for the unknown—as the economy shows possible signs of weakening, we need to begin to think about how prepared we are to maintain this important aspect of the nation’s safety net. Although many view the states’ large unspent TANF balances as the de facto contingency fund, these balances vary across states; this implies that some states may be better prepared for a recession than others. More important, current reporting requirements do not give us reliable, consistent information regarding states’ actual plans for these monies. According to NGA, few states have engaged in a systematic fiscal planning process to project their needs under a variety of economic scenarios. While we don’t know how states’ welfare programs will respond to a weakened economy, we know both the federal government and the states have a responsibility to ensure the viability of TANF in good times and bad. Before addressing how contingency planning can be improved for the future, the federal government needs better information on states’ current plans. At the same time, Congress could consider ways to both strengthen federal contingency mechanisms and give states greater incentives to save for the future. In 1998, we recommended that the Secretary of Health and Human Services explore with the states various options to enhance information regarding states’ plans for their unused TANF balances. 
We said that such information could include explicit state plans for setting aside TANF-funded reserves for the future, which would provide more transparency regarding these funds and give states an opportunity to consider more explicitly their long-term fiscal plans for TANF. Although HHS concurred with our recommendation, to date, we have seen no progress in this area. We continue to believe that Congress would benefit from more complete information on states' plans for future contingencies, including unspent TANF balances. While states often face burdens with respect to federal financial reporting requirements, states have historically recognized the benefits of cooperative data collection and reporting efforts and worked successfully with federal agencies to collect data that can give oversight officials a broad, national perspective of how they are using federal block grant funds. Allowing for more transparency regarding states' fiscal plans for TANF funds could enhance congressional oversight over the multi-year timeframe of the grant and provide states with an opportunity to more explicitly consider their long-term fiscal plans for the program. While the opportunity to more clearly signal their intentions for these funds could prompt states to save, Congress must have some assurance that states' estimates of their contingency needs were developed using credible, realistic estimating procedures. In order for a state to report to the federal government a balance in a rainy day fund, and in order for the federal government to have some level of confidence in such a figure, the federal government could give states guidance on how to designate TANF balances as a valid rainy day fund. Such guidance could include requirements that a state rainy day fund (1) include criteria both for estimating the appropriate reserve balances and for releasing funds and (2) be auditable.
This guidance could help states signal that much of these balances are, in fact, committed. Furthermore, requiring that reserves be determined by credible, transparent estimating procedures would help provide better estimates of the potential need for federal contingency funds. The Contingency Fund, as currently designed, has not proven to be an inviting option to the states that have actually experienced fiscal stress to date. Should Congress decide to reauthorize the Contingency Fund, consideration could be given to approaches that would both improve the usefulness of the fund for hard-pressed states and ensure that states contribute their fair share to future welfare costs. Such approaches could include (1) eliminating the more restrictive Contingency Fund MOE and substituting the more flexible basic TANF MOE and (2) eliminating the Monthly Payment Limitation (MPL) on the amount of contingency funds to which each state has access. These actions could help strengthen the role of the Contingency Fund in state contingency budgeting. Realigning the MOE and eliminating the MPL would make the Contingency Fund more accessible and, therefore, more responsive. If states had better access to federal contingency funds, they might be more likely to use the money when needed. However, greater accessibility must be balanced by fiscal responsibility. If it were too easy for states to access federal contingency funds, they might be less likely to save for a rainy day on their own, which could pose risks to the federal Treasury. The changes discussed above would still require states to increase their own spending to pre-TANF levels (i.e., meet a 100 percent MOE) to gain access to the Contingency Fund—a higher level than they must maintain for the regular TANF program—as well as provide a matching share for the additional federal funds.
By broadening the fiscal base that states can draw upon to meet this higher MOE, these changes might not only make the fund more accessible in times of need but also prompt states to save their own funds in anticipation of accessing the federal funds. There are other options that could strengthen states' incentives to save. For example, Congress could (1) allow states to count rainy day funds towards their MOE and (2) allow states to draw down their entire TANF grant and save these funds in their own treasuries. Allowing states to count rainy day funds towards their MOE would give them a greater incentive to save. However, "maintenance of effort" implies an actual expenditure and is a critical aspect of PRWORA. If states save their own funds instead of spending them, they might be more likely to draw down all of their TANF dollars now to replace the state dollars they save for the future. This outcome could be mitigated, however, by limiting the amount of rainy day funds that states could count towards their MOE. In addition, as we suggested earlier when discussing the TANF balances saved by states, states could also be required to certify that state rainy day funds are in fact auditable and include criteria for estimating and releasing the funds. Some state officials have argued that their incentive to save TANF funds for the future could be bolstered by allowing states to keep unspent TANF funds in their own accounts rather than at the U.S. Treasury. They believe that this might reduce incentives for Congress to rescind unspent balances since the outlays would be recognized earlier at the time of the grant award, not when the money is actually spent for a program need. State officials also told us that this would alleviate the perceived pressure to spend TANF funds rather than save them. However, it is important to note that, regardless of where these federal funds are "stored," states are accountable for these funds.
As such, Congress still needs consistent, reliable, and auditable information on these funds. There are significant issues associated with this proposal. First, if states draw down all unspent balances in the current year, the rate of outlays recorded for the TANF program would shift forward. Accordingly, the federal budget surplus would be proportionately lower in the near term. Second, the federal government would incur interest costs while states could realize interest earnings. The Cash Management Improvement Act of 1990 (CMIA) helps ensure that neither the states nor the federal government incurs unnecessary interest costs or forgoes interest income in the course of federal grant disbursement by prohibiting states from drawing down funds until they are needed. If Congress permitted, notwithstanding CMIA, states to draw down their TANF balances to establish reserves, it could also require states to reimburse the U.S. Treasury for any interest they earn on the drawdowns. This would maintain the spirit of the CMIA by preserving fiscal neutrality for the federal government and the states, since the states could use interest earnings they gain on investing the drawdowns to reimburse the Treasury. Essentially, states would have to justify why TANF deserves an exemption from a governmentwide grant policy that settled years of intergovernmental conflicts between federal and state administrators. The permanent nature of the appropriation to each state as well as the significant devolution of responsibilities to states for addressing the program's fiscal risks may argue for such a change, but other federal interests would have to be weighed as well. For example, some may argue that CMIA promotes transparency by ensuring that states' unspent balances remain in the federal Treasury rather than in state treasuries. This concern could be addressed through federal reporting on states' expenditures and reserves.
In conclusion, the TANF program has established a new fiscal partnership that has supported the transition to work-based welfare reforms. Because the partnership has yet to be tested in times of fiscal stress, now is the time for both federal and state governments to consider actions to prepare for more uncertain times and the possibility of higher program costs. Although TANF currently contains certain mechanisms to provide a fiscal cushion, the options we have presented provide an opportunity to promote greater assurance that all states will be poised to respond to future fiscal contingencies affecting their TANF programs. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.
In April 2017, we issued our latest report on NNSA's 25-year plans to modernize the nation's nuclear weapons stockpile and its supporting infrastructure. In this report, we identified two areas of misalignment between NNSA's modernization plans and the estimated budgetary resources needed to carry out those plans, which could result in challenges to NNSA in affording its planned portfolio of modernization programs. First, we found that NNSA's estimates of funding needed for its modernization plans sometimes exceeded the budgetary projections included in the President's planned near- and long-term modernization budgets. In the near term (fiscal years 2018 through 2021), we found that NNSA may have to defer certain modernization work beyond that time period in order to execute its program within the planned budget, which could increase modernization costs and schedule risks. This is a pattern we have previously identified as a "bow wave"—an increase in future years' estimated budget needs that occurs when agencies are undertaking more programs than their resources can support. In the long term (fiscal years 2022 through 2026), we found that NNSA's modernization program budget estimates sometimes exceeded the projected budgetary resources planned for inclusion in the President's budget, raising additional questions about whether NNSA will be able to afford the scope of its modernization program. Second, the costs of some major modernization programs—such as for nuclear weapon refurbishments—may also increase and further strain future modernization budgets. As we reported in April 2017, NNSA estimates of funding needed for its modernization plans sometimes exceeded the budgetary projections included in the President's planned near- and long-term modernization budgets.
We found that NNSA may have to defer certain modernization work planned for fiscal years 2018 through 2021 beyond its current 5-year planning period, called the Future-Years Nuclear Security Program (FYNSP). As we reported in April 2017, this is caused by a misalignment between NNSA’s budget estimates for certain nuclear modernization programs and the President’s budgets for that period. We concluded that this deferral could exacerbate a significant bow wave of modernization funding needs that NNSA projects for the out-years beyond the FYNSP and could potentially increase modernization costs and schedule risks. As we have previously reported, such bow waves occur when agencies defer costs of their programs to the future, beyond their programming periods, and they often occur when agencies are undertaking more programs than their resources can support. As NNSA’s fiscal year 2017 budget materials show, its modernization budget estimates for fiscal years 2022 through 2026—the first 5 years beyond the FYNSP—may require significant funding increases. For example, in fiscal year 2022, NNSA’s estimates of its modernization budget needs are projected to rise about 7 percent compared with the budget estimates for fiscal year 2021, the last year of the FYNSP, as shown in figure 1. The analysis in our April 2017 report showed that NNSA has shifted this modernization bow wave to the period beyond the FYNSP time frame in each of the past four versions of the annual Stockpile Stewardship and Management Plan. For example, in the Fiscal Year 2014 Stockpile Stewardship and Management Plan, NNSA’s budget estimates for its modernization programs increased from a total of about $9.3 billion in fiscal year 2018, the last year of the FYNSP, to about $10.5 billion in fiscal year 2019, the first year after the FYNSP—an increase of about 13 percent. 
Similar patterns showing a jump in funding needs immediately after the last year of the FYNSP are repeated in the funding profiles contained in the fiscal year 2015, 2016, and 2017 plans. As we have previously reported, deferring more work to future years can increase cost and schedule risks and can put programs in the position of potentially facing a backlog of deferred work that grows beyond what can be accommodated in future years. The Fiscal Year 2017 Stockpile Stewardship and Management Plan shows that NNSA’s overall modernization budget estimates for fiscal years 2022 through 2026—the out-years beyond the FYNSP—may exceed the projected funding levels in the President’s budgets for that time period, raising further questions about the affordability of NNSA’s nuclear modernization plans. According to NNSA’s data, the agency’s estimated budget needed to support modernization totals about $58.4 billion for fiscal years 2022 through 2026, and the out-year funding projections contained in the President’s fiscal year 2017 budget for the same period total about $55.5 billion. The President’s out-year funding projections, therefore, are approximately $2.9 billion, or about 5.2 percent, less than NNSA estimates it will need over the same time period. Despite this potential shortfall, NNSA’s Fiscal Year 2017 Stockpile Stewardship and Management Plan concludes that the modernization program is generally affordable in the years beyond the FYNSP for two reasons. First, the President’s out-year funding projections are sufficient to support NNSA’s low-range cost estimates for its modernization programs for fiscal years 2022 through 2026. Based on NNSA data, the low-range cost estimates for fiscal years 2022 through 2026 total approximately $54.4 billion and the President’s out-year funding projections total about $55.5 billion. 
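The percentage figures in this discussion follow directly from the reported totals. As a minimal arithmetic sketch (values are in billions of nominal dollars, taken from the plans cited above; the variable names are illustrative only):

```python
# Quick check of the budget figures cited above (billions of nominal dollars,
# as reported in NNSA's Stockpile Stewardship and Management Plans).

# Fiscal Year 2014 plan: jump from the last FYNSP year to the first out-year.
fy2018_estimate = 9.3
fy2019_estimate = 10.5
jump = (fy2019_estimate - fy2018_estimate) / fy2018_estimate
print(f"Post-FYNSP jump: {jump:.0%}")  # 13%

# Fiscal Year 2017 plan: out-year totals for fiscal years 2022 through 2026.
nnsa_need = 58.4             # NNSA's estimated modernization budget needs
president_projection = 55.5  # President's out-year funding projections
low_range = 54.4             # NNSA's low-range cost estimate

shortfall = nnsa_need - president_projection
print(f"Shortfall: ${shortfall:.1f} billion ({shortfall / president_projection:.1%})")

# The plan's affordability conclusion holds only at the low end of the range.
print(president_projection >= low_range)   # True
print(president_projection >= nnsa_need)   # False
```

The check reproduces the roughly 13 percent post-FYNSP jump and the $2.9 billion out-year gap reported in the plans, and shows why the affordability conclusion depends on executing at the low end of the cost range.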
Figure 2 illustrates data from the 2017 plan showing NNSA's nominal budget estimates, including high- and low-range cost estimates for its modernization program, along with the out-year funding projections from the President's fiscal year 2017 budget, for fiscal years 2022 to 2026. Second, NNSA concludes that its modernization programs are generally affordable beyond the FYNSP because the agency's estimated modernization budget needs will begin to decrease in fiscal year 2027. In our April 2017 report, we noted that NNSA's conclusion—that its modernization program is affordable because the President's out-year funding projections fall within NNSA's modernization cost ranges—is overly optimistic. This is because NNSA's conclusion is predicated on optimistic assumptions regarding the cost of the modernization program beyond the FYNSP, particularly for fiscal years 2022 through 2026. For the program to be affordable, NNSA's modernization programs would need to be collectively executed at the low end of their estimated cost ranges. The plan does not discuss any options NNSA would pursue to support or modify its modernization program if costs exceeded its low-range cost estimates. In addition, the Fiscal Year 2017 Stockpile Stewardship and Management Plan states that the nominal cost of NNSA's modernization program is expected to decrease by approximately $1 billion in fiscal year 2027. In that year, according to the 2017 plan, it is anticipated that NNSA's estimated budgets for its modernization program will begin to fall in line with projections of future presidential budgets. However, as we noted in our April 2017 report, the decrease that NNSA anticipates in its modernization funding needs beginning in fiscal year 2027 may not be achievable if the projected mismatch between NNSA's estimates of its modernization budget needs and the projections of the President's modernization budget for fiscal years 2022 through 2026 is not resolved.
This mismatch creates concerns that NNSA will not be able to afford planned modernization costs during fiscal years 2022 through 2026 and will be forced to defer them to fiscal year 2027 and beyond, continuing the bow wave patterns discussed above. Our April 2017 report identified misalignment between NNSA's estimate of its budget needs and NNSA's internal cost range estimates for several of its major modernization programs. Further, we found that the costs of some major life extension programs (LEPs) may increase in the future, which may further strain NNSA's planned modernization budgets. With respect to the alignment of NNSA's estimate of its budget needs and NNSA's internal cost range estimates, we found that NNSA's budget estimates were generally consistent with NNSA's high- and low-range cost estimates. However, for some years, NNSA's low-range cost estimates exceeded the budget estimates for some of the programs, suggesting the potential for a funding shortfall for those programs in those years. Specifically, we found that the low-range cost estimates for the W88 Alteration 370 program and all LEPs discussed in our April 2017 report exceeded their budget estimates for some fiscal years within the 10-year time period from fiscal year 2017 to 2026. As we reported in 2013 and 2016, this misalignment indicates that NNSA's estimated budgets may not be sufficient to fully execute program plans and that NNSA may need to increase funding for these programs in the future. Additionally, we found that the costs of two ongoing nuclear weapon LEPs and the W88 Alteration 370 program may increase in the future, based on NNSA information that was produced after the release of the fiscal year 2017 budget materials. These potential cost increases could further challenge the extent to which NNSA's budget estimates support the scope of modernization efforts. The programs facing potential cost increases include the following:

- B61-12 LEP. An independent cost estimate for the program completed in October 2016 exceeded the program's self-conducted cost estimate (conducted in June 2016) by $2.6 billion. We are conducting ongoing work to determine how, if at all, NNSA has reconciled this difference.
- W80-4 LEP. Officials from NNSA's Office of Cost Policy and Analysis told us that this program may be underfunded by at least $1 billion to meet the program's existing schedule.
- W88 Alteration 370. According to officials from NNSA's Office of Cost Policy and Analysis, this program's expanded scope of work may result in about $1 billion in additional costs.

To help NNSA put forth more credible modernization plans, we recommended in our April 2017 report that the NNSA Administrator include an assessment of the affordability of NNSA's portfolio of modernization programs in future versions of the Stockpile Stewardship and Management Plan, such as by presenting options (e.g., potentially deferring the start of or canceling specific modernization programs) that NNSA could consider taking to bring its estimates of modernization funding needs into alignment with potential future budgets. In commenting on our report, NNSA neither agreed nor disagreed with our recommendation. The Secretary of Energy has taken several important steps that demonstrate DOE's commitment to improving contract and project management. In our recent reports, we have noted progress as DOE has developed and implemented corrective actions to identify and address root causes of persistent project management challenges, as well as progress in the department's monitoring of the effectiveness and sustainability of corrective actions. However, DOE's recent efforts have not fully addressed several areas where the department continues to have shortcomings.
As we noted in our 2017 high risk report, DOE has taken several important steps that demonstrate its commitment to improving project management—steps that have been supported by senior leadership within the department. Specifically, based in part on our December 2014 recommendation, DOE issued a revised project management order, DOE Order 413.3B, in May 2016 and added the following requirements for its program offices:

- Develop cost estimates in accordance with industry best practices.
- Conduct analyses of alternatives for projects consistent with industry best practices and independent of the contractor organization responsible for managing the construction of, or constructing, a capital asset project.
- Ensure that major projects' designs and technologies are sufficiently mature before contractors are allowed to begin construction.
- Conduct a root cause analysis if a major project is expected to exceed its approved cost or schedule.

DOE also made significant efforts to monitor the effectiveness and sustainability of corrective actions to address project management challenges. For example, the Secretary strengthened the Energy Systems Acquisition Advisory Board by changing it from an ad hoc body to an institutionalized board responsible for reviewing all capital asset projects with a total project cost of $100 million or more. The Secretary also created the Project Management Risk Committee, which includes senior DOE officials and is chaired by a new departmental position—the Chief Risk Officer. The committee is chartered to assess the risks of projects across DOE and advise DOE senior leaders on cost, schedule, and technical issues for projects. Although DOE has taken these important actions, it is too early to tell whether front-end planning problems persist. DOE has previously acknowledged its longstanding problems with front-end planning, stating that insufficient front-end planning has consistently contributed to DOE projects not finishing on budget or schedule.
Our recent work also indicates that continued senior-level attention on front-end planning may be warranted. In August 2016, we found problems with DOE’s front-end project planning at the Waste Isolation Pilot Plant (WIPP) for the new permanent ventilation system. This system is being built to enable DOE to resume full operations of the geological nuclear waste repository, which were suspended after a radiological release accident in February 2014. DOE did not follow all best practices in analyzing and selecting an alternative for the new ventilation system at WIPP, which DOE estimated will cost between $270 million and $398 million to build and will be completed by the end of March 2021. For example, DOE did not select the preferred alternative based on assessing the difference between the life-cycle costs and benefits of each alternative, as called for by best practices and now required by DOE’s revised project management order. We recommended that DOE require projects, including the WIPP ventilation system, to implement recommendations from independent analysis of alternatives reviews or document the reasons for not doing so. DOE concurred with the recommendation and planned to incorporate guidance in its updated project review guide on how DOE offices should address recommendations from independent reviews. In August 2016, we found that DOE did not follow project management requirements in its front-end planning for an alternative to the Chemistry and Metallurgy Research Replacement (CMRR) project. After spending $450 million designing the project, NNSA reversed its decision to build a large nuclear facility because of projected excessive cost growth. Instead, NNSA revised the CMRR project to use existing and smaller new facilities. We found that NNSA did not define key parameters for one aspect of the new project, including the capacity for analyzing plutonium that the project should provide, as directed by NNSA policy. 
We made several recommendations, including that NNSA identify the capacity for analyzing plutonium for the revised CMRR project. NNSA neither agreed nor disagreed with the recommendations. DOE's recent efforts do not address several areas where it continues to have shortcomings, including (1) acquisition planning for its major contracts, (2) the quality of enterprise-wide cost information available to DOE managers and key stakeholders, (3) DOE's need for a program management policy, (4) how DOE's new project management requirements will be applied to its major legacy projects, and (5) whistleblower protections. During the acquisition planning phase for contracts, critical contract decisions are made that have significant implications for the cost and overall success of an acquisition. In August 2016, we examined DOE's use of management and operating (M&O) contracts. We found that DOE did not consider acquisition alternatives beyond continuing its longstanding M&O contract approach for 16 of its 22 M&O contracts. We concluded that without considering broader alternatives in the acquisition planning phase, DOE cannot ensure that it is selecting the most effective scope and form of contract, raising risks for both contract cost and performance. The size and duration of DOE's M&O contracts—22 M&O contracts with an average potential duration of 17 years, representing almost three-quarters of DOE's spending in fiscal year 2015—underscore the importance of planning for every M&O acquisition. According to DOE officials, one of the primary reasons DOE uses this type of contract is that it is less burdensome to manage; such contracts are easier to manage with fewer DOE personnel because they are less frequently competed and have broadly written scopes of work, among other attributes.
Moreover, a 2013 study found that, on average, each NNSA M&O procurement employee was associated with about $287 million in contract spending, compared with a federal government average of $9 million per procurement employee. We made two recommendations in that report, including that DOE establish a process to analyze and apply its experience with contracting alternatives. DOE generally concurred with our recommendations. The effectiveness of DOE’s monitoring of its contracts, projects, and programs depends upon the availability of reliable enterprise-wide information on which to base oversight activities. For example, reliable enterprise-wide cost information is needed to identify the cost of activities, ensure the validity of cost estimates, and provide information to Congress to make budgetary decisions. However, meaningful cost analyses across programs, contractors, and sites are not possible because NNSA’s contractors use different methods of accounting for and tracking costs. NNSA developed a plan to improve and integrate its cost reporting structures; however, we found in our January 2017 report that this plan did not provide a useful road map for guiding NNSA’s effort. For example, NNSA did not define strategies and identify resources needed to achieve its goals, which is a leading practice for strategic planning. NNSA’s plan contained few details on the elements it must include, such as its feasibility assessment, estimated costs, expected results, and an implementation timeline. We concluded that, until a plan is in place that incorporates leading strategic planning practices, NNSA cannot be assured that its efforts will result in a cost collection tool that produces reliable enterprise-wide cost information that satisfies the information needs of Congress and program managers. We recommended that NNSA develop a plan for producing cost information that fully incorporates leading planning practices. NNSA agreed with our recommendation. 
In addition, quality data are needed for DOE to manage its risk of fraud. The Fraud Reduction and Data Analytics Act of 2015 establishes requirements aimed at improving federal agencies' controls and procedures for assessing and mitigating fraud risks through the use of data analytics. In our March 2017 report, however, we found that because DOE does not require its contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to DOE, it is not well positioned to employ data analytics as a fraud detection tool. The data were not suitable either because they were not for a complete universe of transactions that was reconcilable with amounts billed to DOE or because they were not sufficiently detailed to determine the nature of costs charged to DOE. We concluded that, without requiring contractors to maintain such data, DOE will not be well positioned to meet the requirements of the Fraud Reduction and Data Analytics Act of 2015 and manage its risk of fraud and other improper payments. We recommended that DOE require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. DOE did not concur with our recommendation. Specifically, DOE stated that the recommendation establishes agency-specific requirements for DOE contractors that are more prescriptive than current federal requirements and that its M&O contractors, not DOE, are responsible for performing data analytics and determining what data are needed to do so. We are concerned that DOE's response demonstrates that it does not fully appreciate its responsibility for overseeing contractor costs. We continue to believe that the use of data-analytic techniques by DOE employees could help mitigate some of the challenges that limit the effectiveness of DOE's approach for overseeing M&O contractor costs.
However, effectively applying data analytics is dependent upon the availability of complete and sufficiently detailed contractor data. Therefore, we continue to believe that DOE needs to implement our recommendation and require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. Program management can help ensure that a group of related projects and activities are managed in a coordinated way to obtain benefits not available from managing them individually. This approach helps federal agencies get what they need, at the right time, and at a reasonable price. However, in 2016 we found that DOE had not established a department-wide program management policy and that DOE had not established a career development program for program managers. Specifically, in an August 2016 report examining NNSA's plans to build the CMRR, we found that the agency had not clarified whether the project would satisfy the mission needs of other NNSA and DOE programs. NNSA might have been better able to clarify this project's mission needs if DOE and NNSA had been operating under a DOE-wide program management policy incorporating leading practices. DOE and NNSA officials said they recognize the importance of establishing a program management policy, but at the time DOE had not done so. We recommended that DOE establish a program management policy that addresses internal control standards and leading practices. DOE provided no comments on our recommendation. After we issued our report, the President signed the 2016 Program Management Improvement Accountability Act, which requires the development of standards, policies, and guidelines for program and project management across the federal government.
We will continue to monitor and report on the Act’s implementation as part of our biennial high risk updates, and we will also include an assessment of the effectiveness of the standards, policies, and guidelines that are to be developed. In a November 2016 report, we found that DOE and NNSA had not established training programs, such as a career development program, for program managers. Program managers are responsible for interacting with project managers to provide support and guidance on individual projects, but they also must take a broad view of program objectives and organizational culture. In contrast, DOE had established a training program for project managers, which DOE said was open to program managers. In the absence of a current DOE or NNSA training program for program managers, most of the NNSA program managers we interviewed did not have training related to program management. As a result, we concluded that NNSA may have difficulty developing and maintaining a cadre of professional, effective, and capable program managers. We recommended that DOE establish a training program for program managers. DOE provided no comments on this report. DOE has instituted project management reforms that—if fully implemented—will help ensure that future projects are not affected by the challenges that have persisted for DOE’s major legacy projects. Specifically, DOE has taken action on certain major projects, but has not consistently applied these reforms, and in particular, DOE has not applied such reforms to its largest legacy cleanup project at its Hanford Site in Washington state. As we found in a May 2015 report, DOE continues to allow construction of certain Waste Treatment and Immobilization Plant (WTP) facilities at DOE’s Hanford Site before designs are 90 percent complete. 
This contrasts with DOE's revised project management order, which now requires a facility's design to be at least 90 percent complete before cost and schedule baselines are established, along with cost and schedule estimates that meet industry best practices. The WTP is DOE's largest project, and it has faced numerous technical and management challenges that have added decades to its schedule and billions of dollars to its cost. We recommended in May 2015 that DOE (1) consider whether to limit construction on the WTP until risk mitigation strategies are developed to address known technical challenges, and (2) determine, in accordance with its quality assurance policy, the extent to which quality problems exist in facility systems that have not yet been reviewed for additional vulnerabilities. However, as of September 2016, DOE had not yet implemented our recommendations. Notably, after we issued our report, DOE announced in December 2016 that the cost estimate for one portion of the WTP—the part needed to treat a fraction of the low-activity waste—had increased to nearly $17 billion. This cost estimate does not include the costs for a majority of the WTP's waste treatment scope, including high-level waste treatment. In light of longstanding challenges with major projects such as the WTP, we believe DOE must begin to apply project management reforms to the projects that need them the most. Having the right people and resources is necessary to mitigate risks, but it is not always sufficient to ensure that risks are identified and appropriately addressed. As we have previously reported, management must foster a culture in which staff are encouraged to identify risks and use their expertise to proactively mitigate them. In July 2016, we examined DOE's effort to evaluate the environment for raising concerns without fear of reprisal.
We found, among other things, that DOE used flawed and inconsistent methodologies to evaluate the environment for raising safety and other concerns and therefore could not reliably judge its openness or ensure that appropriate action was taken in response to evaluation results. We noted that several factors may limit the use and effectiveness of mechanisms for contractor employees to raise concerns and seek whistleblower protections. We also found that DOE infrequently used its enforcement authority to hold contractors accountable for unlawful retaliation against whistleblowers, issuing just two violation notices in the past 20 years. Additionally, in 2013, DOE determined that it did not have the authority to enforce a key aspect of policies that prohibit retaliation for nuclear safety-related issues, despite having taken such enforcement actions in the past. In response to our recommendations, DOE has started the process of updating its Integrated Safety Management policies and guidance, but it is too early to tell whether these updates will address the concerns we raised in our July 2016 report.

DOE also faces challenges with addressing its environmental liabilities. In February 2017, we added the federal government's environmental liabilities to our High-Risk List. Specifically, we found that the federal government's environmental liability has been growing for the past 20 years and is likely to continue to increase, and that DOE is responsible for over 80 percent ($372 billion) of the nearly $450 billion reported environmental liability. Notably, this estimate does not reflect all of the future cleanup responsibilities that DOE may face. In addition, DOE has not consistently taken a risk-informed approach to decision-making for environmental cleanup, and DOE may therefore be missing opportunities to reduce costs while also reducing environmental risks more quickly.
Our recent work in this area has also identified opportunities where DOE may be able to save tens of billions of dollars. DOE's total reported environmental liability has generally increased over time. Since 1989, EM has spent over $164 billion to retrieve, treat, and dispose of nuclear and hazardous waste and to date has completed cleanup at 91 of 107 sites across the country (the 91 sites were generally viewed by DOE as the smallest and least contaminated sites to address). Despite billions spent on environmental cleanup, DOE's environmental liability has roughly doubled from a low of $176 billion in fiscal year 1997 to the fiscal year 2016 estimate of $372 billion. In the last 6 years alone, EM has spent $35 billion, primarily to treat and dispose of nuclear and hazardous waste and construct capital asset projects to treat the waste (see figure 3 for EM's annual spending and growing environmental liability). According to documents related to DOE's fiscal year 2016 financial statements, 50 percent of DOE's environmental liability resides at two cleanup sites: the Hanford Site in Washington State and the Savannah River Site in South Carolina. In its fiscal year 2016 financial statement, DOE attributed recent environmental liability increases to (1) inflation adjustments for the current year; (2) improved and updated estimates for the same scope of work, including changes resulting from deferral or acceleration of work; (3) revisions in technical approach or scope for cleanup activities; and (4) regulatory and legal changes. Notably, in recent annual financial reports, DOE has cited other significant causes for increases in its liability. These have included the lack of a disposal path for high-level radioactive waste—because of the termination of the Yucca Mountain repository program—and delays and scope changes for major construction projects at the Hanford and Savannah River sites.
We also reported in February 2017 that DOE's estimated liability does not include billions in expected costs. According to federal accounting standards, environmental liability estimates should include costs that are probable and reasonably estimable, meaning that costs that cannot yet be reasonably estimated should not be included in total environmental liability. Examples of costs that DOE cannot yet estimate include the following:

DOE has not yet developed a cleanup plan or cost estimate for the Nevada National Security Site, and as a result, the cost of future cleanup of this site was not included in DOE's fiscal year 2015 reported environmental liability. The nearly 1,400-square-mile site has been used for hundreds of nuclear weapons tests since 1951. These activities have resulted in more than 45 million cubic feet of radioactive waste at the site. According to DOE's financial statement, since DOE is not yet required to establish a plan to clean up the site, the costs for this work are excluded from DOE's annually reported environmental liability.

DOE's reported environmental liability includes an estimate for the cost of a permanent nuclear waste repository, but these estimates are highly uncertain and likely to increase. In March 2015, in response to the termination of the Yucca Mountain repository program, DOE proposed separate repositories for defense high-level and commercial waste. In January 2017, we reported that the cost estimate for DOE's new approach excluded the costs and time frames for key activities. As a result, the full cost of these activities is likely billions of dollars more than what is reflected in DOE's environmental liability.

In our annual report on Fragmentation, Overlap, and Duplication in the federal government, issued in May 2017, we reported that DOE may be able to save billions of dollars by reassessing the rationale for its March 2015 proposal.
In April 2017, the House of Representatives Committee on Energy and Commerce disseminated a discussion draft of legislation that could result in renewed efforts to open the Yucca Mountain repository. In addition, DOE may have insufficient controls in place to accurately account for its environmental liabilities. In January 2017, the DOE Inspector General reported a significant deficiency in internal controls related to the reconciliation of environmental liabilities. Moreover, DOE does not consistently take a risk-informed decision-making approach to its environmental cleanup mission to more efficiently use resources. As our and other organizations' reports issued over the last 2 decades have found, DOE's environmental cleanup decisions have not been risk-based, and there have been inconsistencies in the regulatory approaches followed at different sites. We and others have pointed out that DOE needs to take a nation-wide, risk-based approach to cleaning up these sites, which could reduce costs while also reducing environmental risks more quickly. In 2006, the National Research Council reported that the nation's approach to cleaning up nuclear waste—primarily carried out by DOE—was complex, inconsistent, and not systematically risk-based. For example, the National Research Council noted that the current regulatory structure for low-activity waste is based primarily on the waste's origins rather than on its actual radiological risks. The National Research Council concluded that by working with regulators, public authorities, and local citizens to implement risk-informed practices, waste cleanup efforts can be done more cost-effectively. The report also suggested that statutory changes were likely needed. In 2015, a review organized by the Consortium for Risk Evaluation with Stakeholder Participation reported that DOE was not optimally using available resources to reduce risk.
According to the report, factors such as inconsistent regulatory approaches and certain requirements in federal facility agreements caused disproportionate resources to be directed at lower-priority risks. The report called for a more systematic effort to assess and rank risks within and among sites, including through headquarters guidance to sites, and to allocate federal taxpayer monies to remedy the highest-priority risks through the most efficient means. In May 2017, we reported on DOE's efforts to treat a significant portion of the tank waste at the Hanford Site. We found that DOE chose different approaches to treat the less radioactive portion of its tank waste—which DOE refers to as "low-activity waste" (LAW)—at the Hanford and Savannah River Sites. At the Savannah River Site, DOE has grouted about 4 million gallons of LAW since 2007. DOE plans to treat a portion of the Hanford Site's LAW with vitrification, but it has not yet treated any of Hanford's LAW and faces significant unresolved technical challenges in doing so. In addition, we found that the best available information indicates that DOE's estimated costs to grout LAW at the Savannah River Site are substantially lower than its estimated costs to vitrify LAW at Hanford, and DOE may be able to save tens of billions of dollars by reconsidering its waste treatment approach for a portion of the LAW at Hanford. Moreover, according to the 21 experts who attended our meeting convened by the National Academies of Sciences, Engineering, and Medicine, both vitrification and grout could effectively treat Hanford's LAW. Experts at our meeting also stated that developing updated information on the effectiveness of treating a portion of Hanford's waste, called supplemental LAW, with other methods, such as grout, may enable DOE to consider waste treatment approaches that would accelerate DOE's tank waste treatment mission, thereby potentially reducing certain risks and lifecycle treatment costs.
We recommended that DOE (1) develop updated information on the performance of treating supplemental LAW with alternate methods, such as grout, before it selects an approach for treating supplemental LAW; and (2) have an independent entity develop updated information on the lifecycle costs of treating Hanford's supplemental LAW with alternate methods. DOE agreed with both recommendations. Since 1994, we have made at least 28 recommendations related to addressing the federal government's environmental liability and 4 recommendations to Congress to consider changes to the laws governing cleanup activities. Of these, 13 recommendations remain unimplemented. If implemented, these steps would improve the completeness and reliability of the estimated costs of DOE's future cleanup responsibilities and lead to more risk-based management of the cleanup work. We believe these recommendations are as relevant today as when we made them, if not more so.

NNSA also faces challenges implementing its nonproliferation programs under its Office of Defense Nuclear Nonproliferation (DNN). Specifically, in recently completed reviews of DNN programs, we have identified several challenges NNSA faces in how it measures performance and conducts program management of these efforts. As I testified last year, NNSA proposed in its fiscal year 2017 congressional budget request to terminate its Mixed Oxide (MOX) Fuel Fabrication Facility, which has been under construction since 2007 and on which NNSA has already spent approximately $4.6 billion for design and construction. NNSA's request stated that its MOX fuel approach for disposing of 34 tons of weapons-grade plutonium will be significantly more expensive than anticipated and will require approximately $800 million to $1 billion annually for decades. Instead, NNSA proposed to focus on a new alternative to dilute the surplus plutonium and dispose of the material in a geologic repository.
We have ongoing work examining the MOX facility and the extent to which the Waste Isolation Pilot Plant (WIPP) has sufficient capacity to dispose of this quantity of plutonium. Specifically, we are assessing the extent to which DOE's revised $17.2 billion cost estimate for completing construction of the MOX facility, and the $56 billion revised life-cycle estimate for completing the Plutonium Disposition Program using the MOX approach, meet cost-estimating best practices. In addition, we are examining the status of NNSA's development of a life-cycle cost estimate for completing the Plutonium Disposition Program using the dilute and dispose approach. Our review will also assess the extent to which DOE has sufficient disposal space and statutory capacity at WIPP to dispose of all defense transuranic waste, including the diluted plutonium resulting from the dilute and dispose approach. In June 2016, we found that the Nuclear Smuggling Detection and Deterrence (NSDD) program had developed a program plan, but that NSDD could not measure its progress toward its activities and goals because its goals were not all measurable and its performance measures were not aligned with its goals. As a result, NSDD may not be able to determine when it has fully accomplished its mission and risks continuing to deploy equipment past the point of diminishing returns. NSDD also faces challenges in performing its work that are outside of its control, such as changing conditions in partner countries from conflict or political upheaval. We recommended that NSDD develop a more detailed program plan that articulates when and how it will achieve its goals, including completing key activities such as the deployment of radiation detection equipment to partner countries and having these countries fully fund the sustainment and maintenance of this equipment. NNSA agreed with this recommendation.
In February 2017, we found that NNSA was unable to demonstrate the full results of its research and development technology for preventing nuclear proliferation. Specifically, we reported that DNN's Research and Development program does not consistently track and document projects that result in technologies being transitioned or deployed. Furthermore, we found that DNN's Research and Development project performance is difficult to interpret because the program's performance measures do not define criteria or provide context justifying how the program determined that it met its targets. This, in turn, could hinder users' ability to determine the program's progress. NNSA officials said that final project reports do not document their assessment of performance against baseline targets and that there is no common template for final project reports. We noted that documenting assessments that compare final project performance results against baseline targets for scope of work and completion date could enhance NNSA's ability to manage its programs in accordance with federal internal control standards. More consistently tracking and documenting the transitioned and deployed technologies that result from DNN's projects could also facilitate knowledge sharing within DNN and would provide a means by which to present valuable information to Congress and other decision makers about the programs' results and overall value. We recommended that NNSA consistently track and document results of DNN Research and Development projects and document assessments of final project results against baseline performance targets. NNSA agreed to take actions in response to both recommendations. Chair Fischer, Ranking Member Donnelly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or trimbled@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Nathan Anderson, Assistant Director; Allison Bawden; Natalie Block; Antoinette Capaccio; William Hoehn; Amanda Kolling; and Diane LoFaro. The following is a selection of GAO's recent work assessing the National Nuclear Security Administration's and the Department of Energy's Office of Environmental Management's management efforts: Nuclear Waste: Opportunities Exist to Reduce Risks and Costs by Evaluating Different Waste Treatment Approaches at Hanford. GAO-17-306. Washington, D.C.: May 3, 2017. National Nuclear Security Administration: Action Needed to Address Affordability of Nuclear Modernization Programs. GAO-17-341. Washington, D.C.: April 26, 2017. Department of Energy: Use of Leading Practices Could Help Manage the Risk of Fraud and Other Improper Payments. GAO-17-235. Washington, D.C.: March 30, 2017. Nuclear Nonproliferation: Better Information Needed on Results of National Nuclear Security Administration's Research and Technology Development Projects. GAO-17-210. Washington, D.C.: February 3, 2017. Nuclear Waste: Benefits and Costs Should Be Better Understood Before DOE Commits to a Separate Repository for Defense Waste. GAO-17-174. Washington, D.C.: January 31, 2017. National Nuclear Security Administration: A Plan Incorporating Leading Practices Is Needed to Guide Cost Reporting Improvement Effort. GAO-17-141. Washington, D.C.: January 19, 2017. Program Management: DOE Needs to Develop a Comprehensive Policy and Training Program. GAO-17-51. Washington, D.C.: November 21, 2016. Department of Energy: Actions Needed to Strengthen Acquisition Planning for Management and Operating Contracts. GAO-16-529. Washington, D.C.: August 9, 2016. DOE Project Management: NNSA Needs to Clarify Requirements for Its Plutonium Analysis Project at Los Alamos. GAO-16-585.
Washington, D.C.: August 9, 2016. Nuclear Waste: Waste Isolation Pilot Plant Recovery Demonstrates Cost and Schedule Requirements Needed for DOE Cleanup Operations. GAO-16-608. Washington, D.C.: August 4, 2016. Department of Energy: Whistleblower Protections Need Strengthening. GAO-16-618. Washington, D.C.: July 11, 2016. Combating Nuclear Smuggling: NNSA's Detection and Deterrence Program Is Addressing Challenges but Should Improve Its Program Plan. GAO-16-460. Washington, D.C.: June 17, 2016. Hanford Waste Treatment: DOE Needs to Evaluate Alternatives to Recently Proposed Projects and Address Technical and Management Challenges. GAO-15-354. Washington, D.C.: May 7, 2015. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

DOE's NNSA is responsible for managing the nuclear weapons stockpile and supporting nuclear nonproliferation efforts. DOE's Office of Environmental Management's mission includes decontaminating and decommissioning facilities that are contaminated from decades of nuclear weapons production. Over the last few years, GAO has reported on a wide range of challenges facing DOE and NNSA. These challenges contribute to GAO's continuing inclusion of DOE's and NNSA's management of major contracts and projects on the list of agencies and program areas that are at high risk of fraud, waste, abuse, and mismanagement, or are in need of transformation. GAO also recently added the U.S. government's environmental liabilities to this list.
This statement is based on 13 GAO reports issued from May 2015 through May 2017 and discusses (1) challenges related to the affordability of NNSA's nuclear modernization plans, (2) the status of DOE's efforts to improve its management of contracts and projects, (3) challenges in addressing DOE's environmental liabilities, and (4) challenges facing NNSA's nonproliferation programs. The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) faces challenges related to the affordability of its nuclear modernization programs. GAO found in April 2017 that these challenges were caused by a misalignment between NNSA's modernization plans and the estimated budgetary resources needed to carry out those plans. First, GAO found that NNSA's estimates of funding needed for its modernization plans sometimes exceeded the budgetary projections included in the President's planned near-term and long-term modernization budgets. Second, GAO found that the costs of some major modernization programs—such as for nuclear weapon refurbishments—may also increase and further strain future modernization budgets that currently do not anticipate these potential increases. GAO recommended in April 2017 that NNSA include an assessment of the affordability of its modernization programs in future versions of its annual plan on stockpile stewardship; NNSA neither agreed nor disagreed with that recommendation. DOE has taken several important steps that demonstrate its commitment to improving contract and project management, but challenges persist. In recent reports, GAO has noted progress as DOE has developed and implemented corrective actions to identify and address root causes of persistent project management challenges and progress in its monitoring of the effectiveness and sustainability of corrective actions. However, DOE's recent efforts do not address several areas of contract and project management where the department continues to struggle. 
GAO has made several recommendations related to these issues, many of which DOE has not yet implemented. DOE also faces challenges with addressing its environmental liabilities—the total cost of its cleanup responsibilities. In February 2017, GAO found that DOE was responsible for over 80 percent ($372 billion) of the U.S. government's estimated $450 billion environmental liability. However, this estimate does not reflect all of DOE's cleanup responsibilities. For example, in January 2017, GAO found that the cost estimate for DOE's proposal for separate defense and commercial nuclear waste repositories excluded the costs and time frames for key activities, and therefore full costs are likely to be billions of dollars more than DOE's reported environmental liabilities. To effectively address cleanup, GAO and other organizations have reported that DOE needs to take a nation-wide, risk-informed approach, which could reduce long-term costs as well as environmental risks more quickly. Since 1994, GAO has made at least 28 recommendations to address the federal government's environmental liabilities and 4 suggestions to Congress to consider changes to the laws governing cleanup activities. Of these, 13 recommendations remain unimplemented. Finally, NNSA faces challenges in implementing its nonproliferation programs. For example, in June 2016, GAO found that NNSA's Nuclear Smuggling Detection and Deterrence program had developed a program plan, but NNSA could not measure progress because not all of the program's goals were measurable, and performance measures were not aligned with the goals. As a result, NNSA may not be able to determine when the program has fully achieved its mission. GAO has made several recommendations related to NNSA's nonproliferation programs, some of which NNSA has yet to implement. GAO is not making any new recommendations in this statement. 
GAO has suggested that Congress consider taking certain actions and that DOE continue to act on the numerous recommendations made to address these challenges. GAO will continue to monitor DOE's implementation of these recommendations. |
NATO was formed in 1949 to promote stability in the North Atlantic area (see app. I for NATO's North Atlantic Treaty) by uniting member nations' efforts for collective defense and the preservation of peace and security. After expanding three times over the years, NATO currently has 16 members. With the collapse of the Soviet Union, the dissolution of the Warsaw Pact, and German reunification, NATO redefined its strategic concept at its Rome summit in 1991 to reflect the post-Cold War geopolitical landscape. The new strategic concept articulated a new conventional military force structure for NATO, placed greater emphasis on crisis management and conflict prevention, reduced reliance on nuclear forces, and committed the Alliance to pursuing greater cooperation with its former adversaries to the east. The North Atlantic Cooperation Council (NACC), which includes all 16 NATO nations, all former members of the Warsaw Pact, Albania, and four observer states, held its first formal meeting in December 1991, the same month that the Soviet Union dissolved. NACC holds at least one regular meeting per year, with consultations on such matters as political and security-related issues, defense planning questions, key aspects of strategy, force and command structures, and democratic concepts of civilian-military relations. NACC work plans and discussions have broadened to include such topics as nuclear disarmament, crisis management, cooperation in peacekeeping, and progress on the PFP program. NATO initiated PFP in January 1994 to (1) expand and intensify political and military cooperation in Europe, (2) extend European stability eastward, (3) diminish threats to peace, and (4) build better relationships with former Communist countries through practical cooperation and commitment to democratic principles.
Participation in the program does not require an intention to become a NATO member, nor does it guarantee future membership in the Alliance, although, according to the Secretary of Defense, PFP is "the pathway to NATO membership for those partners that wish to join the Alliance." Currently, 27 countries have joined PFP, and many have already agreed to Individual Partnership Programs, which spell out specific cooperative activity programs to take place between NATO and the partner. As of August 1995, NATO members and PFP partners had held 24 joint exercises providing practical military cooperation. NATO civil budget funding pays for NATO's own administrative, security, and communications costs related to NACC and PFP, and for the construction of new facilities for partner countries at NATO civilian and military headquarters. Table 1 shows NATO civil budget funding for outreach activities from 1991 through 1996. In September 1995, NATO released an internal study of the rationale behind enlargement and the way it should occur. In this study, NATO identified the following goals: (1) enhancing stability and security in the Euro-Atlantic area, (2) eliminating the old Cold War barriers without creating new ones between the East and West, (3) encouraging democratic and economic reforms in aspiring NATO members, (4) emphasizing common defense, extending its benefits, and increasing transparency in defense planning and military budgets, (5) reinforcing the tendency toward integration and cooperation in Europe, (6) strengthening the Alliance's ability to contribute to international security through peacekeeping activities, and (7) strengthening the trans-Atlantic partnership. The study defined the standards that aspiring members must be committed to meet before membership is offered. Militarily, these nations must commit to meeting minimum NATO standards of interoperability.
Politically, aspiring NATO members are expected to establish civilian control over their militaries, a new concept for most of the former Warsaw Pact states. More subjectively, applicant nations must demonstrate a commitment to democratic values, as embodied in the NATO treaty. Aspiring members must also strive to peacefully resolve internal ethnic disputes or territorial disputes with neighbors. Finally, new members must agree not to prevent other aspiring nations from joining NATO. Although the study articulated enlargement goals and new member entrance requirements, it did not assess any individual nation's progress toward NATO membership. In fact, NATO members have not yet established a timetable for enlargement or decided who will be invited to join. The enlargement study also discussed how an aspiring member will join NATO. According to the study and to NATO and U.S. Mission to NATO officials, the steps to be taken will be as follows. The process would begin with an informal invitation from NATO's North Atlantic Council to the prospective member to enter into accession negotiations with NATO. Before this invitation is given, there must be unanimous consent among all of the current NATO members. If even one disagrees, the invitation cannot be offered. Once an invitation is made, the prospective member must make a formal commitment to join. NATO and the prospective member would negotiate a protocol of accession that sets forth in detail what each party expects of the other and any special or unique circumstances pertaining to the prospective member's future membership. An example of a special circumstance could be a provision that no NATO nuclear weapons may be deployed on the country's territory during peacetime. The North Atlantic Council must approve the accession protocol; if it does not, the protocol must be amended until it meets Council approval.
If the protocol fails to win approval, the process can be stopped and the prospective member's bid to join NATO is effectively ended. Once the protocol is approved, it is signed by the North Atlantic Council and the prospective member. The accession protocol must then be ratified by the governments of all the current members and the prospective member, after which it enters into force. In the United States, for example, this will require a two-thirds vote in the Senate to give its advice and consent to ratification. If the protocol is ratified by all members and the aspiring member, a formal invitation is made to the prospective member to accede to the North Atlantic (or Washington) Treaty. Once the aspiring member signs the treaty and deposits it with the U.S. government, it is considered a full member of NATO. If the accession protocol is not ratified by all 16 current members, the prospective member may not sign the Washington Treaty. At this point, the accession protocol could be amended again and resubmitted, or the process could be terminated. In addition to its contribution through NATO common budgets, the United States provided about $53 million in fiscal year 1995 to PFP member countries through five bilateral programs that help to enhance their military equipment and operations. As table 2 shows, the fiscal year 1996 amount for these programs increased to about $125 million. This increase will largely support cooperative activities with PFP member countries. All of these programs, except PFP assistance, predate discussion of NATO enlargement. None were specifically designed to enhance the prospects of NATO membership. However, DOD believes this assistance plays an important role in enhancing prospective new members' capabilities, and as indicated, the Secretary of Defense has described PFP as the pathway to NATO membership. The United States has provided $130 million in bilateral support to the PFP program over 2 years.
This funding assists PFP member countries to participate in NATO’s PFP exercises and helps them to obtain equipment through the Foreign Military Financing program. In addition, it is hoped that the program will foster stability in Eastern Europe through the development of civilian-controlled militaries and exposure to U.S. and other NATO members’ military policies and procedures. Further, DOD believes that U.S. bilateral funding of PFP lends credibility to the program and will encourage other countries to put forth a portion of their small budgets for PFP. Fiscal year 1995 bilateral funding for PFP assistance included $19.25 million for exercises and $10.75 million for interoperability programs. The $100 million of PFP assistance (under the Warsaw Initiative) provided to member countries in fiscal year 1996 allocates $60 million to the State Department to fund Foreign Military Financing in support of the Warsaw Initiative and $40 million to DOD to support individual partner participation in joint exercises and programs to enhance NATO-PFP interoperability. Poland is expected to be the recipient of the largest amount of aid in fiscal year 1996, $25 million. The Czech Republic, Hungary, Romania, and Ukraine are each expected to receive $10 million in U.S. bilateral PFP assistance, while Russia is expected to be offered $7 million in PFP assistance, including $5 million to help Russian troops participate in PFP exercises. Seventeen other nations may also receive some U.S. bilateral PFP assistance. Table 3 shows fiscal year 1996 bilateral PFP funding by country. The fiscal year 1996 Foreign Operations appropriation includes Foreign Military Financing funding, $60 million of which the State Department intends to use to support the Warsaw Initiative. Foreign Military Financing is a grant and loan program that provides financing for the acquisition of military articles, services, and training through the Foreign Military Sales system. 
This funding is provided through the State Department to support transfers of equipment to enhance the interoperability of partner forces with NATO, such as tactical radios, night vision equipment, and command, control, and communications upgrades. In addition, this funding is intended to support English language instruction and training to familiarize partner defense officials with U.S. and NATO defense structure, doctrine, and operations. Of the $40 million of PFP funds provided through DOD in fiscal year 1996, some is intended to pay for PFP member nations’ incremental costs incurred as a result of participation in training exercises between U.S. forces and PFP partner nations. These funds are intended to be a temporary measure to encourage partner governments to allocate a share of their national budgets for participation in PFP. For example, according to U.S. officials, Poland, Hungary, and the Czech Republic included PFP funding in their 1995 budgets. Through August 1995, there were 24 PFP exercises hosted at various locations in Europe and the United States. The first of the exercises, Cooperative Bridge, was held in Poland in mid-September 1994. Cooperative Nugget, held August 11 through 26, 1995, at the Joint Readiness Training Center at Fort Polk, Louisiana, was the first PFP exercise hosted by the United States. In addition to exercises, the $40 million provided through DOD will also fund (1) studies in support of the Regional Airspace Initiative, (2) the Defense Resource Management Study, and (3) PFP Information Management System. The first two of these initiatives were originally funded from other DOD accounts, but now are consolidated into the PFP assistance program. At this time, DOD officials are uncertain as to what portion of the $40 million will be allocated to each of these programs due to limitations prohibiting these funds from being used to purchase equipment that will be transferred to a foreign country. 
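The bilateral PFP funding figures above can be reconciled with a quick tally (a minimal sketch in Python; the dollar amounts are the line items reported above, while the variable names are our own):

```python
# Tally U.S. bilateral PFP (Warsaw Initiative) assistance, in millions of
# dollars, using the line items reported above.
fy1995 = {
    "exercises": 19.25,
    "interoperability programs": 10.75,
}
fy1996 = {
    "Foreign Military Financing (State)": 60.0,
    "exercises and interoperability (DOD)": 40.0,
}

total_1995 = sum(fy1995.values())          # $30 million
total_1996 = sum(fy1996.values())          # $100 million
two_year_total = total_1995 + total_1996   # $130 million

print(f"FY1995: ${total_1995:g}M  FY1996: ${total_1996:g}M  total: ${two_year_total:g}M")
```

The $130 million result matches the two-year bilateral PFP support figure cited earlier in the report.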
The Regional Airspace Initiative is intended to modernize civilian and military air traffic control, air sovereignty, and airspace management of Central and Eastern European nations. According to DOD officials, the initiative supports the U.S. policy that advocates peacetime control of national airspace by civil authorities, civil and military cooperation, and information exchange throughout the region to build regional confidence and security. Regional Airspace Initiative studies provide Central and Eastern European nations, as well as the United States, with an architecture for system modernization which will make air travel safer for all nations flying in the region. The Defense Resource Management Study is an Office of the Secretary of Defense-sponsored and -managed initiative to help emerging Eastern European democracies develop defense planning, programming, and budgeting systems compatible with those of NATO countries. The Office of the Secretary of Defense makes available teams of analysts to advise their host country counterparts on compiling force capability and cost data and analyzing alternative force structures. Fiscal year 1995 funding supported such studies in Bulgaria, Romania, Hungary, and Albania. The PFP Information Management System is intended to develop basic communications capabilities with cooperating nations and international organizations. It is intended to assist PFP members to become more interoperable with NATO. A processing center at Caserne Daumerie, Belgium, will provide capabilities such as real-time dial-up voice communications, data conferencing, imagery storage and exchange, and electronic mail. According to DOD, partner countries must pay a share of the costs to participate in the PFP Information Management System. In fiscal year 1995, $4 million of Office of the Secretary of Defense/Joint Staff funds supported DOD costs associated with the PFP Information Management System. 
The Joint Contact Team Program is a bilateral effort, conducted by the U.S. European Command, in coordination with U.S. embassy and host nation military personnel, to ensure constructive military activities and to model successful civil-military relations in a democracy. Rather than conducting formal training or supplying equipment, military liaison teams exchange ideas, share concepts, and demonstrate operational methods to host nation military personnel. The Joint Contact Team Program is executed by the Joint Chiefs of Staff, in coordination with the Defense Security Assistance Agency. Funding for the Joint Contact Team Program in Europe increased from $6 million in fiscal year 1993 to $10 million in fiscal year 1994. In fiscal year 1995, $10 million was provided through the Foreign Operations appropriation. For fiscal year 1996, $15 million was allocated by DOD to support the Joint Contact Team Program as a Traditional Commander-in-Chief Activity in the European theater (see app. II for a country-by-country breakdown of fiscal year 1995 Joint Contact Team Program funding). Established in the late 1950s, the International Military Education and Training program provides military education and training on a grant basis to allied and friendly nations’ militaries for such things as (1) increasing their exposure to the proper role of the military in a democratic society, including human rights issues, and to U.S. professional military education and (2) helping to develop the capability to teach English. In fiscal year 1991, DOD began implementation of the congressionally directed Expanded International Military Education and Training program, which addresses management of military establishments and budgets, civilian control of military, and military justice and codes of conduct. It is available to civilian and military officials, including nondefense agency civilians. 
Also in 1991, Central and Eastern European countries began to receive International Military Education and Training funding. Program funds are largely used to transport, train, and provide a supplemental living allowance for foreign students at military training facilities in the United States, or to send training instructors in-country. Funds have also been used to purchase English language laboratory equipment at in-country facilities. The International Military Education and Training program is funded through the Foreign Operations appropriation. Recipient nations are selected by the State Department with input from the Joint Chiefs of Staff and DOD. DOD implements the program through the Defense Security Assistance Agency. Total worldwide International Military Education and Training program funding in fiscal year 1995 was $26.35 million. Of this amount, about $6 million went to PFP member countries. The fiscal year 1996 appropriation for the program totals $39 million, of which $10.2 million is designated for PFP member countries (see app. III for a country-by-country breakdown of funding for fiscal years 1995 and 1996). According to U.S. European Command officials, some of the fiscal year 1996 International Military Education and Training money allocated to the U.S. European Command is expected to be used for the purchase of language laboratory equipment in Europe. Excess Defense Articles are those items owned by the U.S. government that are in excess of approved retention levels. These items may be transferred to a foreign country through the Foreign Military Sales program or by grant transfer. The recipient nation is generally responsible for the cost of transporting the items, upgrading them to meet their needs, maintaining the articles, and disposing of them when they have outlived their usefulness. 
The fiscal year 1996 Foreign Operations Appropriations Act included authority for DOD funds to be used to pay packing, crating, handling, and transportation costs for nonlethal excess defense articles for PFP countries. In fiscal year 1995, $6.63 million worth of excess items was authorized for transfer to PFP countries (see app. IV for a country-by-country breakdown of these transfers). The types of items transferred included surface vehicles, aircraft, and communications equipment. The total value of fiscal year 1996 transfers will not be available from DOD until fiscal year 1997. If NATO increases its membership, it will likely have to provide an undetermined amount of common funding to help the new member nations. Commonly funded programs receive money from one of three NATO common budgets, funded by NATO member contributions. The largest of these is the NATO Security Investment Program, formerly known as the Infrastructure Program. The United States provides 23.3 percent of NATO infrastructure project funding, the highest share of any member state. NATO funding for infrastructure projects in new member nations would be limited to facilities and command, control, and communications systems for those forces made available and accepted for NATO use. According to U.S. officials, funding would also be limited to providing only the infrastructure that (1) is required to meet NATO interoperability standards, (2) qualifies under the strict eligibility rules for common funding, and (3) is afforded a high priority by NATO military authorities. According to U.S. officials, NATO funding for new members would probably be gradual, would vary considerably, and would probably not exceed a total of $50 million for any individual nation during the first 3 to 5 years of their membership in NATO. Table 4 shows an illustrative example of the costs of required systems that may be funded by NATO. These are only some of the many potential costs. 
NATO may also consider funding the construction of fuel pipeline extensions, reinforcement and mobilization facilities, ammunition and fuel bunkers, port handling facilities, transportation infrastructure (such as rail and road systems), and facilities for any forward deployed forces in new members’ territory, provided these projects are afforded a high priority by NATO authorities. Ultimately, NATO will need to determine the systems and facilities to be provided to new members. However, neither NATO nor the United States knows what the total costs of enlargement will be to NATO or individual members, both current and new. NATO will make a case-by-case analysis of its military needs and the requirements of new members as they join. NATO’s commonly funded infrastructure program is capped at an approximately $800-million annual ceiling. This means that common funding will be limited for new members, unless NATO removes the ceiling or current members contribute more. According to officials at the U.S. Mission to NATO, most NATO members have generally reduced defense budgets in recent years, and it is unlikely that they would make larger contributions to the infrastructure fund in the near term. However, according to these officials, most of NATO’s commonly funded projects will be completed by the end of 1997. This could allow common funds to be directed to projects in new member states if the budget remained at $800 million. In commenting on the draft of this report, DOD indicated that the backlog of NATO commonly funded projects will continue past the planning period. We were unable to verify which of these scenarios is accurate. Many of the costs resulting from NATO enlargement would be expected to be borne by the new members themselves. The total potential costs that could be incurred by each new NATO member to upgrade its military capabilities cannot be fully determined at this time because NATO has yet to define country-specific military requirements. 
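The scale of the common-funding constraint can be illustrated with the figures above (an illustrative calculation: the approximately $800 million annual ceiling, the 23.3-percent U.S. share, and the $50 million per-member expectation are reported figures; the implied U.S. annual contribution is our own derived arithmetic, not a figure from the report):

```python
# Illustrative arithmetic on NATO Security Investment Program figures cited
# above. The ceiling and U.S. cost share are from the report; the implied
# U.S. annual contribution is derived, not a reported number.
annual_ceiling_m = 800.0   # approximate annual program ceiling, $ millions
us_share_pct = 23.3        # U.S. share of infrastructure project funding

us_annual_contribution_m = annual_ceiling_m * us_share_pct / 100.0
per_member_cap_m = 50.0    # reported likely maximum per new member, first 3-5 years

print(f"Implied U.S. annual contribution: about ${us_annual_contribution_m:.0f} million")
print(f"Likely maximum per new member over 3-5 years: ${per_member_cap_m:g} million")
```

Under these figures, the roughly $50 million expected per new member is small relative to the program’s annual ceiling, consistent with the report’s point that common funding for new members would be gradual and limited.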
New member countries may have to spend millions of dollars teaching English language skills, developing tactical communications systems (other than those funded by NATO), and learning NATO military doctrine. In addition, some nations may need to change their force structure or purchase new equipment to be compatible with NATO forces. Interoperability with NATO is a specific goal of PFP and its joint exercises. Interoperability is gaining increased importance under NATO’s Joint Combined Task Force concept, which envisions NATO allies operating with non-NATO nations in military operations using NATO forces and command and control assets. Most of the former Warsaw Pact nations may have to change from a divisional structure to a brigade-based structure to make their ground forces more compatible with NATO forces. For example, according to Polish and U.S. officials, Poland is reducing the level of its armed forces and transitioning to a brigade-based structure for its army. In addition, Poland is forming an airmobile brigade that will have a rapid reaction capability, a capability called for in NATO’s new strategic concept. Poland intends for this brigade to be as interoperable with similar NATO forces as possible, whether Poland is a member of NATO or not. New members will also be expected to bear the costs of participating in NATO, such as maintaining a mission at NATO headquarters and contributing to NATO’s three commonly funded budgets. Like current NATO members, new member states’ participation in NATO commonly funded budgets will be on a cost-share basis negotiated with NATO. According to U.S. officials, most of the aspiring NATO members are facing financial constraints and can realistically expect to do very little on their own in the next several years to make their military systems compatible with NATO’s systems. For example, according to U.S. officials, the Czech defense budget is only about $1 billion. 
The most that may be expected of these nations is gradual movement toward interoperability. For example, the Czechs have a 10-year modernization plan to upgrade their equipment and become interoperable with NATO forces. In commenting on the draft of this report, the State Department emphasized the U.S. leadership role in support of NATO’s enlargement process and the link between PFP membership and a partner state’s readiness for potential NATO membership. We have made changes reflecting these points in the report. State Department officials also indicated that estimates of the amount NATO may have to pay to support new members are “extremely soft, unsubstantiated numbers,” and characterized them as pure speculation. The figures were provided to us by officials at the U.S. mission to NATO and represent the best available information at this time. DOD indicated that our description of the purpose of DOD funding support for the Warsaw Initiative needed to be changed and we have modified the report in response to this concern. Specifically, in regard to the Regional Airspace Program, DOD stated that the intent of the Regional Airspace Program is to provide information to U.S. consumers. However, DOD previously provided documents stating that the purpose of the program is “for modernizing . . . airspace management for nations of the Central and Eastern European region.” DOD also indicated that our characterization of the Security Investment Program funding is incorrect. DOD contends that the “backlog of NATO commonly funded projects will continue past the planning period.” Officials at DOD and at the U.S. Mission to NATO provided conflicting information. We modified our report to clarify that the information as presented was provided by officials at the U.S. Mission to NATO. DOD and the Department of State also provided technical corrections that have been incorporated in the report where appropriate. 
State and DOD comments are presented in their entirety in appendixes V and VI, respectively. To develop the information in this report, we interviewed officials and reviewed documents at the U.S. Mission to NATO, the U.S. European Command, the Defense Intelligence Agency, DOD, and State. We also conducted field work in Prague and Warsaw, where we interviewed and obtained information from U.S. embassy officials and Czech and Polish officials from the Ministries of Defense and Foreign Affairs. We performed our work from April 1995 to March 1996 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution until 15 days after its issue date. At that time, copies of the report will be sent to other appropriate congressional committees and the Secretaries of Defense and State. We will also make copies available to other parties upon request. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report were F. James Shafer, David J. Black, Charnel F. Harlow, and Michelle F. Kidd. The Parties to this Treaty reaffirm their faith in the purposes and principles of the Charter of the United Nations and their desire to live in peace with all peoples and all governments. They are determined to safeguard the freedom, common heritage and civilization of their peoples, founded on the principles of democracy, individual liberty and the rule of law. They seek to promote stability and well-being in the North Atlantic area. They are resolved to unite their efforts for collective defense and for the preservation of peace and security. 
The Parties undertake, as set forth in the Charter of the United Nations, to settle any international disputes in which they may be involved by peaceful means in such a manner that international peace and security, and justice, are not endangered, and to refrain in their international relations from the threat or use of force in any manner inconsistent with the purposes of the United Nations. The Parties will contribute toward the further development of peaceful and friendly international relations by strengthening their free institutions, by bringing about a better understanding of the principles upon which these institutions are founded, and by promoting conditions of stability and well-being. They will seek to eliminate conflict in their international economic policies and will encourage economic collaboration between any or all of them. In order more effectively to achieve the objectives of this Treaty, the Parties, separately and jointly, by means of continuous and effective self-help and mutual aid, will maintain and develop their individual and collective capacity to resist armed attack. The Parties will consult together whenever, in the opinion of any of them, the territorial integrity, political independence or security of any of the Parties is threatened. The Parties agree that an armed attack against one or more of them in Europe or North America shall be considered an attack against them all; and consequently they agree that, if such an armed attack occurs, each of them, in exercise of the right of individual or collective self-defence recognized by Article 51 of the Charter of the United Nations, will assist the Party or Parties so attacked by taking forthwith, individually and in concert with the other Parties, such action as it deems necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area. 
Any such armed attack and all measures taken as a result thereof shall immediately be reported to the Security Council. Such measures shall be terminated when the Security Council has taken the measures necessary to restore and maintain international peace and security. For the purposes of Article 5, an armed attack on one or more of the Parties is deemed to include an armed attack: on the territory of any of the Parties in Europe or North America, on the Algerian Departments of France, on the territory of Turkey or on the islands under the jurisdiction of any of the Parties in the North Atlantic area north of the Tropic of Cancer; on the forces, vessels, or aircraft of any of the Parties, when in or over these territories or any other area in Europe in which occupation forces of any of the Parties were stationed on the date when the Treaty entered into force or the Mediterranean Sea or the North Atlantic area north of the Tropic of Cancer. The Treaty does not affect, and shall not be interpreted as affecting, in any way the rights and obligations under the Charter of the Parties which are members of the United Nations, or the primary responsibility of the Security Council for the maintenance of international peace and security. Each Party declares that none of the international engagements now in force between it and any other of the Parties or any third state is in conflict with the provisions of this Treaty, and undertakes not to enter into any international engagement in conflict with this Treaty. The Parties hereby establish a council, on which each of them shall be represented, to consider matters concerning the implementation of this Treaty. The council shall be so organized as to be able to meet promptly at any time. The council shall set up such subsidiary bodies as may be necessary; in particular it shall establish immediately a defense committee which shall recommend measures for the implementation of Articles 3 and 5. 
The Parties may, by unanimous agreement, invite any other European state in a position to further the principles of this Treaty and to contribute to the security of the North Atlantic area to accede to this Treaty. Any state so invited may become a Party to the Treaty by depositing its instrument of accession with the Government of the United States of America. The Government of the United States of America will inform each of the Parties of the deposit of each such instrument of accession. This Treaty shall be ratified and its provisions carried out by the Parties in accordance with their respective constitutional processes. The instruments of ratification shall be deposited as soon as possible with the Government of the United States of America, which will notify all the other signatories of each deposit. The Treaty shall enter into force between the states which have ratified it as soon as the ratifications of the majority of the signatories, including the ratifications of Belgium, Canada, France, Luxembourg, the Netherlands, the United Kingdom, and the United States, have been deposited and shall come into effect with respect to other states on the date of the deposit of their ratifications. After the Treaty has been in force for 10 years, or at any time thereafter, the Parties shall, if any of them so requests, consult together for the purpose of reviewing the Treaty, having regard for the factors then affecting peace and security in the North Atlantic area, including the development of universal as well as regional arrangements under the Charter of the United Nations for the maintenance of international peace and security. After the Treaty has been in force for 20 years, any Party may cease to be a Party 1 year after its notice of denunciation has been given to the Government of the United States of America, which shall inform the Governments of the other Parties of the deposit of each such notice of denunciation. 
This Treaty, of which the English and French texts are equally authentic, shall be deposited in the archives of the Government of the United States of America. Duly certified copies thereof will be transmitted by that Government to the Governments of the other signatories. Pursuant to a congressional request, GAO reviewed the North Atlantic Treaty Organization's (NATO) future enlargement and plans to include the newly democratic states of the former communist bloc, focusing on: (1) actions NATO plans to take to enlarge itself; (2) U.S. bilateral assistance programs that enhance the military operations and capabilities of aspiring NATO members; and (3) the potential costs of enlargement to NATO and the new members. 
GAO found that: (1) in accordance with its 1991 strategic concept, NATO has initiated two programs designed to reach out to its former adversaries to the east, the North Atlantic Cooperation Council (NACC) and the Partnership for Peace (PFP) program; (2) in September 1995, NATO released an internal study examining the rationale for enlarging NATO and how it might occur; (3) NATO members have not yet established a timetable for enlargement or decided who will be invited to join; (4) the United States has five bilateral assistance programs that help to improve the operational capabilities of potential NATO members and other countries of Central and Eastern Europe and the Newly Independent States, these programs are bilateral PFP assistance (the Warsaw Initiative), Foreign Military Financing, the International Military Education and Training program, the Joint Contact Team Program, and Excess Defense Articles transfers; (5) all but the bilateral PFP assistance predate discussion of NATO's future enlargement; (6) in fiscal year (FY) 1995, the United States provided about $54 million in bilateral assistance to PFP member states through the five bilateral assistance programs and, in FY 1996, the United States will provide about $125 million; (7) this increase in assistance largely supports PFP bilateral assistance for cooperative activities with these nations and, of the total $179 million, about $130 million (or 73 percent) represents support for the PFP program; (8) neither NATO nor the United States knows what the total costs of enlargement will be to NATO or individual members, both current and new; increased membership will place new financial burdens on NATO's commonly funded infrastructure programs and on the new members themselves; (9) many of the costs of enlargement would be expected to be borne by the new members, some of whom may lack the ability to fund the changes necessary for their militaries to become interoperable with NATO forces; (10) the cost that 
each new member may incur cannot be fully determined because NATO has not yet defined country-specific military requirements; and (11) U.S. officials anticipate that these nations may require bilateral or multilateral financial assistance from the United States and other NATO members.
DOD has increasingly emphasized joint military operations where, to the extent possible, service components are closely aligned and employed as a single joint force. To function effectively as a joint force, DOD has come to recognize the vital role of achieving information superiority over its adversaries by having better access to, and greater ability to share, information across the battlefield. In the late 1990s, the department began to articulate a vision for network-centric (or “net-centric”) warfare in which networking military forces improves information sharing and collaboration, which leads to enhanced situational awareness. Enhanced situational awareness enables more rapid, effective decisionmaking, which in turn enables improved efficiency and speed of execution and results in dramatically increased combat power and mission effectiveness. A high degree of interoperability is required to achieve battlefield information superiority. DOD defines interoperability as the ability of systems, units, or forces to exchange data, information, materiel, and services to enable them to operate effectively together. A lack of interoperability can make it difficult to hit time-critical targets and distinguish “friend” from “foe.” Figure 1 shows a scenario in which a sea-based system and a land-based system are tracking aircraft and are unable to integrate their views of a battlefield. This lack of interoperability can delay U.S. military response or contribute to a lethal mistake involving U.S. personnel and equipment. DOD has recognized that interoperable systems are critical to conducting joint military operations and that patching systems after the fact to improve communications is inefficient, and the department has established policies to promote systems interoperability. However, GAO and DOD’s Inspector General have reported in the past that these efforts have not been very effective. 
For example, in the first of a series of reports beginning in 2002, DOD’s Inspector General found that policies governing systems interoperability were inconsistent and that without consistent guidance the department was at risk of developing systems that lack the ability to fully interoperate. In 2003, we found that DOD’s process for certifying systems interoperability did not work effectively for ground-surface-based intelligence processing systems. In addition, DOD officials have said that added emphasis on joint operations and reliance on information technology creates an increasing requirement for more systems to exchange information, which in turn makes achieving interoperability among systems increasingly complex. DOD views the GIG as the cornerstone of information superiority, a key enabler of net-centric warfare, and a pillar of defense transformation. DOD defines the GIG as the globally interconnected, end-to-end set of information capabilities, associated processes, and personnel for collecting, processing, storing, disseminating, and managing information. The GIG’s many systems are expected to make up a secure, reliable network to enable users to access and share information at virtually any location and at any time. Communications satellites, next-generation radios, and a military installations-based network with significantly expanded bandwidth will pave the way for a new paradigm in which DOD expects to achieve information superiority over adversaries, much the same way as the Internet has transformed industry and society on a global scale. Rather than striving for interoperability through efforts to establish direct information exchanges between individual systems, the focus of the new paradigm will be to ensure that all systems can connect to the network based on common standards and protocols. 
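The scaling argument behind this shift from pairwise interfaces to a common network can be sketched with simple combinatorics (our own illustration, not drawn from the report): directly interconnecting n systems requires on the order of n(n-1)/2 distinct interfaces, while a common-standards network requires only one connection per system.

```python
# Illustrative comparison (not from the report): interfaces needed to
# interconnect n systems pairwise versus via a shared network.
def pairwise_interfaces(n: int) -> int:
    """Every pair of systems needs its own negotiated point-to-point interface."""
    return n * (n - 1) // 2

def network_connections(n: int) -> int:
    """Each system needs only one standards-based connection to the network."""
    return n

for n in (10, 100, 1000):
    print(f"{n} systems: {pairwise_interfaces(n)} pairwise interfaces vs. "
          f"{network_connections(n)} network connections")
```

The quadratic growth of pairwise interfaces is one way to see why DOD regards patching systems together after the fact as inefficient compared with connecting every system to a common network.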
Figure 2 shows a general depiction of how DOD enables data exchanges in systems that lack the necessary connections and how DOD expects the GIG to break through such limitations. DOD has adopted a two-pronged approach to realizing the GIG: (1) invest in a set of new systems and capabilities to build a core infrastructure for the eventual GIG network (the five major acquisitions related to the GIG's core network are listed in app. III) and (2) populate the network with weapon and information systems that are able to connect when the core network infrastructure becomes available.

The effort to make the GIG a reality represents a different, inherently joint type of development challenge that requires a high degree of coordination and cooperation, but DOD is using a management approach that is not optimized for this type of challenge. Responsibility for developing and implementing the GIG resides with numerous entities, with no one entity clearly in charge or accountable for investment decisions. Because the GIG will comprise a system of interdependent systems, it needs clearly identified leadership that has the authority to enforce decisions that cut across organizational lines. Without a management approach optimized to enforce investment decisions across the department, DOD is at risk of continuing to develop and acquire systems in a stovepiped and uncoordinated manner and of not knowing whether the GIG is being developed within cost and schedule, whether risks are being adequately mitigated, and whether the GIG will provide a worthwhile return on DOD's investment. Consequently, interoperability problems could continue to hamper DOD in fielding a joint, net-centric force.

Development of the GIG is essentially a shared responsibility in DOD, with no single entity both equipped with authority to make investment decisions and held accountable for results.
For example, as laid out in policy directives, DOD's Chief Information Officer has overall responsibility for leadership and direction of the GIG. This includes developing, maintaining, and enforcing compliance with the GIG architecture; advising DOD leadership on GIG requirements; providing enterprisewide oversight of the development, integration, and implementation of the GIG; monitoring and evaluating the performance of information technology and national security system programs; and advising the Secretary of Defense and the heads of DOD components on whether to continue, modify, or terminate such programs. However, the Office of the Chief Information Officer generally has less influence on investment and program decisions than the services and defense agencies, which determine investment priorities and manage program development efforts. Consequently, the services and defense agencies have relative freedom to invest or not invest in the types of joint, net-centric systems that are consistent with GIG objectives. The end result of this shared responsibility is that neither the Chief Information Officer nor the military services and defense agencies can be held fully accountable for the department's success or failure in developing the GIG.

More broadly, another result of this environment of shared responsibility is that the various offices and programs that are managing initiatives related to the GIG do so in a disparate manner. For example, a 2002 DOD study found that there was little unity of effort among the 80 separate initiatives and actions under way associated with joint command and control.
The next year, DOD’s Defense Science Board reported that joint warfighting needs—such as joint battle management and joint intelligence, surveillance, and reconnaissance—are “neglected or spread in an uncoordinated fashion across multiple service and defense agency programs.” In 2004, the DOD Inspector General found that DOD lacked a strategy to integrate its net-centric initiatives, including clearly defined net-centric goals and organizational roles and responsibilities. In responding to this study, DOD’s Deputy Chief Information Officer (CIO) indicated that improvements could be made in the department’s guidance and approach to achieving net-centric goals, but that elements of a strategic plan have been or are being developed. However, according to the study, management comments from other DOD entities “clearly illustrate that DoD components needed leadership and strategic guidance and were unaware that the… had the lead for network-centric concepts.” The study also found that there was a lack of common understanding across DOD of what constitutes net-centric warfare—of which the GIG is a key enabler. Officials we interviewed in the Office of the Chief Information Officer stated that there is also a lack of common understanding throughout DOD about what is included in the GIG. DOD’s management approach to realizing joint, interoperable capabilities puts DOD at risk of duplicated efforts and suboptimal investment outcomes for command, control, and communications systems. Almost 20 years ago, we identified DOD’s decentralized management structure and the absence of an effective central enforcement authority for joint interoperability as two causes for joint command, control, and communications interoperability problems experienced in past military operations. 
We concluded that solving the interoperability problem would require "a great deal" of cooperation among the services and a willingness among them to pursue interoperability even when it conflicts with their traditional practices. In 1993, we found DOD had not made significant progress in improving the situation. We recommended that DOD establish a joint program office with directive authority and funding controls for acquiring command, control, and communications systems and that DOD consolidate responsibility for interoperability in U.S. Atlantic Command (now U.S. Joint Forces Command). DOD responded that our recommendations would unnecessarily complicate DOD management and stated that planned and recently implemented policy, procedural, and organizational changes intended to address the problem needed time to take effect.

In recent years, however, DOD has recognized that its approach to developing and fielding command, control, and communications systems was somewhat disjointed. In an effort to improve the situation, DOD tasked Joint Forces Command in 2003 to lead the development of advanced, integrated joint battle management command and control (JBMC2) capabilities departmentwide. While Joint Forces Command was given responsibilities to lead this effort, it does not control the resources for materiel solutions, and the command may not have sufficient influence over the services' resource decisions to ensure the assessment framework it has developed for improving JBMC2 capabilities will be executed effectively. The framework for specific mission areas within JBMC2 will begin to be implemented in 2006, but formal agreements involving resourcing and level of service participation in these assessments have not yet been worked out.

In addition, in 2004, the Joint Staff initiated the Net-Centric Operating Environment project in part to improve coordination of the GIG core network systems currently under development.
The Joint Staff has proposed options to establish a stronger joint management structure for these systems, such as placing them under a single acquisition authority, and this analysis is being considered as part of DOD's Quadrennial Defense Review effort. In the meantime, a study released in 2005 by the Center for Strategic and International Studies, a bipartisan think tank, reiterates the need for DOD to instill a greater joint focus in its management approach to achieving systems interoperability by transferring budget and acquisition authority for joint command, control, and communications from the services to a single joint entity.

In the broader context of defense transformation—of which the GIG is a key component—we have pressed DOD to adopt a more centralized management approach to integrate and improve its business processes, human capital, and military capabilities. In 2004, we reported that no one person or entity had overarching and ongoing leadership responsibilities or accountability for the department's transformation efforts, and we recommended that DOD establish clear leadership and a formal crosscutting transformation team with the responsibility for overseeing and integrating DOD's transformation strategy and the authority to perform these responsibilities. DOD disagreed with our recommendations, indicating that the Secretary of Defense provides the leadership needed and that a crosscutting transformation team would represent an unneeded and confusing bureaucratic layer. However, we pointed out that (1) the day-to-day demands placed on the Secretary of Defense make it difficult for him to personally maintain the oversight, focus, and momentum needed to sustain transformation efforts and (2) without a crosscutting team, DOD has no routine vehicle for maintaining a continued focus on transformation goals and no mechanism for resolving implementation issues that may arise.
Similarly, to address problems DOD has long faced in managing its business systems and to guide the department's business transformation efforts, we have proposed that DOD establish a more centralized management structure to control the allocation and execution of funds for DOD business systems. Specifically, due to the complexity and long-term nature of business transformation efforts, we reported that strong and sustained executive leadership is needed if DOD is to succeed. We believe one way to ensure strong, sustained leadership for DOD's business management reform efforts would be to create a full-time, senior executive position for a chief management official, who would serve as the Deputy Secretary of Defense for Management. This position would serve as a strategic integrator to elevate and institutionalize the attention essential for addressing key stewardship responsibilities, such as strategic planning, enterprise architecture development and implementation, information technology (IT), and financial management, while facilitating the overall business management transformation within DOD. DOD's position has been that the Deputy Secretary of Defense has the requisite position, authority, and purview to perform the functions of a Chief Management Officer. Although DOD has recently begun taking some positive steps to transform the department's business operations, including establishing the Business Transformation Agency in 2005, we continue to believe that a Chief Management Officer position may better ensure that overall business transformation is implemented and sustained.

DOD's major decision-making processes are not structured to support crosscutting, departmentwide efforts such as the GIG. In some significant respects, the processes remain configured for investing in weapon and information systems on an individual service and defense agency basis.
In addition, the department’s new process for determining requirements is still evolving, and it is not yet identifying shortfalls and gaps in joint military capabilities on a departmentwide basis. The resource allocation process remains structured in terms of individual service programs and outdated mission areas instead of crosscutting capabilities such as net- centricity, and it is inflexible in terms of accommodating emerging near- term requirements and rapidly advancing technologies. DOD’s acquisition process continues to move programs forward without sufficient knowledge that their technologies can work as intended; consequently, systems cost more and take longer to develop than originally planned and deliver less capability than initially promised. In addition, the acquisition process is not well suited to managing interdependencies among programs and fostering joint-service cooperation in development of weapon and information systems. Finally, the lack of integration among the three processes makes it difficult to ensure that development efforts are affordable and technically feasible. The three processes assessed in this report are the Joint Capabilities Integration and Development System (JCIDS); the Planning, Programming, Budgeting, and Execution (PPBE) process; and the Defense Acquisition System. Implemented in 2003, JCIDS is intended to enhance the process DOD uses to identify, assess, and prioritize joint military requirements, but service perspectives continue to drive requirements setting, a condition that has tended to impede the development of interoperable systems in the past. JCIDS is not yet identifying shortfalls and gaps in existing and projected joint military capabilities on a departmentwide basis, and the analytical framework that underpins JCIDS (capability-based assessments) is still evolving. 
Without crosscutting, department-level assessments, DOD is limited in its ability to develop a departmentwide investment strategy to support development of the net-centric systems critical to the GIG.

JCIDS replaced the approximately 30-year-old Requirements Generation System, which DOD states frequently resulted in systems that were service- rather than joint-focused, programs that duplicated each other, and systems that were not interoperable. Under this process, requirements were often developed by the services as stand-alone solutions to counter specific threats and scenarios. In contrast, JCIDS is designed to identify the broad set of capabilities that may be required to address the security environment of the 21st century. In addition, requirements under the JCIDS approach are intended to be developed from the "top down," that is, starting with the national military strategy, whereas the former process was "bottom up," with requirements growing out of the individual services' unique strategic visions and lacking clear linkage to the national military strategy. The Joint Requirements Oversight Council (JROC) has overall responsibility for JCIDS and is supported by eight Functional Capabilities Boards, which lead the capabilities-based assessment process.

The requirements process remains service-focused to a significant extent. For example, the four members of the JROC are the services' Vice Chiefs of Staff and the Assistant Commandant of the Marine Corps, an arrangement some studies contend grants too much influence to the services in setting requirements. The services are force providers—they supply the forces and develop the systems for military operations—but combatant commanders conduct joint military operations and thus represent the demand side of the requirements process.
Combatant commanders are not, however, members of the Joint Requirements Oversight Council, and analyses conducted both prior to and following the implementation of JCIDS recommend either replacing the current members with representatives from the combatant commands or enlarging the Council to include such representatives. DOD has included representatives from the combatant commands on the Functional Capabilities Boards, along with representatives from nine other organizations (under the former requirements process, only representatives from the military services and the Defense Intelligence Agency served in a similar capacity). DOD officials indicate, however, that combatant commander participation on the boards is in reality limited and of ongoing concern, and a July 2005 Joint Forces Command briefing indicates that, so far, the combatant commands' requirements do not drive the requirements process. In May 2005, DOD introduced a new mechanism for the combatant commands to identify capability gaps, and a DOD official told us the combatant commands are embracing this opportunity. However, the official also indicated that much requirements setting continues to be driven by the services at this point and that it is unclear how the services will respond to this type of input from the combatant commands.

The JCIDS process is still evolving. Key enablers for capability assessments under JCIDS are joint concepts, which are visualizations of future operations that describe how a commander might employ capabilities to achieve desired effects and objectives. The majority of the joint concepts have completed an initial phase of development, but they continue to be evaluated and revised. These concepts are intended to describe future capability needs in sufficient detail to conduct a capabilities-based assessment, which is the methodology through which capability gaps and excesses are identified.
A Joint Staff official states that capability-based assessment continues to be refined daily and has yet to produce a common framework or set of rules. At present, it can take several years to conduct a capabilities-based assessment under JCIDS, which is too slow according to a Joint Staff official associated with the process. However, the biggest challenge posed by a change such as JCIDS may be a cultural one: Joint Staff officials stated that the services are struggling with JCIDS, and the officials observed that the new process requires the services to change their behavior and think in a joint way.

JCIDS is not yet functioning as envisioned to define gaps and redundancies in existing and future military capabilities across the department and to identify solutions to improve joint capabilities. At this point, requirements continue to be defined largely from the "bottom up"—by the services—although DOD uses the JCIDS framework to assess the services' proposals and push a joint perspective. The importance of defining capability needs and solutions from a crosscutting, department-level perspective was highlighted in a prominent 2004 study chartered by the Secretary of Defense, which stated that "a service focus does not provide an accurate picture of joint needs, nor does it provide a consistent view of priorities and acceptable risks across DOD." The study observed that the analytical capability for determining requirements largely resides in the military services, and it recommended that analyses of both joint needs and solutions to meet those needs be conducted at the department level (in collaboration with the combatant commands, Joint Staff, defense agencies, services, and Office of the Secretary of Defense).

The resource allocation process is not structured to facilitate investments in crosscutting capabilities such as the GIG.
Unlike JCIDS, the resource allocation process is structured in terms of individual service and defense agency programs rather than in terms of joint capability areas, such as net-centricity. In this structure, the military services have come to dominate in the development of the DOD budget, designing their programs and budgets based more on individual, service-focused systems than on crosscutting capabilities with broad joint utility. In part, this situation reflects the persistence of a service-centric culture rooted in the services' interpretation of their Title 10 authority to organize, train, and equip military forces. This resource allocation culture has contributed to DOD's interoperability problems and made it difficult to capitalize on rapid advancements in information technology that can improve joint operational effectiveness.

The predecessor to PPBE was the Planning, Programming, and Budgeting System (PPBS), established in the early 1960s to be DOD's central strategic planning, program development, and resource allocation decision-making process. DOD expected the system to align the department's investments in defense programs with overarching national security objectives and military strategy, integrating the previously unrelated programs and budgets of the military services into a coherent program and budget for DOD as a whole. One of the central products of this system was the multiyear Five-Year Defense Plan (FYDP), which the Secretary of Defense could use to assess each military service's contribution to DOD's overall capability in crosscutting mission areas, termed Major Force Programs. By categorizing service programs into a structure of Major Force Programs, the FYDP was intended to give the Secretary of Defense visibility over the totality of DOD's capabilities, and thus enable the Secretary to make trade-off decisions among service investments in support of overall DOD objectives.
The PPBS process fell short of these expectations in several respects:

- The services and defense agencies tended to receive the Secretary's planning guidance after they had begun preparing their proposed programs and budgets, and the guidance has been criticized for not clearly articulating DOD funding priorities, reflecting resource constraints, containing performance measures, or providing enough detail to be useful. Together, these factors contributed to the services' latitude to define their own investment priorities independent of the Secretary's stated objectives.

- The Office of the Secretary of Defense reviews of the services' program and budget submissions occurred late in the process. As a result, opportunities to build joint priorities (such as interoperable systems) into the services' program and budget submissions were limited, and joint initiatives were often addressed late in the process when it was more difficult to make changes.

- PPBS was structured to allocate resources to meet longer-term, more predictable needs, which made it difficult to accommodate (1) near-term requirements such as those identified by combatant commanders based on lessons learned from recent or ongoing military operations and (2) rapidly advancing technologies. For example, commercially developed information technology tends to advance quickly, and it has been difficult to plan for advances in these technologies through the normal planning and budget process.

- PPBS was not well integrated with the requirements determination and acquisition processes to ensure that development efforts were affordable and technically feasible. For example, more acquisition programs are started than DOD can afford, with the result that many programs must compete for funding. This situation in turn creates incentives to produce overly optimistic cost and schedule estimates and to overpromise capability.
The Major Force Programs that comprise the FYDP have changed little since the inception of PPBS in the early 1960s, despite changes in the operational environment and the emergence of strategic objectives such as the GIG. Some observers have recommended that the major program areas be substantially reconfigured to focus service programs on transformation initiatives, including creating a Major Force Program dedicated to C4ISR programs. In prior work, GAO also found that the FYDP did not provide visibility over some high-priority items, including information technology. Information technology investments as an area of funding are difficult to identify in the DOD budget, and the Office of the Secretary of Defense reports separately to Congress and OMB on DOD's information technology expenditures. However, we have found material inconsistencies, inaccuracies, or omissions that limit the reliability of this reporting effort.

In an effort to streamline the process and make it more efficient, DOD revised PPBS in 2001 to make department-level reviews of service and defense agency programs and budgets concurrent rather than sequential. In 2003 DOD further revised the process to increase its effectiveness and emphasize budget execution by requiring a full budget development cycle every other year rather than every year. DOD named the revised process the Planning, Programming, Budgeting, and Execution process. These recent changes have not addressed some of the characteristics of the process that in the past made it difficult to address joint needs—such as systems interoperability:

- The services and defense agencies continue to have control over resources for command, control, and communications systems critical to the GIG, a condition that has in the past fostered development of service-specific systems with limited interoperability. As a DOD-wide interoperability solution, however, the GIG represents a different type of development challenge that requires a more cooperative, joint investment approach than has been typical of DOD in the past. If those who are responsible and accountable for the success of the GIG do not have control over resources, the department may continue to employ a stovepiped approach to investing in systems, and thus fail to fundamentally improve interoperability outcomes.

- PPBE is still not sufficiently integrated with the requirements and acquisition processes. In addition, the requirements determination process is now structured in terms of capabilities, but the resource allocation process continues to be structured in terms of individual service and defense agency programs rather than capability areas (such as net-centricity). Also, the Major Force Programs established with the FYDP remain virtually unchanged and no longer adequately reflect the needs of current and future missions.

- The PPBE process is still not flexible enough to quickly accommodate emerging technologies or requirements resulting from lessons learned. In recent years, some budgetary flexibility has been created through such mechanisms as the congressionally established Limited Acquisition Authority granted to U.S. Joint Forces Command to meet urgent, unanticipated warfighting needs. However, because there are no funds budgeted for this authority, the command has faced challenges in finding funding for projects.

In response to GAO recommendations, DOD has issued a policy and taken initial steps toward implementing a portfolio-based management approach to investing in information technology systems. However, DOD was slow to formalize its policy, and it is too early to assess its effectiveness.
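In commercial practice, portfolio-based management generally means selecting, under a fixed budget, the mix of investments that contributes most to mission goals. A minimal sketch of that selection idea follows; the candidate names, costs, and scores are invented for illustration and do not represent DOD's actual process:

```python
# Minimal sketch of portfolio-based investment selection: rank candidate
# investments by mission-alignment score per unit of cost, then fund
# greedily until the budget is exhausted. All figures are hypothetical.

def select_portfolio(candidates, budget):
    """candidates: list of (name, cost, score) tuples; returns funded names."""
    funded, remaining = [], budget
    ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
    for name, cost, score in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

candidates = [
    ("net-centric radio upgrade", 40, 90),  # hypothetical cost and score
    ("legacy system refresh",     30, 30),
    ("data-standard migration",   20, 70),
    ("stovepipe enhancement",     50, 40),
]
print(select_portfolio(candidates, budget=70))
```

The greedy value-per-cost rule is only one simple heuristic; actual portfolio reviews also weigh risk, interdependencies among programs, and statutorily mandated investments.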
DOD believes that managing its information technology investments by mission-oriented portfolios—a concept emphasized in the commercial sector—will (1) ensure information technology investments support the department's vision, mission, and goals; (2) ensure efficient and effective delivery of capabilities to the warfighter; and (3) maximize the return on DOD's investment. However, the DOD directive establishing information technology portfolio management indicates that portfolio management processes must work within the bounds of DOD's three major investment decision-making processes. Given this guidance and the limitations of the PPBE process, it is unclear whether portfolio managers would be sufficiently empowered to meaningfully influence DOD components' information technology investments.

DOD has taken various steps in recent years to improve acquisition outcomes and focus acquisition decision-making on developing joint, net-centric systems, but the Defense Acquisition System remains essentially structured to support investments in service-oriented systems. To effectively develop the GIG and enable net-centric capabilities, the acquisition process must ensure that programs critical to the GIG not only achieve desired cost, schedule, and performance objectives, but—because the programs are interdependent and must work together to deliver a capability—it must also ensure that their development is closely synchronized and managed. In addition, to be interoperable, systems must be developed from a joint perspective and aligned with the architecture, standards, and data strategies established for the GIG. Further, the acquisition process must be adaptive to keep pace with the rapid advances that have taken place with information technology in recent years.
Although DOD produces the best weapons in the world, GAO has found that the department's acquisition process has long been beset by problems that cause weapon systems to cost more, take longer to develop and field, and deliver less capability than originally envisioned. In recent years, we recommended that DOD adopt a knowledge-based approach to acquisitions that reduces risk by attaining high levels of knowledge in three elements of a new product—technology, design, and production—at key consecutive junctures in development.

DOD has taken steps in recent years to address these issues. In May 2003, DOD issued a revised acquisition policy that incorporated knowledge-based and evolutionary acquisition principles employed by leading commercial companies, with the aim of fostering greater efficiency and flexibility and reducing risk in the development and acquisition of weapon systems. The revised policy requires program managers to reduce risk by demonstrating attainment of essential knowledge at key program junctures and establishes as DOD's preferred strategy developing systems incrementally, an approach in which the customer may not get the ultimate capability right away, but the product is available sooner and at a lower cost.

However, we continue to see many programs move forward with a high degree of risk. For example, programs that are critical to the GIG, such as the Joint Tactical Radio System (JTRS) and Transformational Satellite Communications System (TSAT), have progressed without sufficient knowledge that their technologies could work as intended. Consequently, these programs have faced cost, schedule, and performance issues that have complicated DOD's efforts to deliver these key GIG components as originally planned.

Under the Defense Acquisition System, programs that are intended to produce interdependent systems are too often managed independently rather than as a system of systems.
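The knowledge-based approach described above can be pictured as a series of gates: a program advances past each juncture only after demonstrating the required level of knowledge about its technology, design, and production. The sketch below is schematic; the junctures and numeric maturity thresholds are invented for illustration and are not DOD policy:

```python
# Schematic gate check for a knowledge-based acquisition approach: a program
# advances only when demonstrated knowledge meets the threshold for each
# consecutive juncture. Gate names and thresholds here are illustrative.

GATES = [  # (juncture, knowledge area, required maturity level 0-10)
    ("development start", "technology", 7),
    ("design review",     "design",     8),
    ("production start",  "production", 9),
]

def next_blocked_gate(demonstrated):
    """Return the first juncture whose knowledge requirement is unmet,
    or None if the program has the knowledge to pass every gate."""
    for juncture, area, required in GATES:
        if demonstrated.get(area, 0) < required:
            return juncture
    return None

# A program with immature technology is held at the first gate.
print(next_blocked_gate({"technology": 6}))
# A program that has demonstrated all three knowledge elements passes.
print(next_blocked_gate({"technology": 7, "design": 8, "production": 9}))
```

The point of the gate structure is that risk is retired in sequence: a program cannot reach design review, let alone production, on the strength of optimistic projections about unproven technology.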
With increased efforts to promote net-centric capabilities, key transformational systems under development depend on capabilities being provided by other acquisition programs. However, DOD program management and acquisition oversight tend to focus on individual programs and not necessarily on synchronizing multiple programs to deliver interdependent systems at the same time, as required to achieve the intended capability. This focus has affected some recent DOD efforts to develop such systems of systems. We recently reported, for example, that the Army's effort to develop a high-capacity communications network for higher-level command units, a program called the Warfighter Information Network-Tactical (WIN-T), was at risk because critical capabilities to be provided by other programs—unmanned aerial vehicles—may not be available when needed (one platform was not adequately funded for a dedicated communications capability and the other was still in the concept development phase). In addition, the Army's Future Combat Systems program is at risk because its development schedule is not consistent with the fielding schedules for the Joint Tactical Radio Systems, on which it is critically dependent.

Although DOD has acknowledged the growing importance of interoperability and recognizes the corresponding need to improve joint coordination in acquisitions, the military services continue to develop and acquire systems that have limited interoperability with other systems on the battlefield. This condition persists in part because the military services have traditionally focused on developing and acquiring systems to meet their own specific missions and have placed relatively less emphasis on developing and acquiring the types of interoperable systems needed to meet the demands of joint operations. Consequently, systems have often been developed to perform service-specific tasks and to support vertical exchanges of information.
Rather than being developed around integrated architectures and common standards, systems have been designed and developed using different standards and protocols, and operate in different portions of the radio-frequency spectrum. DOD has had policies in place for several years to improve systems interoperability, including designating interoperability as a key performance parameter for all systems that exchange information and requiring interoperability testing, but recent military operations have shown that interoperability problems persist.

Recently, DOD has introduced new initiatives to improve interoperability and focus on the need for joint, net-centric systems. For example, in 2003, DOD replaced the requirement for an interoperability key performance parameter with a net-ready key performance parameter. Whereas the interoperability key performance parameter sought to ensure a system could exchange information directly with several other systems, the new net-ready key performance parameter requires a system to be able to exchange information with the "network." In addition, DOD's Chief Information Officer launched a net-centric program review effort in 2004, intended to improve the department's focus on developing systems with net-centric attributes. While these efforts represent some commitment by DOD to improving the interoperability of the systems it develops and acquires, they may be of limited value unless interdependent programs are managed more effectively.

One mechanism DOD has used for a longer period to help address the systems interoperability problem is combining similar service requirements into joint-service development programs in an effort to ensure closer up-front coordination between services and to realize economic efficiencies. However, in practice the department has long struggled to achieve service buy-in, which is essential to joint acquisition success.
For example, in 2003 we reported that the Joint Tactical Radio System program had difficulty getting the military services to agree on joint requirements and funding necessary to execute the program. We further found that the lack of joint-service cooperation on the program hampered production of necessary program documents such as the concept of operations and migration plans and that together these factors caused schedule delays. In the meantime, the Army made unplanned purchases of additional legacy radios to meet operational needs. We recommended that DOD strengthen the joint-program management structure by establishing centralized program funding, realigning the Joint Program Office under a different organizational arrangement, and placing the cluster development programs under the Joint Program Office. In the fiscal year 2004 National Defense Authorization Act, Congress directed DOD to take steps consistent with most of our recommendations. Similarly, DOD’s efforts to develop a Single Integrated Air Picture capability—whereby airborne tracking information from different sensor systems can be fused into a single picture—have also encountered joint management challenges. Although Joint Forces Command was given new oversight responsibilities in 2003 to promote stronger joint management of the Single Integrated Air Picture development effort, it has been difficult, according to officials from Joint Forces Command and the Single Integrated Air Picture program office, to resolve differences with the services regarding requirements and funding. While DOD’s acquisition policy now includes knowledge-based and evolutionary acquisition principles, the acquisition system operates too slowly and is too inflexible to keep pace with the rapid development of communications technologies essential to modern, interoperable command, control, and communications systems. 
For example, the National Research Council found in 1999 that the program management and oversight processes of the acquisition system operate on metrics optimized for weapon system acquisitions in which underlying technologies change more slowly than do the information technologies essential to modern command, control, and communications systems. The study concludes that metrics oriented to long acquisition cycles and full performance capability often do not allow for the timely integration of commercial technologies into command, control, and communications systems. More recently, in a 2002 study, DOD’s Joint C4ISR Decision Support Center concluded that technology for joint command and control capabilities progresses by a generation or more before the acquisition system can field them. The end result of these problems is that the acquisition system is not sufficiently responsive to warfighter needs for interoperable systems. DOD entities have developed short-term interoperability solutions (e.g., a communications network—the Joint Network Transport Capability—deployed to Iraq in 2004) and invested supplemental appropriations in legacy (largely commercial off-the-shelf) command, control, and communications systems urgently needed on the battlefield (e.g., in fiscal year 2005, Congress appropriated $767 million in supplemental funds for the legacy SINCGARS radios). DOD’s current approach to developing the GIG does not foster the level of coordination and cooperation needed to make the GIG a reality. DOD’s management approach for the GIG effort and the department’s decision-making processes contain fundamental structural impediments to success that recent changes to them have not been able to overcome. In fact, these vertically oriented or “stovepiped” ways of doing business have helped perpetuate the very interoperability problem that the GIG is intended to overcome. 
We believe DOD will not be successful in “horizontal” or crosscutting initiatives such as the GIG unless it substantially changes its decentralized management approach and the service-centric, poorly integrated processes it uses to make investment decisions. The stakes are high. Management inefficiencies that were accepted as the cost of doing business in the past could jeopardize crosscutting efforts like the GIG because greater interdependencies among systems will mean that problems in individual development programs will ripple through to other programs, having a damaging effect on the overall effort. In addition, the likelihood of slowed growth and perhaps even reductions in DOD’s future budgets that may result from the nation’s long-term fiscal imbalance will limit the department’s ability to mitigate the impact of these problems with additional budgetary resources. Without significant change in DOD’s management approach and processes, we believe the department will not be able to achieve the GIG as envisioned and may have to settle for a different, more expensive solution farther out in the future than planned. To better accommodate the crosscutting nature of the GIG development effort, we recommend DOD adopt a management approach that will ensure a joint perspective is taken. In doing so, DOD should (1) consolidate responsibility, authority, and control over resources—within the existing management structure or in a new entity—necessary to enforce investment decisions that cut across organizational lines and (2) hold the organization accountable for ensuring the objectives of the GIG are achieved. In written comments on a draft of this report, DOD concurred with our findings and recommendation (DOD’s letter is reprinted in app. II). 
In commenting on our recommendation, however, DOD noted that Department of Defense Directive 5144.1 (May 2, 2005) indicates that the DOD Chief Information Officer is responsible for integrating information and related activities and services across the department. While this directive is intended to help strengthen the department’s management of investments such as the GIG, we remain concerned that the responsibility, authority, and accountability for developing the components of the GIG reside among many organizational entities across the department. DOD also noted in its comments that Department of Defense Directive 8115.01 (October 10, 2005) establishes policy for managing information technology by portfolios and that this portfolio approach should provide a critical tool for improving integration across the department’s major decision support systems (JCIDS, PPBE, and the Defense Acquisition System). We agree that the concept of portfolio management holds promise; however, we are not confident that DOD will be able to effectively implement the policy unless it substantially changes its decision-making processes and ensures that portfolio managers are sufficiently empowered to influence DOD components’ information technology investment decisions. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Assistant Secretary of Defense for Networks and Information Integration; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Director of the Defense Information Systems Agency; and interested congressional committees. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please call me at (202) 512-4841 (sullivanm@gao.gov). 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

To assess the Department of Defense’s (DOD) management approach for the Global Information Grid (GIG) and the extent to which the department’s primary decision-making processes support the GIG, we collected and reviewed (1) related legislation, directives, instructions, and guidance; (2) DOD policies and guidance related to the GIG and network-centric (or “net-centric”) governance; and (3) programmatic and technical documents pertaining to core GIG systems. We also conducted a review of relevant literature, analyzing studies on net-centric warfare, systems interoperability, and DOD management and investment decision making. We conducted this literature review by searching several types of databases using such search terms as Joint Capabilities Integration and Development System; Planning, Programming, Budgeting, and Execution process; Defense Acquisition System; interoperability; jointness; requirements; defense budget; Global Information Grid; etc. The databases were: the Defense Technical Information Center (DTIC) database, which collects thousands of research and development project summaries from defense organizations; Policy File, which provides abstracts and full-text articles on public policy research and analysis from research organizations, think tanks, university research programs, and publishers; and Dialog Defense Newsletters, which contains full-text newsletters on defense companies, products, markets, technologies, and legislation. We identified related analyses by searching online archives at GAO, individual think tanks such as RAND, and congressional agencies. 
We also individually searched online collections of various DOD organizations, including the Defense Science Board, the Office of Force Transformation, the Quadrennial Defense Review, and the Joint C4ISR Decision Support Center. We examined the selected documents to identify the positions taken within them regarding the nature and causes of problems related to interoperability, jointness, and DOD’s decision-making processes. We placed these results in a series of matrices to identify commonalities in the literature—such as concerns about organizational structure and the lack of integration among the three decision-making processes—and we used this synthesis to develop and support our findings. In addition, we conducted interviews with and received briefings from officials with a number of DOD organizations (including the Office of the Secretary of Defense; the Joint Staff; and the three military services—the Departments of the Air Force, the Army, and the Navy) that have responsibility for achieving the GIG. We also interviewed several subject matter experts (from academic, think tank, or consulting organizations) who have senior-level DOD experience or who have recently written on the operation of DOD and its key decision-making processes. We conducted our work from December 2004 through January 2006 in accordance with generally accepted government auditing standards.

Appendix III: Five Major Acquisitions Related to the Core GIG Network and Information Capability

Transformational Satellite Communications System (TSAT): To develop satellites to serve as the cornerstone of a new DOD communications infrastructure and provide high bandwidth connectivity to the warfighter. Some of the technologies that TSAT plans to use are laser cross-links, space-based data processing and Internet routing systems, and highly agile multibeam/phased array antennas.

Joint Tactical Radio System (JTRS): To develop a family of software-defined radios to interoperate with different types of existing radios and significantly increase voice, data, and video communications capabilities.

To provide additional bandwidth and information access at key military installations within the United States and overseas by acquiring bandwidth from commercial providers and extending fiber optic networks to bases and installations that are located away from commercial networks.

To enable network users to identify, access, send, store, and protect information, and to enable DOD to monitor and manage network performance and problems. This effort is expected to require development of new capabilities and tools for tagging data so it is useful, providing users with the capability to identify relevant information based on content, and allowing users to freely exchange and collaborate on information.

To enable DOD to protect the network and sensitive information by providing information assurance and encryption support, including cryptography equipment (e.g., Internet protocol encryptors), firewalls, and intrusion detection systems.

The overarching concept of the joint family of concepts broadly describes how the joint force is expected to operate in the mid- to far-term, reflects enduring national interests derived from strategic guidance, and identifies the key characteristics of the future joint force.

Joint operating concepts (JOC) describe how a Joint Force Commander will accomplish a strategic mission through the conduct of operational-level military operations within a campaign.

Joint functional concepts (JFC) describe how the future joint force will perform a particular military function across the full range of military operations.

Joint integrating concepts distill JOC- and JFC-derived capabilities into the fundamental tasks, conditions, and standards of how a joint force commander will integrate capabilities to generate effects and achieve an objective in 10 to 20 years. 
In addition to the contact named above, staff making key contributions to this report were John Oppenheim, Assistant Director; Marie Ahearn; Lily Chin; Joel Christenson; Lauren M. Jones; Ron Schwenn; Jay Tallon; Hai Tran; and Susan Woodward.

Department of Defense (DOD) officials currently estimate that the department will spend approximately $34 billion through 2011 to develop the core network of the Global Information Grid (GIG), a large and complex undertaking intended to provide on-demand and real-time data and information to the warfighter. DOD views the GIG as the cornerstone of information superiority, a key enabler of network-centric warfare, and a pillar of defense transformation. A high degree of coordination and cooperation is needed to make the GIG a reality. In prior work GAO found that enforcing investment decisions across the military services and assuring management attention and oversight of the GIG effort were key management challenges facing DOD. This report assesses (1) the management approach that DOD is using to develop the GIG and (2) whether DOD's three major decision-making processes support the development of a crosscutting, departmentwide investment, such as the GIG. DOD's management approach for the GIG—in which no one entity is clearly in charge or accountable for results—is not optimized to enforce investment decisions across the department. The DOD Chief Information Officer has lead responsibility for the GIG development effort, but this office has less influence on investment and program decisions than the military services and defense agencies, which determine investment priorities and manage program development efforts. Consequently, the services and defense agencies have relative freedom to invest or not invest in the types of joint, net-centric systems that are consistent with GIG objectives. 
Without a management approach optimized to enforce departmentwide investment decisions, DOD is at risk of not knowing whether the GIG is being developed within cost and schedule, whether risks are being adequately mitigated, or whether the GIG will provide a worthwhile return on DOD's investment. The department's three major decision-making processes are not structured to support crosscutting, departmentwide development efforts such as the GIG. In some significant respects, the department's processes for setting requirements, allocating resources, and managing acquisitions encourage investing in systems on an individual service and defense agency basis. While the department has developed a new process for determining requirements, the framework to assess capability needs is still evolving; the new process is not yet identifying shortfalls and gaps in joint military capabilities on a departmentwide basis; and requirements-setting continues to be driven by service perspectives. In addition, the resource allocation process is structured in terms of individual service programs and outdated mission areas instead of crosscutting capabilities such as net-centricity, and it is not flexible enough to quickly accommodate requirements resulting from lessons learned or from rapidly emerging technologies. Also, the process for managing acquisitions is unsuited to developing a system of interdependent systems such as the GIG, and DOD has struggled to achieve service buy-in on joint-service development programs to address interoperability problems. Finally, the lack of integration among these three processes makes it difficult to ensure that development efforts are affordable and technically feasible. |
In 1997, before the establishment of the Medicare+Choice (M+C) program, about 5.2 million (14 percent) of Medicare’s 38 million beneficiaries were enrolled in health maintenance organizations (HMOs) that contracted with the Health Care Financing Administration (HCFA) to serve Medicare beneficiaries. At that time, preferred provider organizations (PPOs) and other health insurance arrangements that had become common in the private sector were not permitted in Medicare. The HMOs that participated in Medicare tended to concentrate in urban areas and certain states. Consequently, about 25 percent of beneficiaries had no alternative to the traditional fee-for-service (FFS) program. In creating the M+C program, the Congress sought to build on, and expand, the existing HMO option. The Balanced Budget Act of 1997 (BBA) permitted new types of health plans, such as PPOs, to participate in Medicare and included provisions designed to encourage a wider geographic availability of health plans. Medicare’s experience with HMOs has demonstrated that choice, and the ensuing competition among plans for market share, can produce important advantages for some beneficiaries. At a minimum, all HMOs were required to provide the services covered by Medicare’s traditional FFS program. In addition, HMOs that received Medicare payments that exceeded their costs of providing Medicare-covered benefits and normal profits had to use the excess to reduce beneficiary fees or provide additional benefits—such as coverage for prescription drugs or routine physical examinations. HMOs frequently exceeded program requirements and further reduced beneficiary fees or augmented their benefit packages to help retain existing members and attract new ones. As a result, nearly all Medicare beneficiaries enrolled in an HMO received a more comprehensive benefit package than those who remained in traditional FFS. For example, the average Medicare HMO in 1999 spent approximately $660 per member (an amount equivalent to 11.5 percent of its Medicare payment) on beneficiary fee reductions or benefit enhancements that were not required by Medicare. 
Medicare’s HMO experience has also demonstrated that some beneficiaries need information and help understanding their choices if they are to select the option that best meets their needs. A 1998 study found that many Medicare beneficiaries are unfamiliar with managed care concepts. Nearly one-third of the study respondents who belonged to Medicare HMOs did not understand basic differences between HMOs and FFS. A similar percentage of FFS respondents were uninformed. The authors concluded that only 16 percent of those beneficiaries who had some basic knowledge of HMOs knew enough to make an informed selection between FFS and HMOs. Misunderstandings about managed care concepts—such as the need to obtain referrals for specialty care and a limited choice of providers—may partly explain why some beneficiaries disenroll from HMOs shortly after becoming members. In 1998 we reported that the percentage of new members who left their HMOs within 3 months of enrolling was 10 percent or higher at 21 of 194 Medicare HMOs in 1996. Lack of a basic understanding of HMO processes can also hinder a beneficiary’s ability to obtain care through a health plan. According to a study by HHS’ Office of Inspector General, many beneficiaries who were denied services by their HMOs and subsequently disenrolled did not know they could appeal the HMOs’ decisions. Information that helps beneficiaries to compare specific Medicare health plans is important because covered benefits, fees, and consumer satisfaction can vary substantially among health plans. In our 1998 report, we found a wide variation in one potential indicator of beneficiary satisfaction—the plans’ disenrollment rates—among HMO plans that operated in the same market. For example, in Houston, Texas, the highest disenrollment rate was nearly 56 percent while the lowest rate was 8 percent. Many beneficiaries select a health plan based upon information contained in the plan’s advertisements and marketing materials. 
However, in 1996 we found that it was difficult to use this literature to compare various benefit packages because plans’ benefit descriptions were not required to follow a common format or use standard terminology. At that time there was no widely available, objective source of information to help beneficiaries compare their Medicare options. We recommended that HCFA compile comparative information and make it available to beneficiaries. In response to our report, HCFA agreed that beneficiaries needed more information and outlined several initiatives designed to help beneficiaries understand Medicare and compare their FFS and managed care plan options. In establishing the M+C program, the Congress included provisions designed to help Medicare beneficiaries become better informed health care consumers. BBA mandated that HCFA take an active role in educating beneficiaries about Medicare and the M+C program. The law specifically mandated that the agency compile and distribute comparative information about M+C plans. To complement these mandated education activities, HCFA took steps to make it easier for beneficiaries to use health plans’ marketing materials to compare benefit packages. Health plans are now required to make available a summary of benefits that follows a common format and uses standard terminology. Future changes in the Medicare program will heighten the importance of informed decisions. Historically, beneficiaries have been able to change health plans or switch between a health plan and the FFS program on a monthly basis. However, starting in November 2001, beneficiaries generally will make choices only during an annual open enrollment period for the following year. Each November, beneficiaries must decide whether they want to enroll in a particular M+C plan, change from one M+C plan to another, or return to the traditional FFS program. They will then be “locked in” to that choice for the following calendar year. 
Some proponents of this provision believed that constraining enrollment opportunities into a few weeks each year would encourage concentrated health plan advertising that would help make beneficiaries more aware of the M+C program and available health plans. The extent to which this will occur is uncertain. However, it is clear that the lock-in provision will magnify the consequences and importance of each beneficiary’s decision. Changes in the beneficiary population may add to the demand for information. A recent Kaiser Family Foundation survey found that 69 percent of Americans under age 65 want more private plans in Medicare. The same survey found the opposite among those over age 65—only 31 percent want greater choice of private plans, while the rest are content with Medicare as it has been. These findings suggest that future beneficiaries may be more interested in a private health plan option, increasing the need for information. To fulfill BBA’s beneficiary education requirements, HCFA established a National Medicare Education Program (NMEP) that included four major approaches for delivering information to beneficiaries (see table 1). A new telephone help line (1-800-MEDICARE) handled about 3.9 million calls in 2000. The Medicare handbook, now titled Medicare & You, was expanded to contain comparative information on M+C plans and is now mailed annually to all households with a Medicare beneficiary (about 34 million). Prior to 1998, the Medicare handbook did not contain information on specific managed care plans and was generally mailed only to newly eligible beneficiaries or those who requested a copy. A Medicare Internet site (www.medicare.gov) was established to provide more detailed information on M+C plans and other topics. Community outreach efforts were implemented to inform beneficiaries who might face language or cultural barriers or otherwise need special assistance. 
Finally, to help ensure that its education efforts operated effectively, the agency sponsored a number of internal and external evaluations. HCFA phased in a Medicare telephone help line (1-800-MEDICARE) beginning in November 1998. First available in five states (i.e., Arizona, Florida, Ohio, Oregon, and Washington), the help line was expanded to cover all states in March 1999. As publicity about the help line increased and more beneficiaries became aware of its existence, the number of calls grew from an average of 27,000 calls per month shortly after the line went nationwide to an average of 326,000 calls per month in calendar year 2000. Even with the substantially increased call volume, most beneficiaries had no trouble getting through. According to a HCFA-sponsored study, during 2000, 92 percent of calls were answered within 30 seconds. About 60 percent of Medicare help line callers speak directly to a customer service representative (CSR). The most common reason why beneficiaries call is to request a copy of Medicare & You or another publication. Beneficiaries also frequently ask how to apply for Medicare or get a replacement Medicare card, or ask questions about Medicaid, Medicare coverage and claims payment, or M+C plans. CSRs attempt to answer beneficiaries’ questions by following a prepared script. Because beneficiaries call with a wide variety of questions, about half the time the information required is not contained in the script. In those instances the CSR transfers the caller to an appropriate third party, typically a Medicare claims processing contractor (37 percent of the transferred calls); the state Medicaid office (19 percent); the Social Security Administration (16 percent); State Health Insurance Assistance Programs (SHIPs) that support counselors who answer beneficiary questions about Medicare, Medigap, and Medicaid (10 percent); or other entities including M+C plans (18 percent). 
The remaining 40 percent of callers obtain the information they seek through the help line’s automated system. The automated system processes requests for publications and provides answers to frequently asked questions. HCFA produced and distributed a variety of printed materials to help beneficiaries understand the Medicare program and the options available to them. The most widely distributed document is the Medicare & You handbook. However, the agency also produces more than two dozen educational booklets and brochures on specific topics, including Medicare managed care. (Appendix II contains a list of these publications.) Prior to the enactment of BBA, HCFA annually produced a Medicare handbook but generally distributed it only to newly eligible beneficiaries and other individuals who requested a copy. Every few years HCFA mailed a current copy of the handbook to all beneficiaries. The intervals between mailings varied and depended partly on the extent to which the Medicare program had changed since the last mailing. However, in response to BBA, HCFA changed its handbook distribution practices. BBA required that beneficiaries receive comprehensive written information about the Medicare FFS program, the M+C program, and available options prior to Medicare’s newly established annual open enrollment period each November. To fulfill this requirement, HCFA began annual mailings of the Medicare handbook in 1998. The Medicare & You edition for 2001 contains between 80 and 92 pages, depending on the geographic area for which it is intended. The document’s length is due, in part, to BBA provisions that require the annual mailing to describe Medicare FFS program benefits, cost sharing, and liability for uncovered services; grievance and appeal rights; supplemental coverage options; the process for enrolling in M+C plans; and the potential effects on beneficiaries enrolled in M+C plans that withdraw from the program or reduce geographic service areas. 
Another reason for the length is that HCFA-sponsored research indicated that the handbook must use a large type size and limit the amount of text on each page to make it readable for the majority of the Medicare population. The handbook is designed as a reference guide and contains instructions and telephone numbers for obtaining additional information. Although every Medicare handbook describes both the traditional FFS program and the M+C program, the handbooks issued in geographic areas served by M+C plans also contain a section with comparative information tailored to those areas. These supplemental sections, which range in length from 24 to 36 pages, list every M+C plan that operates in the area along with the plan’s telephone number, the geographic areas it serves, the monthly premium it charges, and whether it provides coverage for prescription drugs. Beneficiaries are directed to call either the plan or Medicare’s telephone help line or to log onto Medicare’s Internet site to obtain more detailed information about a specific plan. The supplemental section also contains two quality indicators for each plan: the percentage of members who rated their care as the best possible and the percentage of female members who received a mammogram during a 2-year period. The results for each plan are compared with all M+C plans operating in the same state. The mammogram percentages are also compared with FFS program beneficiaries in the same state. Finally, the section lists the percentage of each plan’s Medicare members who disenrolled from the plan during the previous year. Medicare’s Internet site (www.medicare.gov) provides considerably more detailed information about the traditional FFS program and M+C plans than the Medicare & You handbook. Established in March 1998, the site includes a Medicare Health Plan Compare page that can generate a list of M+C plans available in a specific zip code, county, or state. 
It also provides detailed information on each plan’s benefit package, including cost-sharing requirements and coverage for 36 categories of services, such as physician visits, inpatient hospital, doctor and hospital choice, outpatient prescription drugs, physical exams, and vision services. In addition, Medicare Health Plan Compare contains plan quality indicators, such as the percentage of plan members who received an influenza vaccination, and consumer satisfaction indicators, such as the percentage of plan members who disenrolled within the last 2 years. The amount of plan-specific information contained in Medicare Health Plan Compare far surpasses that made available about Federal Employees Health Benefits Program (FEHBP) plans on the Office of Personnel Management’s (OPM) website. Medicare’s Internet site also includes a Medigap Compare page with detailed information on policies that supplement FFS Medicare, and a Nursing Home Compare page with detailed information on nursing home costs, features, and quality. The site also provides a wide array of information on Medicare coverage, benefits, eligibility, enrollment, and participating physicians, as well as information on getting help with medical expenses and state prescription drug assistance programs. In October 2000, the Medicare Health Plan Compare page was viewed about 629,000 times (see table 2). The number of individuals who viewed this page was likely less than 629,000 because this figure counts repeat views by individuals during a single session or subsequent sessions. All the pages combined were viewed a total of 3.1 million times during the month. HCFA’s Regional Education About Choices in Health (REACH) initiative sponsored a wide array of activities—such as health fairs and public service announcements—designed to reinforce other NMEP efforts and educate beneficiaries who might need extra assistance or information presented in a language other than English or in an alternative manner. 
REACH activities, conducted in conjunction with local community and business groups, were intended to meet the needs of the local community and its beneficiary population. By involving local groups and customizing its activities, HCFA intended to better communicate with beneficiaries from diverse cultural backgrounds or who lack proficiency in English, as well as those who may have difficulty reading printed material or obtaining information through other means. REACH-sponsored community health fairs are designed to provide beneficiaries with information about Medicare-covered services, M+C plans, supplemental insurance policies, and other potential sources of additional coverage such as Medicaid. The health fairs are intended to provide beneficiaries with sources of information on Medicare-related questions. REACH also funds public service announcements on radio and television and in local newspapers. To help reach certain beneficiary populations, some announcements are made through media that target specific ethnic groups. CMS’ partners in the REACH program are organized under the NMEP Alliance Network Partnership. The alliance consists of more than 100 partners—community organizations, business groups, national non-profit organizations (such as AARP), and private companies that process Medicare claims. Partners both provide advice to CMS and help disseminate information to beneficiaries. Among the major NMEP partners are the SHIPs, which receive the vast majority of community-based outreach funds. REACH activities are proposed by partners and approved by CMS’ regional offices. Every year, CMS formulates a national business plan to guide REACH activities. CMS’ regional offices adapt the national plan to suit local needs and formulate regional business plans. 
Each CMS regional office then reviews the proposals submitted by its partners, evaluates the proposals according to the criteria specified in its regional business plan, and decides whether to fund the activity. To help ensure NMEP’s success, HCFA initiated activities intended to assist in the design, support, and evaluation of the program. Some activities helped to lay the groundwork for NMEP. For example, HCFA consulted with experts on the best methods of conveying information to beneficiaries. HCFA also established the Citizens Advisory Panel on Medicare Education. Along with the alliance partners, the advisory panel—which consists of 15 members drawn from senior citizen advocacy, health economics research, health insurers, providers and clinicians, and employers—provides input to guide NMEP activities. In addition, the agency surveyed beneficiaries about their preferred methods of receiving health care information. Other activities—such as training individuals in other organizations that help educate Medicare beneficiaries—are ongoing and serve to maintain and promote the program. This category also includes expenditures for the Consumer Assessment of Health Plans Study (CAHPS)—a survey of beneficiaries that provides, among other information, some of the comparative data on M+C plans presented on www.medicare.gov. Research that HCFA has sponsored to evaluate NMEP activities and their effectiveness also falls into this budget category. For example, HCFA used focus groups and beneficiary surveys to evaluate the Medicare & You handbook. To determine the effectiveness of the help line, HCFA hired contractors to survey callers and gauge their satisfaction with the help line. These contractors also placed calls to the help line to assess the ability of CSRs to handle beneficiary inquiries. The agency sponsors similar research that surveys users of its Internet site and tracks how visitors use the site.
Spending on NMEP totaled $323.3 million during the first 3 fiscal years of its operation. Printed materials, the telephone help line, outreach efforts, and program evaluation and support services were responsible for most of the cost. Spending on the Internet site was relatively low. About 76 percent of the funds spent came from user fees collected from M+C plans. The remaining amount came from Medicare program funds and other sources. Recent legislation substantially reduces the total amount of user fees collected from M+C plans. If this revenue source is not replaced, future NMEP activities may have to be curtailed substantially. On average, HCFA spent $107.8 million annually to run NMEP in fiscal years 1998, 1999, and 2000. This average may somewhat understate the annual cost of NMEP because it includes expenditures in fiscal year 1998—the initial year when some activities were not fully implemented. For example, the Medicare handbook was not distributed to all beneficiaries and the help line did not go national until midway through the year. We report the 3-year average because limitations in HCFA’s accounting systems did not allow us to obtain an accurate view of the expenditures associated with a single year’s activities. Nonetheless, it is clear that relative spending on some activities changed over time. For example, Internet site expenditures grew from $1.5 million in fiscal year 1998 to $7.1 million in fiscal year 2000. However, in other cases the year-to-year variation in spending by category is difficult to interpret because activities may have been included in different categories in different years. HCFA records showed that at least 73 percent of the expenditures were for direct information services, including the Medicare & You handbook and other printed material (20.2 percent), the telephone help line (26.3 percent), community-based outreach (23.6 percent), and the Internet site (3.3 percent) (see table 3).
The remaining expenditures (26.6 percent) were for program support and evaluation activities related to NMEP’s direct information activities. During NMEP’s first 3 fiscal years (1998 to 2000), approximately three-fourths of the expenditures were funded from user fees collected from M+C plans. As authorized by BBA, HCFA collected $285 million from plans during the 3-year period. (The law authorized the agency to collect $95 million each fiscal year.) Additional funding came from HCFA program management ($60.7 million) and peer review organization (PRO) accounts ($23.7 million). Not all of the funds earmarked for NMEP were spent in the 3-year period. Approximately $40.5 million in user fees and $5.5 million in other revenues remained available to help fund activities in fiscal year 2001. BBRA significantly reduced the amount of user fees CMS can collect from M+C plans in fiscal year 2001 and subsequent fiscal years. The total of $244.5 million in user fees spent in fiscal years 1998 through 2000 funded about three-quarters of the program. However, M+C plans objected to funding so much of NMEP because plans enrolled less than 20 percent of Medicare beneficiaries and because NMEP provided general information about Medicare in addition to information specific to the M+C program. To address this perceived inequity, BBRA specified that the total amount of user fees collected in a year would equal the percentage of Medicare beneficiaries enrolled in M+C plans multiplied by $100 million. In fiscal year 2001, for example, BBRA’s formula allows CMS to collect approximately $17 million in user fees. To adjust to the loss of approximately $78 million in annual user fee revenues without scaling back NMEP activities, a larger portion of HCFA’s Medicare operations budget had to be devoted to the program. In fiscal year 2001, $54.1 million of the $1.2 billion Medicare operations budget has been used to support NMEP, more than double the previous annual average of $20.2 million.
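As a rough illustration of the BBRA user-fee formula described above, the sketch below multiplies an assumed M+C enrollment share by $100 million. The 17 percent share is an assumption back-calculated from the report's "approximately $17 million" figure for fiscal year 2001; it is not a number stated in the statute.

```python
# Sketch of the BBRA user-fee formula: annual user fees equal the share of
# Medicare beneficiaries enrolled in M+C plans multiplied by $100 million.
def bbra_user_fees(mc_enrollment_share):
    """Annual user-fee cap, in dollars, under the BBRA formula."""
    return mc_enrollment_share * 100_000_000

# Assumed FY2001 enrollment share, back-calculated from the report's figures.
fy2001_fees = bbra_user_fees(0.17)            # roughly $17 million
annual_reduction = 95_000_000 - fy2001_fees   # BBA had authorized $95 million a year
print(f"FY2001 cap: ${fy2001_fees:,.0f}; annual reduction: ${annual_reduction:,.0f}")
```

The roughly $78 million result is consistent with the annual revenue reduction cited in the report.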
In fiscal year 2001, the effect of reduced user fee revenues was partially offset by surpluses in NMEP’s accounts. CMS can draw on $15.5 million in previously collected but unspent user fees, $25.0 million in previously collected user fees allocated to a printing account, and $5.6 million in previously funded program management money held in a postage account (see table 4). Therefore, the full impact of the reduction will not be apparent until fiscal year 2002, when CMS will have to devote an additional $46.1 million of the agency’s budget to NMEP to maintain historical spending levels or scale back NMEP’s activities. Beneficiaries and beneficiary advocacy groups generally praised NMEP’s major activities. Industry officials representing M+C plans offered a mixed reaction to NMEP. Medicare’s telephone help line is viewed favorably by beneficiaries and M+C plans. Beneficiary advocates and industry representatives both said that the Medicare handbook could be improved and perhaps shortened. Industry officials also raised concerns about Medicare’s Internet site and the community outreach efforts. Overall, beneficiary advocates thought that current spending levels for NMEP— about $3 per beneficiary—are inadequate and more comparative information should be made available. Industry officials believe that NMEP should place a greater emphasis on the M+C program and that M+C plans should have more input into the design of NMEP and its activities (see table 5). HCFA-sponsored surveys of help line callers indicate that most beneficiaries are satisfied with the service. About 84 percent of surveyed callers were satisfied or very satisfied with the responses they received. About 11 percent of the surveyed callers indicated that they were dissatisfied or very dissatisfied. The remaining 5 percent of the surveyed callers either said that they were neither satisfied nor dissatisfied or did not answer the survey question. 
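The three one-time surpluses listed above account exactly for the additional $46.1 million the report says CMS must find in fiscal year 2002. The quick check below uses the dollar figures taken directly from the report, in millions.

```python
# One-time surpluses (in millions of dollars) that cushioned the FY2001 loss of
# user-fee revenue, as reported; once exhausted, the same amount must come from
# the agency's own budget to maintain historical NMEP spending levels.
surpluses = {
    "previously collected but unspent user fees": 15.5,
    "user fees allocated to a printing account": 25.0,
    "program management money in a postage account": 5.6,
}
total_surplus = round(sum(surpluses.values()), 1)
print(f"Total one-time surplus: ${total_surplus} million")
```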
According to beneficiary advocacy groups, the telephone help line has become the source of information most familiar to the Medicare population. These groups believe the help line is valuable because it provides beneficiaries with one information resource that can answer most Medicare questions. An Arthur Andersen assessment of help line performance found that 95 percent of CSR calls were answered accurately or referred appropriately. Industry representatives agreed that the help line provides a valuable service. They also liked the single, easy-to- remember telephone number for beneficiaries. According to beneficiary focus group studies sponsored by HCFA, beneficiaries generally like the Medicare handbook and find it useful when they read it. Focus group participants said that the handbook was comprehensive, understandable, and a good reference. Most beneficiaries who responded to a survey included in some versions of Medicare & You said that the 2000 handbook was easy to read and contained the information they sought. Nonetheless, the focus group studies suggested that beneficiaries rarely read the handbook, but instead use it in a similar manner to the telephone help line. That is, most beneficiaries save the handbook and refer to it only if a change in personal circumstance or health status prompts them to seek information. Beneficiary advocacy groups told us that a Medicare handbook is a necessary element of NMEP, but that the current version could be improved. Some groups thought that the handbook can be confusing for beneficiaries and does not contain enough comparative information on available M+C plans to enable beneficiaries to make an informed choice. One group said that the handbook should be condensed to emphasize a few key messages. It believes that the handbook should be translated into more languages. (HCFA produced English and Spanish language versions of the handbook.) 
Of the four major NMEP information outlets, the Medicare handbook generated the most negative reaction from industry representatives. One industry group stated that the handbook over-emphasized traditional Medicare and that information about M+C plans appeared to be added as an afterthought. Representatives from this group said that annual written material is a necessary element of the NMEP, but felt that the handbook in its current form was not an appropriate mechanism for educating beneficiaries about choice. Another industry group said that the length of the handbook discouraged beneficiaries from reading it and learning about their Medicare choices. Beneficiary advocacy groups believe that the Medicare Internet site is a good source of information. However, they added that they thought advocacy groups and beneficiaries’ families, not beneficiaries themselves, were the main users of the site. Although there are no data to indicate who uses the site, beneficiary access to the Internet has grown substantially in the last few years. According to CMS’ annual Medicare Current Beneficiary Survey (MCBS), 31 percent of beneficiaries reported that they had Internet access in 2000, an increase from 10 percent in 1999. Of the people who used Medicare’s site, 85 percent found it very or somewhat useful. Although industry representatives stated that the concept of making M+C information available on the Internet was worthwhile, they expressed significant frustration with some of the information contained in the Medicare Health Plan Compare pages. Specifically, the representatives were concerned that CMS’ process for translating plans’ benefit package descriptions into standardized language for the Medicare Health Plan Compare pages sometimes produced benefit descriptions that could have confused beneficiaries.
Representatives from beneficiary advocacy groups said that local education was an essential element of NMEP and generally expressed a favorable opinion of REACH. They were most positive about the work of the SHIPs. The representatives said that the one-on-one nature of much of the SHIPs’ outreach efforts was the preferred learning method of many beneficiaries. The representatives understood that community outreach can be expensive, but said that there is a large unmet need for these efforts. All three beneficiary advocacy groups we interviewed agreed that even though local community outreach cannot serve many Medicare beneficiaries, for those it does, it works very well. Industry representatives said that HCFA often did not include M+C plans in local education efforts or inform them of local events. Consequently, M+C plans were sometimes unprepared for the volume of beneficiary telephone calls following a NMEP media campaign. The focus of NMEP over the first 3 years has been to make more information available to beneficiaries. Most of HCFA’s research and implementation efforts concentrated on improving the mandated information outlets—the Medicare handbook, telephone help line, Internet site, and local education programs—and the content of the information available through those outlets. These efforts have aided beneficiaries ready to make a choice. However, HCFA-sponsored research suggests that NMEP may need to adopt new education strategies to encourage other beneficiaries to actively consider their Medicare options. Beginning in the fall of 2001, it will become more important for beneficiaries to be aware that M+C health plan alternatives to the traditional FFS program may be available in their geographic area and to understand each option and its implications. As required by the BBA, Medicare will now have an annual open enrollment period each November when beneficiaries may select either the FFS program or a specific M+C plan for the following calendar year. 
Beneficiaries who do not specify a different selection during that period will remain in the FFS program or their M+C plan. Beneficiaries will have strictly limited opportunities for changing their selection outside of the open enrollment period, a constraint known as “lock-in.” Although modifications to NMEP may be indicated to promote active, informed choice, CMS is constrained in its ability to alter certain aspects of the information campaign. BBA provisions specify the content and timing of many existing NMEP activities. Altering these activities could require statutory changes. In addition, short time frames each year hamper the agency’s ability to compile and distribute comparative information in advance of the open enrollment period. Recently, CMS announced changes to NMEP activities planned for this fall. The agency will undertake a $30 million advertising campaign to increase awareness of M+C and recent changes in the Medicare program. CMS also announced it will allow plans to submit their benefit package proposals for the 2002 contract year by September 17, 2001, instead of the July deadline specified in BBRA. This extension is intended to encourage plan participation in the M+C program. However, it will hamper the ability of both CMS and plans to distribute information to beneficiaries before the start of the annual enrollment period in November. To help minimize the impact that this delay might have on beneficiaries, the agency has also announced it will extend the enrollment period through December 2001. To date, HCFA has improved NMEP by enhancing or fine tuning existing activities. During the first 3 years of NMEP, the agency increased the amount of comparative information available through the handbook, telephone help line, and Internet site. It also improved the presentation of some information. 
For example, in response to focus group findings and comments from literacy experts, HCFA made the handbook easier for beneficiaries to read by increasing the typeface size and amount of white space surrounding the text. To make finding information easier, HCFA expanded the handbook’s table of contents and added color tabs for the telephone section. The agency modified portions of the handbook that focus group participants identified as confusing. HCFA also sought efficiencies to limit NMEP costs. For example, it evaluated the types of inquiries CSRs received and used the findings to modify the information available through the automated menus. As a result, the number of calls handled by the automated menus increased from 20 percent to 40 percent. Because calls that do not involve a CSR are substantially less expensive, these actions have helped to control the cost of the help line. HCFA recently began studying alternative education strategies that could require more substantial changes to NMEP. Research suggests that NMEP primarily helped the minority of beneficiaries who were already considering their Medicare coverage options. The comparative information and improved access to information may have enabled those beneficiaries to make more informed decisions. However, NMEP did not appear to motivate the majority of beneficiaries to consciously examine their present Medicare arrangements and consider whether alternatives might be better for them. Although health plans, through their marketing efforts, seek to move beneficiaries to such a decision point, the language in BBA indicates that CMS is expected to play a role too. Specifically, BBA directs the Secretary of HHS (and thus, by extension, CMS) to undertake activities that “promote an active, informed selection.” To that end, CMS is researching how NMEP might encourage more beneficiaries to consciously consider their Medicare options. 
Whether CMS decides to maintain the existing NMEP efforts or replace or augment them with new activities, the agency faces two major constraints. BBA requirements. The prescriptive nature of BBA’s NMEP provisions may limit CMS’ flexibility to alter existing activities. The Medicare handbook illustrates one of the major constraints facing CMS. Beneficiary advocacy groups and organizations representing health plans indicated that the current handbook is too long and raised doubts about whether it is the best vehicle to educate beneficiaries about their options. One beneficiary advocate questioned whether mailing the handbook annually to all Medicare households was the best use of NMEP resources. This advocate suggested that CMS could distribute the handbook to new enrollees and make it available to others upon request, but conduct mass mailings only when changes in Medicare required an update. However, BBA requires an annual mailing and specifies an extensive list of topics—in addition to a list of available M+C plans and a comparison of plan options—that must be covered. For example, the mailing must include information on the FFS program’s covered benefits and cost sharing; procedures for selecting an M+C plan or the FFS program; beneficiary rights and the appeals process in both M+C plans and the FFS program; and descriptions of benefits, enrollment rights, and other requirements of Medicare supplemental policies (Medigap). Significantly modifying the content of the handbook or changing how frequently it is mailed may require a statutory change. Short time frames. Compressed time frames each year hamper the agency’s efforts to distribute more complete comparative information in printed form. A complaint voiced by beneficiary and health plan representatives is that the Medicare handbook contains limited comparative information about M+C plans.
According to HCFA officials, plan benefit package details have not been available until late September when plans’ Medicare contracts for the coming year were approved. That left the agency too little time to assemble extensive data in a handbook that must be mailed out by mid-October, as required by BBA. HCFA therefore pre-approved selected aspects of each plan’s contract. The agency focused on the basic information it believed would most help beneficiaries make some initial decisions: plan service area, monthly premium, whether prescription drugs are covered, and how to contact the plan for detailed information. HCFA included other information, such as rates of mammography screening exams and beneficiary satisfaction with plans’ primary care physicians, that was not dependent on the contract approval package. Complete benefit information for each plan was not available until more than a month after the Medicare handbook was printed (see fig. 1). At that time, the information was posted on the Medicare Health Plan Compare pages of the Medicare Internet site. CMS recently announced that it would fund a $30 million advertising campaign this fall to increase beneficiaries’ awareness of the choices available to them and to encourage them to use the NMEP information channels to learn more about those choices. In addition, the advertising campaign is intended to help beneficiaries learn about Medicare’s new features—such as the annual enrollment and lock-in provisions, and coverage for preventive services and medical screening examinations. The agency will also extend the operating hours of the help line and add an interactive feature to the Internet site designed to help beneficiaries select the Medicare coverage option that best fits their preferences. CMS has made other decisions about the fall information campaign that illustrate the sometimes difficult trade-off between accommodating plans and serving beneficiaries. 
To encourage health plan participation in the M+C program, CMS has allowed plans additional time to prepare their 2002 benefit proposals. In a June 2001 memorandum, CMS notified M+C plans that for contract year 2002 the deadline for filing complete cost and benefit information in their adjusted community rate proposals (ACRP) would be moved from July 2, 2001, to September 17, 2001. M+C plans were still required to submit a non-binding summary by July 2, 2001. According to CMS officials, the agency expects to review and approve all of the ACRPs by October 26, 2001. The ACRP extension further shortens the time frames for NMEP activities, hampering the ability of CMS and health plans to disseminate information before the BBA-established November open enrollment period (see fig. 1). For example, 2002 cost and benefit information will not be posted on Medicare’s Internet site until October 1. Plan benefit packages that have not been approved by CMS will include a disclaimer that the information is pending approval. CMS had planned not to include any information about specific health plans in the annual handbook mailed to Medicare households. However, an August 9, 2001, court order requires the Secretary of HHS to mail comparative information on health plans to beneficiaries at least 15 days before the beginning of the November open enrollment period (October 16), the deadline specified in BBA. To comply with the court order, CMS will prepare separate brochures containing comparative plan information and mail them by October 16. To reduce the potentially adverse effects of an abbreviated fall information campaign, the agency will allow health plans to distribute marketing materials with proposed benefit package information marked “pending Federal approval.” CMS will also extend the open enrollment period through the end of December.
HCFA fulfilled the BBA’s basic requirements for NMEP by making information readily available through a number of communication devices such as printed materials, telephone help line, Internet site, and community outreach efforts. However, several HCFA-sponsored studies have suggested that these activities primarily aided those beneficiaries who were already reevaluating their Medicare options. NMEP activities did not appear to encourage other beneficiaries to learn more about choices available in the program. In this sense, NMEP has not been fully successful in promoting active, informed choice. Beneficiaries’ coverage decisions will soon become more important because these choices will be binding for a much longer period of time. Currently, beneficiaries may change health plans or switch between the traditional FFS and M+C programs monthly. However, in November and December 2001 beneficiaries will select how they will receive Medicare coverage during 2002—under the traditional FFS program arrangements or through a specific M+C plan. After beneficiaries make their initial selection they typically will have only one opportunity to switch their coverage arrangements until the start of the new benefit year in 2003. The agency only recently began studying approaches that might encourage more beneficiaries to actively consider their Medicare coverage options. However, CMS’ ability to modify NMEP to better promote active, informed choice or even to maintain current activities, may be constrained by BBA’s statutory provisions and the short time frames that precede each open enrollment period. Moreover, future NMEP activities will have to compete with other Medicare priorities for funding. 
To better promote beneficiaries’ active and informed selections among their Medicare coverage options, the Congress may want to consider allowing CMS more flexibility in conducting NMEP activities, especially with regard to the content, format, medium, and timing of information that the agency distributes to beneficiaries. In written comments, CMS stated that the agency generally agreed with the findings and observations in our report. CMS said that one of its primary goals is to ensure that Medicare beneficiaries have the information they need to make informed choices. The agency stated that it has been working to improve NMEP each year. It noted that recent and planned major improvements include an expansion of the hours of operation for the telephone help line, new information tools that telephone customer service representatives can use to help callers consider their health plan choices, and an advertising campaign designed to publicize Medicare information resources. CMS concurred with our matter for congressional consideration, stating that additional latitude in the conduct of NMEP activities could assist agency efforts to respond to beneficiary information needs in an appropriate and timely manner. CMS also provided technical comments, which we incorporated as appropriate. (CMS’ comments appear in app. III.) We are sending copies of this report to the CMS Administrator and other interested parties who request them. If you or your staffs have any questions about this report, please call me at (202) 512-7119. This report was prepared under the direction of James Cosgrove, Assistant Director, by Cam Zola, Linda Radey, Jennifer Podulka, and Richard Neuman. To do our work we reviewed relevant sections of BBA and BBRA. We also interviewed various HCFA officials responsible for operating the different elements of NMEP.
We also spoke with representatives from beneficiary advocacy groups (AARP, Medicare Rights Center, and the Center for Medicare Education) and health care plan associations (American Association of Health Plans and the Health Insurance Association of America). We analyzed information on the funding sources and costs for operating the program for the first 3 fiscal years (1998-2000). Further, we analyzed various operating results for the telephone help line and the Internet site. We reviewed the results of various assessments done by HCFA and its contractors on several aspects of the program. In addition, we spoke with officials in two HCFA regional offices about NMEP and specifically the community-level education effort known as REACH. Our work was done from November 2000 through August 2001 in accordance with generally accepted government auditing standards.

HCFA produced and distributed a variety of printed materials to help beneficiaries understand the Medicare program and the options available to them. The most widely distributed document is the Medicare & You handbook. However, as listed in the handbook and below, the agency also produces more than two dozen educational booklets and brochures on specific topics, including Medicare managed care.

Medicare+Choice: HCFA Actions Could Improve Plan Benefit and Appeal Information (GAO/T-HEHS-99-108, Apr. 13, 1999).
Medicare+Choice: New Standards Could Improve Accuracy and Usefulness of Plan Literature (GAO/HEHS-99-92, Apr. 12, 1999).
Medicare Managed Care: Information Standards Would Help Beneficiaries Make More Informed Health Plan Choices (GAO/T-HEHS-98-162, May 6, 1998).
Medicare Managed Care: HCFA Missing Opportunities to Provide Consumer Information (GAO/T-HEHS-97-109, Apr. 10, 1997).
Medicare HMOs: Potential Effects of a Limited Enrollment Period Policy (GAO/HEHS-97-50, Feb. 28, 1997).
Medicare: HCFA Should Release Data to Aid Consumers, Prompt Better HMO Performance (GAO/HEHS-97-23, Oct. 22, 1996).
The Balanced Budget Act of 1997 (BBA) established the Medicare+Choice (M+C) program to expand health plan choices. BBA permitted Medicare participation by preferred provider organizations, provider-sponsored organizations, and insurers offering private fee-for-service plans or medical savings accounts. It also encouraged the wider availability of health maintenance organizations, which have long been an option for many beneficiaries. To help beneficiaries understand and consider all of their Medicare options, the National Medicare Education Program offers a toll-free help line, informational mailings to beneficiaries, an Internet site, and educational and publicity campaigns. During fiscal years 1998 through 2000, the Health Care Financing Administration (HCFA) spent an average of $107.8 million on the program annually. Most of the money came from user fees collected from M+C plans. Reaction to the program has generally been positive among beneficiaries and beneficiary advocacy groups, but representatives of M+C plans offered a mixed assessment. Program activities have increased the information available to beneficiaries on Medicare, the M+C program, and specific health plans. However, the extent to which the program has motivated beneficiaries to actively weigh their health plan options is unknown.
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to participate in the Subcommittee’s oversight hearing on the U.S. Postal Service. My testimony will (1) focus on the performance of the Postal Service and the need for improving internal controls and protecting revenue in an organization that takes in and spends billions of dollars each year and (2) highlight some of the key reform and oversight issues that continue to challenge the Postal Service and Congress as they consider how U.S. mail service will be provided in the future. I will also provide some observations from our ongoing work relating to labor-management relations at the Postal Service and other areas. My testimony is based on our ongoing work and work that we completed over the past year. First, I would like to discuss both the reported successes and some of the remaining areas of concern related to the Postal Service’s performance. Last year, the Postal Service reported that it had achieved outstanding financial and operational performance. Financially, the Postal Service had the second most profitable year in its history. According to the Postal Service’s 1996 annual report, its fiscal year 1996 net income was $1.6 billion. Similarly, with regard to mail delivery service, the Postal Service continued to meet or exceed its goals for on-time delivery of overnight mail. Most recently, the Postmaster General announced that, during 1996, the Postal Service delivered 91 percent of overnight local residential mail on time or better. Additionally, during fiscal year 1996, the Postal Service’s volume exceeded 182 billion pieces of mail and generated more than $56 billion in revenue. While these results are encouraging, other performance data suggest that some areas of concern warrant closer scrutiny. For example, last year’s delivery of 2-day and 3-day mail—at 80 and 83 percent respectively—did not score as high as overnight delivery. 
Such performance has raised a concern among some customers that the Postal Service’s emphasis on overnight delivery is at the expense of 2-day and 3-day mail. Additionally, although its mail volume continues to grow, the Postal Service is concerned that customers increasingly are turning to its competitors or alternative communications methods. In 1996, mail volume increased by about one-half of the Service’s anticipated increase in volume. The Postal Service’s financial results showed that its 1996 operating expenses increased 4.7 percent compared to a 3.9 percent increase in operating revenues. Labor costs, which include pay and benefits, continued to account for almost 80 percent of the Postal Service’s operating expenses, and the Postal Service expects that its costs for compensation and benefits will grow more than 6 percent in 1997. Moreover, controlling costs will be critical with regard to capital investments in 1997, as the Postal Service plans to commit $6 billion to capital improvements. Over the next 5 years, the Service plans to devote more than $14 billion in capital investments to technology, infrastructure improvements, and customer service and revenue initiatives. The Postal Service’s continued success in both operational and financial performance will depend heavily on its ability to control operating costs, strengthen internal controls, and ensure the integrity of its services. However, we found several weaknesses in the Postal Service’s internal controls that contributed to unnecessary cost increases. We reported in October 1996 that internal controls over Express Mail Corporate Accounts (EMCA) were weak or nonexistent, which resulted in the potential for abuse and increasing revenue losses over the past 3 fiscal years. Specifically, we found that some mailers obtained express mail services using invalid EMCAs and that the Postal Service did not collect the postage due.
Consequently, in fiscal year 1995, the Postal Service lost express mail revenue of about $800,000, primarily because it had not verified EMCAs that were later determined to be invalid. In response, the Postal Service now requires express mail accepted at customers’ locations to be checked for valid EMCA numbers before it enters the mail system. Similarly, we reported in June 1996 that weaknesses in the Postal Service’s controls for accepting bulk business mail prevented it from having reasonable assurance that all significant amounts of postage revenue due were received when mailers claimed presort/barcode discounts. We reported that during fiscal year 1994, as much as 40 percent of required bulk mail verifications were not performed. Bulk mail totaled almost one-half of the Postal Service’s total revenue of $47.7 billion in fiscal year 1994. At the same time, we found that supervisors performed less than 50 percent of the required follow-up verifications to determine the accuracy of the clerks’ work. In response to our recommendations, the Postal Service is developing new internal controls and strengthening existing ones to help prevent revenue losses in bulk mailings. For example, the Postal Service plans to improve the processes used in the verification of mail, including how units are staffed, how verifications are performed, and how results of acceptance work are reported and reviewed. Another area of recent concern has been the overall integrity of the Postal Service’s acquisitions. We concluded, in our January 1996 report, that the Postal Service did not follow required procedures for seven real estate or equipment purchases. We estimated that these seven purchases resulted in the Postal Service’s expending about $89 million on penalties and on unusable or marginally usable property. Three of the seven purchases involved ethics violations arising from the contracting officers’ failure to correct situations in which individuals had financial relationships with the Postal Service and with certain offerors.
We also pointed out that the Office of Government Ethics was reviewing the Postal Service’s ethics program and reported that all areas of the program required improvement. The Office of Government Ethics subsequently made a number of recommendations designed to ensure that improvement of the Postal Service’s ethics program continues through more consistent oversight and management support. Since our January 1996 report, the Office of Government Ethics has completed three reviews to follow up on its open recommendations. Recently, the Postal Service developed guidance for avoiding conflicts of interest and filing financial disclosure reports as well as established procedures to ensure that the Office of Government Ethics is notified about all conflict-of-interest violations that are referred to the Department of Justice. As a result of these actions, the Office of Government Ethics closed its remaining open recommendations. Additionally, strengthening program oversight is essential to effective mail delivery. We found that the Postal Service did not exercise adequate oversight of its National Change of Address (NCOA) program. We reported that the Postal Service took a positive step toward dealing with the inefficiencies of processing misaddressed mail. However, at the same time, we found that the NCOA program was operating without clear procedures and sufficient oversight to ensure that the program was operating in compliance with the privacy provisions of federal laws. Accordingly, we recommended that the Postal Service strengthen oversight of NCOA by developing and implementing written oversight procedures. In response to our recommendation, the Postal Service developed written oversight procedures for the NCOA program. Most recently, we issued a report that describes how the Postal Service closes post offices and provides information on the number closed since 1970—over 3,900 post offices. 
We also provided information on the number of appeals and their dispositions, as well as some information about the communities where post offices were closed in fiscal years 1995 and 1996. Generally, the Postal Service initiated the closing process after a postmaster vacancy occurred through retirement, transfer, or promotion or after the termination of the post office building’s lease. In each case, the Postal Service proposed less costly alternative postal services to the affected community, such as establishing a community post office operated by a contractor or providing postal deliveries through rural routes and cluster boxes. One key reform issue involves proposed changes to the Private Express Statutes. These Statutes were set up to ensure that the Postal Service has enough revenue to provide universal access to postal services to the general public and that certain mail, such as First-Class, will bear a uniform rate. In our September 1996 report, we emphasized the importance of recognizing the Statutes’ underlying purpose and determining how changes may affect universal mail service and uniform rates. Most important among the potential consequences is that relaxing the Statutes could open First-Class mail services to additional competition, thus possibly affecting postal revenues and rates and the Postal Service’s ability to carry out its public service mandates. However, at the same time, the American public could benefit through improved service. It will be important to take into account the possible consequences for all stakeholders in deciding how mail services will be provided to the American public in the future. Another key reform issue is the future role of the Postal Service in the constantly changing and increasingly competitive communications market. For example, the use of alternative communications methods such as electronic mail, faxes, and the Internet continues to grow at phenomenal rates in the United States and is beginning to affect the Postal Service’s markets.
At the same time, the Postal Service’s competitors continue to challenge it for major shares of the communications market. According to the Postmaster General, the Postal Service has been losing market share in five of its six product lines. It seems reasonable to assume that these alternative communications methods are likely to be used more and more. In addition, international mail has become an increasingly vital market in which the Postal Service competes. In our March 1996 report, we pointed out that, although the Postal Service has more flexibility in setting international rates, it still lost business to competitors because rates were not competitive and delivery service was not reliable. We also identified several issues surrounding the Postal Service’s role in the international mail arena that remain unresolved. Chief among them is the appropriateness of the Postal Service’s pricing practices in setting rates for international mail services. We also reviewed postal reform in other countries to learn about their experiences. Recently, we issued a report on Canada’s efforts since 1981 to reform its postal service, the Canada Post Corporation (CPC). Although CPC retained basic letter mail services at a uniform rate, it also reduced the frequency of mail delivery to some businesses, as well as in urban and rural areas. CPC uses a regulatory rate-making process that includes the opportunity for public comment and government approval for basic domestic and international single-piece letters. However, postage rates for other mail services can be approved by CPC without issuing regulations or obtaining government approval. Some of the key concerns that have been raised by CPC customers include CPC’s closure of rural post offices and its conversion of others to private ownership. In addition, CPC’s competitors have expressed concern about whether CPC is cross-subsidizing the prices of its courier services with monopoly revenues. 
The Canadian government has responded to these concerns by continuing its moratorium on post office closings and directing CPC to discontinue delivery of unaddressed advertising mail. The government is also considering a call for additional government oversight of CPC. Mr. Chairman, as you are aware, we also have a number of ongoing reviews related to postal reform. For example, in concert with your focus on the future role of the Postal Service, we are currently reviewing the role and structure of the Postal Service’s Board of Governors in order to determine its strengths and weaknesses. The Board of Governors is responsible for directing and controlling the expenditures of the Postal Service, reviewing its practices, participating in long-range planning, and setting policies on all postal matters. In addition to obtaining the views of current and former Board members, we plan to provide information on the role and structure of Boards in other types of government-created organizations. Another issue important to postal reform that we are reviewing involves access to mailboxes. More specifically, we plan to provide information on (1) public opinions on the issue of mailbox restrictions; (2) views of the Postal Service and other major stakeholders; and (3) this country’s experience with mailbox security and enforcement of related laws, compared with the experiences in selected other countries. One of the most important areas for oversight is labor-management relations. As the Postal Service focuses on the significant challenges it faces to compete in today’s communications marketplace, unresolved labor-management relations disputes continue to hinder efforts to improve productivity. Generally, the long-standing labor-management problems we identified in 1994 still remain unresolved, despite the initiatives that have been established to address them.
For example, the number of grievances requiring formal arbitration has increased almost 76 percent, from about 51,000 in fiscal year 1993 to over 90,000 in fiscal year 1996. These difficulties continue to plague the Service primarily because the major postal stakeholders (the Postal Service, four major unions, and three management associations) cannot all agree on common approaches for addressing their problems. We continue to believe that until the major postal stakeholders develop a framework agreement that would outline common objectives and strategies, efforts to improve labor-management relations will likely continue to be fragmented and difficult to sustain. The Government Performance and Results Act (GPRA) provides a mechanism that may be useful in focusing a dialogue that could lead to a framework agreement. GPRA provides a legislatively based mechanism for the major stakeholders, including Congress, to jointly engage in discussions that focus on an agency’s mission and on establishing goals, measuring performance, and reporting on mission-related accomplishments. GPRA can be instrumental to the Postal Service’s efforts to better define its current and future role. GPRA also emphasizes the need for stakeholders to recognize and address key internal and external factors that could affect the ability to achieve future goals. The GPRA consultation process provides the major postal stakeholders and Congress with opportunities to better understand the Service’s mission, proposed goals, and most importantly, the strategies to be used in attaining these goals, especially those that relate to the long-standing labor-management relations problems that challenge the Service. Given these challenges, GPRA provides a forum for stakeholders to participate in developing and reaching consensus on strategies for attaining results-oriented goals. The Postal Service is accepting public comments through June 1, 1997, on how the Service can best achieve the three major goals identified in the Federal Register notice.
This comment period provides an opportunity for those who might be affected by decisions relating to the future of the Postal Service to voice their views on the strategies to be used by the Postal Service. Other forums may also be appropriate to further discuss issues that may be pertinent to specific stakeholders during this stage of the implementation process. As results-oriented goals are established, the related discussions can also provide a foundation for the stakeholders to reach consensus on a framework agreement. Successful labor-management relations will be critical to achieving the Postal Service’s goals. The Postal Service and Congress will need results-oriented goals and sound performance information to most effectively address some of the policy issues that surround the Postal Service’s performance in a dynamic communications market. Recognizing that the changes envisioned by GPRA do not come quickly or easily, sustained oversight by the Postal Service and Congress will be necessary. Finally, several other areas will likely continue to require the attention of both the Postal Service and Congress. One such area is the Postal Service’s automation efforts. The Postal Service has spent billions of dollars to ensure that an increase in productivity and an adequate return on planned investments are realized. Another area is the Postal Service’s 5-year capital investment plan for 1997-2001. It calls for investing $14.3 billion, of which $3.6 billion is designated for technology investments. Also included is $6.6 billion for planned infrastructure improvements such as maintaining and improving over 35,000 postal facilities and upgrading the vehicle fleet of more than 200,000 vehicles. In addition, customer satisfaction at both the residential and business levels will continue to be a critical area as the Postal Service strives to improve customer service in order to remain competitive. 
The Postal Service has made considerable progress in improving its financial and operational performance. Sustaining this progress will be dependent upon ensuring that the key issues we identified, such as controlling costs, protecting revenues, and clarifying the role of the Postal Service in an increasingly competitive communications market, are effectively addressed by the Postal Service and Congress. Mr. Chairman, this concludes my prepared statement. I have attached a list of our Postal Service products issued since January 1996. I would be pleased to respond to any questions you or members of the Subcommittee may have. U.S. Postal Service: Information on Emergency Suspensions of Operations at Post Offices (GAO/GGD-97-70R, April 23, 1997). U.S. Postal Service: Information on Post Office Closures, Appeals, and Affected Communities (GAO/GGD-97-38BR, Mar. 11, 1997). Postal Reform in Canada: Canada Post Corporation’s Universal Service and Ratemaking (GAO/GGD-97-45BR, Mar. 5, 1997). U.S. Postal Service: Revenue Losses From Express Mail Accounts Have Grown (GAO/GGD-97-3, Oct. 24, 1996). Postal Service: Controls Over Postage Meters (GAO/GGD-96-194R, Sept. 26, 1996). Inspector General: Comparison of Certain Activities of the Postal IG and Other IGs (GAO/AIMD-96-150, Sept. 20, 1996). Postal Service Reform: Issues Relevant to Changing Restrictions on Private Letter Delivery (GAO/GGD-96-129A/B, Sept. 12, 1996). U.S. Postal Service: Improved Oversight Needed to Protect Privacy of Address Changes (GAO/GGD-96-119, Aug. 13, 1996). U.S. Postal Service: Stronger Mail Acceptance Controls Could Help Prevent Revenue Losses (GAO/GGD-96-126, June 25, 1996). U.S. Postal Service: Unresolved Issues in the International Mail Market (GAO/GGD-96-51, Mar. 11, 1996). Postal Service: Conditions Leading to Problems in Some Major Purchases (GAO/GGD-96-59, Jan. 18, 1996).
GAO discussed: (1) the performance of the Postal Service (USPS) and the need for improving internal controls and protecting revenue in an organization that takes in and spends billions of dollars each year; (2) key reform and oversight issues that continue to challenge USPS and Congress as they consider how U.S. mail service will be provided in the future; and (3) its ongoing work relating to labor-management relations at USPS and other issues.
GAO noted that: (1) USPS reported that fiscal year (FY) 1996 represented the second year in a row that its financial performance was profitable and operational performance improved; (2) USPS's 1996 net income was $1.6 billion and it delivered 91 percent of overnight mail on time; (3) additionally, for FY 1996, USPS's volume exceeded 182 billion pieces of mail and generated more than $56 billion in revenue; (4) while these results are encouraging, other performance data suggest that some areas warrant closer scrutiny; (5) last year's delivery of 2-day and 3-day mail, at 80 and 83 percent respectively, did not score as high as overnight delivery; (6) the concern among customers is that USPS's emphasis on overnight delivery is at the expense of 2-day and 3-day mail; (7) additionally, although its mail volume continues to grow, USPS is concerned that customers increasingly are turning to its competitors or alternative communications methods; (8) in 1996, mail volume increased by about one-half of USPS's anticipated increase in volume; (9) containing costs is another key challenge that GAO has reported on previously; (10) GAO has also found several weaknesses in USPS's internal controls that contributed to increased costs; (11) USPS's continued success in both financial and operational performance will depend heavily on controlling operating costs, strengthening internal controls, and ensuring the integrity of its services; (12) the prospect that pending postal legislation may place USPS in a more competitive arena with its private sector counterparts has prompted congressional consideration of some key reform issues; (13) these issues include how proposed changes to the Private Express statutes may affect universal mail service, postal revenues, and rates; (14) another reform issue is the future role of USPS in an increasingly competitive, constantly changing communications market; (15) congressional oversight remains a key tool for improving the organizational 
performance of USPS; (16) one of the most important areas for oversight is labor-management relations; (17) despite the initiatives that have been established to address them, the long-standing labor-management relations problems GAO identified in 1994 remain unresolved; (18) the Government Performance and Results Act provides an important avenue for stakeholders in reaching a consensus for addressing such problems; and (19) also, USPS's automation efforts will continue to require the attention of both USPS and Congress to ensure that increased productivity and an adequate return on investments are realized.
The National Park Service Organic Act of 1916 established the Park Service within the Department of the Interior to promote and regulate the use of the National Park System with the purpose of conserving the scenery, natural and historic objects, and wildlife therein and to leave them unimpaired for the enjoyment of future generations. The Park Service manages its responsibilities through its headquarters office located in Washington, D.C., seven regional offices, and 409 individual park units that are part of the system. Park unit types include national parks, such as Yellowstone and Great Smoky Mountains; national historic parks, such as Valley Forge and the Chesapeake and Ohio Canal; national battlefields, such as Antietam and Fort Necessity; national historic sites, such as Ford’s Theatre and Theodore Roosevelt’s birthplace; national monuments, such as Muir Woods and the Statue of Liberty; national preserves, such as the Yukon-Charley Rivers and Big Cypress; national recreation areas, such as Lake Mead and the Chattahoochee River; and national lakeshores, such as Sleeping Bear Dunes and the Apostle Islands. Funding for the Park Service comes from two sources: (1) annual appropriations and (2) fees, donations, and other funding sources. Annual appropriations. The Park Service generally receives funding through annual appropriations acts, which provide funds used by park units or applicable entities, such as states and local governments, in the following five accounts: Operation of the National Park System. Provides base funding for the operation of park units and for Park Service-wide programs. This funding is used by park units for visitor services, park protection, and maintenance projects, among other things. National Recreation and Preservation. Supports programs that assist state, local, and tribal governments, and private organizations with outdoor recreation, preservation, and environmental compliance. Historic Preservation Fund.
Provides grants to state, territorial, and tribal governments and certain private groups for preserving historical and cultural resources. Construction. Supports planning and implementation of major rehabilitation and replacement projects at park units, along with unplanned, emergency construction projects. Land Acquisition and State Assistance. Uses funding from the Land and Water Conservation Fund to support Park Service land acquisition activities and provide grants to state and local governments for the purchase of land for recreation activities. Fees, donations, and other funding sources. The Park Service collects and uses funds from fees, donations, and other funding sources. These include: Entrance fees and amenity fees. The Federal Lands Recreation Enhancement Act (FLREA) authorizes the Park Service to collect and use recreation fees, including entrance fees and amenity fees for certain equipment and services, such as campgrounds. Franchise fees and commercial use authorization fees. The National Park Service Concessions Management Improvement Act of 1998 (Concessions Act) authorizes the Park Service to collect and use certain fees from concessioners that operate businesses in park units. Specifically, the Park Service may collect and use franchise fees from concessioners who operate restaurants, lodges, and other business operations inside park units. These fees are generally assessed as a percentage of the concessioners’ total gross receipts. The Park Service also collects fees for commercial use authorizations, which include small-scale commercial activities, such as leading workshops or tours. Rents. The Park Service is authorized to collect and use certain rents. Through its leasing program, the Park Service leases buildings and associated property to businesses, individuals, and government entities. Donations. 
The Park Service is authorized by law to receive and use cash donations and in-kind donations from individuals, nonprofit organizations, and corporations. Examples of in-kind donations include artifacts or services provided by nonprofit partner groups on behalf of the Park Service. The Park Service is also authorized to develop agreements with nonprofit partner groups, known as friends groups and cooperating associations. In general, friends groups engage in fundraising efforts on behalf of individual parks units, while cooperating associations provide interpretive services for visitors and manage retail operations at parks and share a portion of their proceeds from these operations with park units. The Park Service also has a volunteer program authorized by the Volunteers in the Parks Act of 1969. Volunteers help with a variety of tasks at park units, including doing maintenance work and providing interpretive services to visitors. Other funding sources. Other funding sources include transportation fees the Park Service collects to operate public transportation systems in park units; rents collected for employee housing; and funding from the U.S. Department of the Treasury for certain pension payments for United States Park Police annuitants. According to our analysis of OMB MAX data, total funding for the Park Service increased in nominal dollars from $2.7 billion in fiscal year 2005 to $3.1 billion in fiscal year 2014 (15 percent), as shown in table 1. However, when adjusted for inflation, total funding for the Park Service declined by 3 percent during this period. For fiscal years 2005 through 2014, the largest component of funding for the Park Service was its annual appropriations, which comprised 88 percent of its total funding on average, with fees, donations, and other funding sources comprising the remaining 12 percent. 
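The nominal-versus-real comparison above can be reproduced with a small calculation. This is an illustrative sketch only: the cumulative 2005-2014 inflation factor below is an assumed value chosen so that the arithmetic matches the reported 15 percent nominal increase and 3 percent real decline; it is not a figure from the GAO analysis, which relied on OMB MAX data and official deflators.

```python
# Illustrative sketch: nominal vs. inflation-adjusted change in total
# Park Service funding, fiscal years 2005-2014. The inflation factor is
# an assumption for the example, not a value from the GAO analysis.

nominal_2005 = 2.7  # total funding, billions of dollars, FY 2005
nominal_2014 = 3.1  # total funding, billions of dollars, FY 2014
cumulative_inflation = 1.186  # assumed price growth over the period

nominal_change = (nominal_2014 / nominal_2005 - 1) * 100
real_2014 = nominal_2014 / cumulative_inflation  # FY 2014 funding in 2005 dollars
real_change = (real_2014 / nominal_2005 - 1) * 100

print(f"Nominal change: {nominal_change:+.0f}%")  # about +15%
print(f"Real change:    {real_change:+.0f}%")     # about -3%
```

The same deflation step explains why appropriations that rose 9 percent in nominal terms over the period nonetheless declined 8 percent after adjusting for inflation.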
Over time, there has been some growth in the proportion of total Park Service funding that fees, donations, and other funding sources comprise (see fig. 1). Annual appropriations increased by 9 percent overall from fiscal year 2005 through fiscal year 2014 in nominal terms but declined by 8 percent after adjusting for inflation (see fig. 2). A large increase in appropriations came in fiscal year 2009, when the American Recovery and Reinvestment Act provided an additional $750 million to the Park Service. Since fiscal year 2010, annual appropriations for the Park Service have generally declined. Park Service officials told us that flat or declining appropriations have made it difficult to cover increases in salary and expenses for agency employees and to address the agency’s growing maintenance backlog. In addition, the number of park units in the system has been growing, and some Park Service officials said that this increase in units meant that the agency’s appropriations had to be divided among an increasing number of units. In 2006, there were 390 park units, and there are 409 park units as of November 2015. Fees, donations, and other funding sources grew 64 percent in nominal terms from fiscal year 2005 through fiscal year 2014 and have increased in most years, as shown in figure 3. Even after adjusting for inflation, funding from these sources increased by 39 percent during this period. Recreation fees, commercial service fees, and cash donations comprised on average 74 percent of the total fees, donations, and other funding sources for the Park Service for fiscal years 2005 through 2014. Park Service officials told us that these three revenue sources provided important support to park units. For example, from 2005 through 2011, recreation fees funded an estimated 8,575 projects at park units, including maintenance and other projects that enhanced the visitor experience, according to a 2012 report on the implementation of FLREA. 
In addition, during that time period, the Park Service used revenue from franchise fees to reimburse numerous concessioners that improved facilities in park units. Further, philanthropic donations enabled some parks to complete projects, such as improving trails or rehabilitating visitor centers. Some Park Service officials voiced concern to us that fees and donations could be viewed as a substitute for annual appropriations. They said that these sources are to be viewed as a supplement to annual appropriations but not a replacement. Revenues from recreation and commercial service fees and donations from philanthropic sources grew from fiscal year 2005 through fiscal year 2014. Specifically, revenues from recreation fees increased 26 percent during the period, while revenue from commercial service fees nearly tripled. Meanwhile, cash donations from philanthropic sources have fluctuated, while volunteer support has consistently increased. According to our analysis of Park Service data, revenues from recreation fees increased from about $148 million to about $186 million (26 percent) during the period we examined, as shown in figure 4. With the exception of fiscal years 2009, 2010, and 2013, revenues from recreation fees increased over the prior year. Revenues from recreation fees are comprised largely of entrance fees and amenity fees for equipment, services, and facilities, such as campsites. Revenues from entrance fees have been higher than amenity fees from fiscal year 2005 through fiscal year 2014, accounting for about 77 percent on average of the total recreation fees collected, with amenity fees on average accounting for the remaining 23 percent. Although recreation fee revenues have been generally increasing, the number of park units collecting entrance fees has declined from fiscal year 2008, the first year for which data are available according to Park Service officials, through fiscal year 2015. 
The number of park units charging amenity fees has remained largely constant over this period (see table 2). The decision to charge recreation fees involves individual park units, which make proposals to charge fees, and regional and headquarters officials who approve these fees. According to Park Service guidance, one of the guiding principles of the program is that parks will not collect recreation fees if the cost of collection exceeds the amount of revenue generated. For example, some park units do not have many visitors, so the administrative costs of charging these fees, which include paying staff to collect them, purchasing cash registers to process them, and hiring an armored car service to deposit them, can outweigh the anticipated revenues. Even with the decline in the number of parks collecting recreation fees, revenues grew over the period we examined, largely because the fees collected at a small number of parks account for a large portion of the fees collected. In fiscal year 2014, five national parks—Grand Canyon, Yosemite, Yellowstone, Rocky Mountain, and Zion—accounted for $59.6 million or about one-third of the total recreation fees collected that year. In addition, according to our analysis of the Park Service data, the top 50 park units collected about 88 percent of the total recreation fees collected in fiscal year 2014. According to our analysis of Park Service data, revenues from commercial service fees and rents almost tripled during the period, growing from almost $33 million in fiscal year 2005 to about $95 million in fiscal year 2014, as shown in figure 5. Revenues from franchise fees, rents, and commercial use authorization fees all grew during this period. Specifically, revenues from franchise fees almost tripled, growing from about $29 million in fiscal year 2005 to about $85 million in fiscal year 2014. 
In addition, rents grew from about $2 million in fiscal year 2005 to almost $8 million in fiscal year 2014, and fees from commercial use authorizations more than doubled from about $988,000 in fiscal year 2005 to about $2 million in fiscal year 2014. The Park Service generates the vast majority of its commercial service revenues from franchise fees associated with concession contracts. Specifically, these fees accounted for about 90 percent of all commercial service revenues in fiscal year 2014. A small number of large concessions contracts accounted for the majority of these fees. According to Park Service officials, data from concessioners’ 2014 annual financial reports showed that 51 of the agency’s approximately 500 concessions contracts generated 84 percent of all franchise fees paid to the Park Service in fiscal year 2014. Leases generated about 8 percent of fiscal year 2014 commercial service revenues, and commercial use authorizations generated about 2 percent. The growth in commercial service revenues can be attributed in part to an increase in the franchise fee rates that concessioners are paying to the Park Service. According to Park Service officials, franchise fees for individual contracts have increased an average of 2.4 percentage points as they have turned over and been awarded under the terms of the new Concessions Act. In addition, our analysis of Park Service data showed that the number of park units collecting commercial service fees has grown. In fiscal year 2005, 146 parks reported collecting at least one type of commercial service fee; the number grew to 176 by 2014. According to Park Service officials, growth in commercial service revenues can also be attributed to improved economic conditions, which have led to increased visitation levels at park units and higher gross receipts for concessioners. 
Cash donations to the Park Service did not have a discernible trend from fiscal year 2005 through fiscal year 2014, and there was considerable fluctuation in donations during this period (see fig. 6), ranging from $19.5 million in fiscal year 2011 to $94.7 million in fiscal year 2014. According to a senior Park Service official, this variation is largely due to donors contributing to large projects at particular park units in certain years. For example, $65.5 million of the $94.7 million in total cash donations (69 percent) the Park Service received in fiscal year 2014 came from donations to renovate the museum and build a new visitor center at the Gateway Arch in St. Louis, which is part of the Jefferson National Expansion Memorial. Park Service officials also pointed out that economic factors play a role in influencing donation trends, noting that donations declined after the 2008 U.S. recession. Cash donations to the Park Service come from a variety of sources, including friends groups and cooperating associations. These groups also provide in-kind donations to the Park Service, which include artifacts donated or services provided to the agency. In general, we found that total donations from friends groups—both cash and in-kind donations—rose from about $68 million in fiscal year 2005 to about $147 million in 2011, the latest year for which data are available according to Park Service officials. Regarding donations from cooperating associations, Park Service officials and cooperating association officials told us that donations have generally been increasing, although this trend has been tempered by declining book sales nationwide, which have traditionally been a key source of cooperating associations’ revenues at the stores they operate. According to Park Service data, volunteer support for the agency has increased steadily. 
Specifically, the number of volunteers increased from about 132,000 in fiscal year 2005 to about 248,000 in fiscal year 2014, and the estimated cash value of their work increased from about $93 million to about $155 million during this period. The number of volunteers increased every year, except for fiscal year 2013. According to a senior Park Service official, the decline in the number of volunteers that year was attributable to Hurricane Sandy and the 2013 federal government shutdown, both of which disrupted the typical operations of the Park Service. The Park Service has efforts under way to increase revenues from recreation and commercial service fees along with philanthropic donations. However, certain factors complicate these efforts and limit the agency’s ability to increase revenue from these sources. In 2014, the Park Service encouraged park units that were collecting recreation fees to increase them, and 111 park units subsequently elected to do so, as of September 2015. Park Service officials told us that parks are expected to examine their amenity fees each year; however, there are no plans to regularly reassess entrance fees. The Park Service’s ability to increase funding from recreation fees is also limited by legislation and park-specific characteristics. The Park Service has recently increased entrance and amenity fees at several parks. In August 2014, the Director of the Park Service issued a memorandum that ended a moratorium on entrance fee increases that had been in place since 2008 and updated the agency’s entrance fee rate schedule for the first time since 2006 (see table 3). Like the 2006 fee schedule, the updated schedule separates the park units that collect entrance fees into four groups by type of park unit. If adopted, these fees would represent an increase of 20 to 50 percent in most instances over the 2006 fee schedule, depending on the type of park unit and type of entrance fee being charged. 
Park Service officials told us that the agency estimates that $58 million in additional entrance fees could be generated if all parks charging entrance fees followed the schedule and visitation were not affected by the increase in fees. The Park Service gave discretion to the park units to decide whether to increase entrance fees, and the memo states “if there is significant public controversy, a park may choose not to implement new fees, may phase in the new rates over three years, or delay the new rates until 2016 or 2017.” To assess public reaction to proposed fee changes, the memorandum encouraged park units to conduct public outreach in late summer or fall of 2014. According to Park Service officials we interviewed, park units used different methods to conduct this outreach, including using social media, holding public meetings, and meeting with congressional delegations. According to the memorandum, once public outreach was complete, each park unit that chose to proceed with a rate change had to submit its proposed change to regional and headquarters offices for approval. Of the 130 park units that charged an entrance fee in 2014, 92 proposed increasing the per person entrance fee and 60 proposed increasing the per vehicle entrance fee; all of them received approval to do so. Among those increasing fees, a number of park units chose to charge less by 2017 than the revised entrance fee schedule calls for. Specifically, of the 92 park units that received approval to increase fees for individuals, 59 (64 percent) will be charging an entrance fee for individuals in line with the proposed schedule; the remainder elected to charge less than what the schedule recommends. A senior Park Service official told us that discretion was given to the parks under agency guidance about whether to follow the fee schedule, and fee increases needed to be supported by the public. 
According to the memorandum establishing the new entrance fee schedule, “the goal (if supported by civic engagement) is for all entrance fee parks to align with the standard rate for their group by 2017.” However, several park units that are collecting some type of entrance fee in 2015 did not increase entrance fees and may not align with the fee schedule by 2017 unless they undertake efforts to do so. In addition, the Park Service does not require park units to provide information supporting decisions not to increase entrance fee rates or to increase fees by less than the fee schedule calls for. According to a senior Park Service official, providing this information was not required because it was not compulsory that park units increase their fees. However, Federal Internal Control Standards state that for an agency to run its operations, it must have reliable and timely communication and that information is needed throughout the agency to achieve its objectives. By not requiring that parks provide information on decisions that deviate from the fee schedule, the Park Service may not have relevant information that would help it manage changes to recreation fees more effectively and ensure that park units are taking steps to determine whether entrance fees are set at a reasonable level. Regarding amenity fees, the August 2014 Park Service Director’s memorandum encouraged park units that charge amenity fees to examine them to determine whether they should increase. In order to increase these fees, the memorandum directs park units to conduct studies to compare the amenities offered in their parks and associated fees with those offered by private businesses in the surrounding area. Of the 131 park units that charged amenity fees in 2014, 55 park units received approval from Park Service headquarters to increase their amenity fees. Many of these parks received headquarters approval to increase fees at campgrounds. 
For example, Whiskeytown National Recreation Area in California received approval to increase fees by $5 a night in most cases for recreation vehicle, tent, and horse camping. In general, Park Service officials told us that they expect parks to conduct comparability studies on an annual basis to see if amenity fees should be raised as part of the annual process to request changing fees, which is laid out in Park Service guidance. In contrast, Park Service officials told us there were no plans to periodically review entrance fees to see if they should be increased. Our 2008 guide on federal user fees states that if federal user fees are not reviewed and adjusted regularly, federal agencies run the risk of undercharging or overcharging users. Moreover, Park Service guidance directs the agency to ensure its fees are set at a reasonable level, but this guidance does not direct that these fees be periodically reviewed. In a 2015 report, the Department of the Interior Inspector General recommended that the Park Service establish intervals for periodic reviews of its entrance fees to ensure that the fee schedule remains up to date. Park Service officials told us they were hesitant to commit to such reviews until FLREA is reauthorized because they were unsure whether they would retain the authority to charge entrance fees. However, the Park Service has not required periodic reviews of entrance fees during the 11 years that FLREA has been in place. By not regularly reviewing its entrance fee schedule, the Park Service is missing an opportunity to ensure that these fees are reasonable. The Park Service’s ability to further increase revenues from recreation fees is limited by legislation and park-specific characteristics. Legislation. According to Park Service data, 58 park units are prohibited by law from charging entrance fees. 
For example, the Alaska National Interest Lands Conservation Act prohibits the Park Service from charging entrance fees at all park units in Alaska. FLREA prohibits the Park Service from charging entrance fees at parks that lie within the District of Columbia, and the National Parks and Recreation Act of 1978 prohibits the Park Service from charging entrance fees at Point Reyes National Seashore. Park Service officials also said there were limits to how much the agency could raise fees because FLREA requires that the agency consider the impacts of fees on visitors, and the Park Service did not want to dissuade visitors from coming to parks. For example, there was opposition from the public to implementing entrance fees at additional areas of the Chesapeake and Ohio (C&O) National Historical Park and, in response, park officials withdrew this proposal in February 2015. FLREA also directed the Secretary of the Interior to establish an interagency pass that covers entrance fees and certain amenity fees for all federal recreational lands. The price of the pass is $80 annually, as of October 2015, and covers national park units, as well as recreational lands managed by the U.S. Forest Service, the Bureau of Reclamation, the Bureau of Land Management, and the U.S. Fish and Wildlife Service. However, FLREA limits agencies’ ability to increase revenue from recreation fees. For example, the law prohibits charging entrance fees to persons under 16 years of age. FLREA also requires the Secretary of the Interior to offer a lifetime interagency pass for a one-time $10 price to senior citizens, defined as being over 62 years of age, and requires that free interagency passes be made available to permanently disabled people. While, under FLREA, the price of the annual interagency pass can be changed by the agencies that administer it, the law does not provide this flexibility for the $10 lifetime senior pass, the free annual pass for disabled individuals, or free entry for those under age 16. 
Interior’s Inspector General found that this aspect of FLREA “hampers agencies’ flexibility and their ability to make business decisions” and concluded that free and substantially discounted passes represent missed opportunities for revenue. The price of the senior pass has been $10 since 1993, but a bill introduced in September 2015 would increase this price to a one-time amount matching the price of the annual interagency pass, which is $80, as of October 2015. If this occurred, it could generate about another $35 million in revenue annually, assuming that the same number of passes was sold as in fiscal year 2014, about 500,000. Because of the limitations in FLREA, the Park Service and the other agencies that administer the recreation fee program do not have the flexibility to periodically reassess and change the price of the lifetime senior pass. Providing this flexibility to these agencies would allow them to consider adjusting fees periodically, which is consistent with our guide on federal user fees. Park-specific characteristics. At some park units, collecting recreation fees is precluded by the configuration of the parks or is not economically advantageous. For example, some park units have many entry points, which make the logistics of collecting entrance fees difficult, according to Park Service officials. In addition, as previously described, it may not be economically advantageous to collect recreation fees at some park units. For example, at parks with few visitors, the costs of administering the fee collection program would be a significant portion of the total fees collected, and these parks may not choose to charge an entrance fee. Other park units—for example, national historic sites—may not offer amenities, such as campsites, for which the Park Service could charge fees. 
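The $35 million estimate above can be reproduced with simple arithmetic; a sketch in Python, using the report's rounded figures (the proposed price and the pass volume are as described in the text):

```python
current_senior_price = 10       # one-time lifetime senior pass, in dollars
proposed_senior_price = 80      # proposed price, matching the annual interagency pass
passes_sold_per_year = 500_000  # approximate senior passes sold in fiscal year 2014

additional_revenue = (proposed_senior_price - current_senior_price) * passes_sold_per_year
print(f"${additional_revenue / 1_000_000:.0f} million per year")  # about $35 million
```

The estimate assumes, as the text notes, that sales volume would not fall in response to the higher price.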
The Park Service increased revenues from commercial service fees from fiscal year 2005 through 2014, in part by increasing the franchise fees that concessioners are to pay and by taking steps to make certain contracts more attractive to potential bidders. However, several factors complicate these efforts. The Park Service increased revenues from commercial service fees from fiscal year 2005 through fiscal year 2014, in part by increasing the minimum franchise fees included in contract prospectuses. According to Park Service officials, the agency increased the minimum franchise fee by modifying the process they used to develop contract prospectuses, which describe the services concessioners are to provide, any investments required for the operation, such as maintenance or equipment, and the minimum franchise fee, among other things. After a prospectus is issued, potential concessioners submit bids that include, among other things, the franchise fee they agree to pay if they are awarded the contract—which has to be equal to or higher than the minimum in the prospectus—along with details about the services they propose to provide. Before 1998, when the Concessions Act was enacted, the Park Service set minimum franchise fees based on limited financial reviews of concessioners’ financial statements, according to Park Service officials. After 1998, Park Service officials told us that they started hiring hospitality consultants to help them with the financial aspects of contract prospectuses to meet the requirements of the new act. Specifically, the agency started working with hospitality consultants to conduct in-depth financial analyses to develop minimum franchise fees for the Park Service’s largest contracts—those with gross revenues above $5 million. These analyses involve estimating concessioners’ costs and anticipated revenues and comparing the estimated profitability of the concessions operations with industry standards. 
Park Service officials and hospitality consultants told us that these more sophisticated analyses allowed the agency to better estimate the franchise fees that concessioners could pay while still having a reasonable opportunity for profit, as required by the Concessions Act, which led to higher franchise fees. Park Service officials also made efforts to increase revenues from commercial service fees by working to make concessions contracts more attractive and increasing competition among potential bidders. Park Service officials told us that increased competition among potential concessioners generally results in higher franchise fees in winning contracts. Our analysis of Park Service data also found that increased competition was associated with higher franchise fees. Specifically, based on our analysis of 25 large contracts awarded between fiscal years 2005 and 2014, we found that the Park Service earned an average of 5 percent in franchise fees from contracts that attracted 1 or 2 bidders and an average of 16 percent in franchise fees from contracts that attracted 3 or more bidders. One of the ways that Park Service officials said they tried to make concessions contracts attractive to more potential bidders was by reducing the amount of Leaseholder Surrender Interest (LSI) that had accumulated under certain contracts. LSI generally represents the depreciated value of capital improvements made by a concessioner to a property, such as building a new structure or completing a major rehabilitation. If a contract is awarded to a different concessioner when the contract ends, the Concessions Act requires the previous concessioner to be reimbursed for any LSI. The previous concessioner may be reimbursed by the new contract holder or the Park Service. Park Service officials and concessioners we spoke with said that LSI can create barriers to competition because few companies have the resources to reimburse the previous concessioner. 
Park Service officials told us that they chose to reduce the LSI associated with certain contracts because these contracts would otherwise have attracted few bidders. For example, in 2014, the Park Service spent almost $100 million reducing LSI to increase competition for a large concessions contract at Grand Canyon National Park. The Park Service initially invested $19 million in LSI payments. However, even with this reduction in LSI, the Park Service did not receive offers that met the terms of its first three prospectuses, and agency officials said that the level of LSI remained a barrier to potential bidders. As a result, the Park Service paid an additional $81 million to further reduce LSI. The Park Service received multiple bids on the fourth prospectus for this contract. Park Service officials told us that they plan to award this contract by January 1, 2016. In addition to reducing LSI associated with certain contracts, the Park Service has limited the amount of LSI that potential bidders can incur in new contracts, according to Park Service and concessioner officials we spoke with. Park Service officials told us that limiting LSI could reduce start-up costs associated with future contracts since new concessioners would not have to reimburse previous concessioners for accumulated LSI. However, some concessioners told us that limiting LSI could lead to lower levels of investment in concessioner-run properties, since concessioners may be less likely to make capital improvements if they are not reimbursed for these investments. This, in turn, could contribute to additional asset degradation and increased future maintenance costs, according to some concessioners we spoke with. Park Service officials also have looked for opportunities to increase revenues from leases and commercial use authorizations. 
Specifically, the Park Service hired a national leasing manager in 2015 to formalize its leasing program, and some park units and regions have developed active leasing programs. For example, from 2009 through 2014, the Northeast region increased the number of leases from 25 to 76. As a result, the region more than doubled the revenue it generated from rents and payments made in lieu of rent, which increased from almost $14 million to $38 million during this time period. According to regional Park Service officials, the region increased its leases by increasing the number of full-time leasing positions and by hiring staff with real estate expertise to help advise parks on developing leases and perform market studies to set rental rates. The Park Service also has developed a new policy that has the potential to increase revenues from commercial use authorizations, according to Park Service officials. Traditionally, fees for commercial use authorizations were set to recover costs that park officials incurred administering the program. According to several park unit officials we spoke with, these fees ranged from $100 to $350 per permit. In 2015, the Park Service developed guidance that allows park officials to charge businesses a fee based on a percentage of gross receipts or a fee that is sufficient to cover administrative and management costs incurred issuing these commercial use authorizations—whichever is more. For example, the new guidance allows park officials to charge recreation service providers that generate less than $250,000 in annual gross receipts the greater of either 3 percent of gross receipts or $500. Some businesses operating under commercial use authorizations generate significant revenues, sometimes hundreds of thousands of dollars, according to Park Service officials. 
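The small-provider fee structure described in the 2015 guidance is a greater-of calculation; a minimal sketch in Python (the function name is illustrative, and the rates apply, as described, only to providers with under $250,000 in annual gross receipts):

```python
def cua_fee(gross_receipts):
    """Illustrative sketch of the 2015 commercial use authorization fee
    for providers with annual gross receipts under $250,000: the greater
    of 3 percent of gross receipts or a $500 minimum."""
    if gross_receipts >= 250_000:
        raise ValueError("this rate applies only to providers under $250,000")
    return max(0.03 * gross_receipts, 500.0)

print(cua_fee(10_000))   # 500.0 -- the $500 minimum applies
print(cua_fee(200_000))  # 6000.0 -- 3 percent of gross receipts applies
```

As the examples show, the minimum governs for small operators (3 percent of $10,000 is only $300), while larger operators pay the percentage-based fee.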
As a result, officials said that this shift has the potential to increase revenues because the resulting fees would be higher than the flat fees that have traditionally been charged. Several factors complicate the Park Service’s efforts to increase revenues from commercial service fees. In particular, officials noted that the Park Service is required by law to balance a number of priorities. Specifically, under the Concessions Act, accommodations, facilities, and services offered under a concessions contract must be consistent to the highest practicable degree with the preservation and conservation of park units they are proposed to operate in, provide a reasonable opportunity for profit to concessioners, and offer reasonable rates for facilities and services to the public. The law does not require the Park Service to maximize franchise fees; instead, it states that franchise fees are a lower priority than protecting, conserving and preserving park units or providing necessary and appropriate facilities and services to visitors at reasonable rates. As a result, Park Service officials told us that increasing revenues from franchise fees can be challenging. The Park Service’s efforts to increase revenues from commercial service fees also have been affected by limited competition for some concessions contracts. Our analysis of Park Service data found that 32 percent (8 of 25) of the Park Service’s largest contracts—those generating $5 million or more—awarded between fiscal years 2005 and 2014 attracted one bidder. These 25 contracts generated about 45 percent of the $65 million in franchise fees collected by the Park Service in fiscal year 2013, the most recent year for which data are available. Park Service officials told us that, of these 25 contracts, 2 contracts for lodging services initially received no bids. In addition, some parks offer limited opportunities for revenue generation, which may in turn limit the number of bidders. 
For example, Park Service officials told us that some park units are in remote areas that attract few visitors or have short tourist seasons, which limits the potential profitability of these contracts. According to Park Service officials, they are pleased to receive one bid in such cases. Adjusting a concessions contract to provide additional services, which could increase revenues for concessioners and the Park Service, can be a lengthy process, according to some concessioners we spoke with. For example, one concessioner that provides transportation services at a park told us that his company proposed increasing the number of park visitors transported per day to levels consistent with the park’s management plan. This proposed change took 20 months to be reviewed and approved by the Park Service, which, according to the concessioner, resulted in lost revenue for both the concessioner and the Park Service. Another transportation concessioner told us about similar challenges adjusting his company’s operating plan to increase service hours by 30 minutes. Although such a change in operating hours is generally the purview of the park superintendent, according to the concessioner, the change took 5 years to be approved due to turnover in park leadership and park budget constraints. Park Service officials told us that they act as quickly as possible when they receive requests to change or increase concessioners’ services but, in some cases, it takes time to collect and assess the necessary information to make an informed decision. For example, according to Park Service officials, the agency considers the impact of the proposed change on park operations and on other business operations in the park. Park Service officials also told us that they are developing guidance to establish factors to consider for adding services to help park unit staff when considering concessioners’ requests. 
Proposed factors include whether the proposed services complement the terms of the current contract and whether there will be environmental impacts at the park unit from these services. In addition, concessioners must be fully compliant with their current contracts before requests to add services may be considered. As of August 2015, this guidance was under development, and agency officials were uncertain when it would be issued. Efforts to increase revenues from leases and commercial use authorizations also face challenges. According to Park Service officials, leasing opportunities can be limited because some park units do not have buildings available to lease or the facilities they have are not suitable for leasing due to their poor physical condition. In addition, park officials at two parks we visited told us that they do not have sufficient staff to manage a leasing program. For commercial use authorizations, some park officials we spoke with said that changing their fee structure to one based on a percentage of business owners’ gross revenues could pose a financial burden to smaller businesses that have low profit margins yet provide important services for park visitors. For example, park officials at one park told us that some businesses have held commercial use authorizations to deliver portable toilets, which they said is an essential service during large events. These officials told us that these businesses have relatively low profit margins and may be unwilling to operate in parks if they had to pay higher fees. To increase philanthropic donations, the Park Service is leveraging opportunities arising from its centennial anniversary in 2016, adjusting relevant policies, helping to increase the fundraising capacity of its nonprofit partners, and training its own staff on ways to collaborate more effectively with nonprofit partners. 
Several factors limit the Park Service’s ability to increase philanthropic support, including the limited attractiveness of certain projects needing donations, limited capacity to manage volunteers, and a lack of detailed information on donations. To increase philanthropic donations, the Park Service is leveraging opportunities arising from its centennial anniversary in 2016. The Centennial Campaign has two primary efforts—fundraising and public outreach—both of which are being conducted in collaboration with the Park Service’s congressionally chartered nonprofit partner, the National Park Foundation (Foundation). Fundraising. The Park Service and the Foundation have launched a major fundraising campaign, which aims to raise $250 million in donations by 2018. These funds will be used to support 100 projects that protect resources, connect visitors with the parks, and develop the next generation of park stewards. For example, in the area of protecting resources, projects include rehabilitating Constitution Gardens in Washington, D.C., and restoring an area of large Sequoia trees to a more natural state in Yosemite’s Mariposa Grove. Public Outreach. In collaboration with the Foundation, the Park Service launched the “Find Your Park” campaign in 2015. This public outreach effort is designed to encourage Americans to visit park units, generate interest in parks, and help raise financial and in-kind support for park units. This effort uses social media and disseminates marketing materials online and in park units. The Park Service has partnered on this campaign with certain major donors, each of which made at least $500,000 in contributions to the Foundation to support the campaign. Figure 7 shows examples of posters developed for the campaign to be displayed in park units and also used in online and print advertising. The Park Service also has allocated about $49 million in funds appropriated by Congress for the centennial anniversary. 
Congress has also appropriated funds to be used for matching grants. Specifically, Congress appropriated $25 million in 2008, $15 million in 2010, and $10 million in fiscal year 2015 to match philanthropic donations dollar-for-dollar to fund projects in park units. To be considered, proposed projects are to benefit one or more Park Service areas and have matching donations of at least 50 percent of project costs, according to the Park Service. Agency officials have prioritized projects with higher rates of matching donations and have considered the timeliness of donations, readiness of projects, and whether proposed projects address centennial and service-wide goals, such as high-priority deferred maintenance. Examples include rehabilitating the underground Franklin Court Museum at Independence National Historical Park and reconstructing roads, parking, walks, signs, and pedestrian areas to meet park road standards, accessibility standards, and historical context at Roosevelt Arch in Yellowstone National Park. Since 2014, the Park Service has also been revising its policies related to philanthropy to help increase donations. Specifically, the Park Service has been revising Director’s Order 21—the Park Service’s main policy governing donations and fundraising. Some nonprofit partners told us that this guidance lengthened the donations process in the past by requiring significant Park Service review. Park Service documentation suggests that, when approved, the revised Director’s Order 21 will likely shift greater authority to regional directors and superintendents to accept donations and approve fundraising agreements. For example, current policy allows regional directors to approve donations of less than $1 million. The revised policy, which will likely be signed in early 2016, may allow the Park Service Director to delegate approval authority to regional directors for donations up to $5 million. 
The Park Service is also revising Director's Order 7, which addresses volunteering, and Reference Manual 32, its internal guidance on cooperating associations, and it plans to complete these revisions in late 2015. According to Park Service officials involved with these revisions, the goal is to emphasize the importance of collaboration between the Park Service and its partners. In addition, in January 2015 the Park Service temporarily waived three parts of its policies to help with centennial fundraising efforts. First, Park Service policy generally prohibits naming park assets as a form of donor recognition. For example, buildings, vehicles, and park features are not to bear donor names. The Park Service waived this policy with regard to certain donations for items including benches, bricks, motor vehicles, and rooms in buildings—although buildings themselves are still prohibited from bearing donors' names. Second, Park Service policy generally prohibits donor recognition from including corporate logos in park units. The Park Service waived this policy to allow corporate logos to be included on vehicles under certain conditions. Figure 8 provides an example of a corporate logo displayed on a Park Service vehicle. Third, the Park Service also issued a waiver allowing it to advertise with an alcohol company. All three of these waivers will be effective until 2017, when the Park Service will reevaluate them to determine if they should remain in place, according to a Park Service official. The Park Service has also sought to increase philanthropic donations by encouraging the Foundation to expand and by helping friends groups increase their fundraising capacity. Specifically, the Park Service encouraged the Foundation to restructure and expand its staffing to better align with current philanthropic practices. Since 2008, the Foundation has added 50 people and created three offices focused on corporate giving, private giving, and marketing. 
According to Foundation officials, the Foundation plans to continue increased fundraising efforts after the centennial campaign. With regard to friends groups, the Park Service has begun training its own staff on ways in which they can collaborate more effectively with nonprofit partners on their fundraising efforts. Park unit officials we spoke with told us that they help friends groups with their member outreach by attending fundraising events to describe the park's needs to potential donors. In addition, park unit officials work with friends groups to identify potential projects that need funds and that donors would likely support. Park Service officials told us that improving the fundraising capacity of friends groups is important since several new friends groups have been started in the past 10 years, and many have not yet developed fundraising skills. The Park Service has also taken steps to increase volunteerism. Specifically, the Park Service allocated an additional $2 million in fiscal year 2015 to pay for 70 new volunteer coordinators, known as Centennial Volunteer Ambassadors. According to the Park Service, these 1-year internship positions will be dispersed throughout the Park Service. These coordinators will be responsible for helping to design and coordinate volunteer training and service. They will also perform outreach to attract volunteers. According to Park Service officials, several factors hamper the agency's ability to increase philanthropic donations. One factor they cited is that the types of projects that need funding are not always attractive to donors. For example, routine maintenance on buildings or sewer systems may be less attractive to donors than large, visible projects, such as the construction of a visitor center. In addition, the location of some parks can limit their ability to obtain philanthropic support. 
For example, Park Service officials in one regional office told us that some friends groups have difficulty raising large sums of money because their parks are not near urban areas with large pools of potential donors. Similarly, some parks may not generate as much interest as larger, better-known parks and may struggle to attract donors. Another factor that Park Service officials cited is some internal resistance to relying on outside funding sources. For example, Park Service officials told us that some agency employees have expressed concern about some efforts to increase philanthropic donations—particularly the recent temporary waiver on partnering with corporations, which they view as commercializing the parks. The Park Service also has limited capacity to manage volunteers. According to Park Service officials, volunteers provide essential support at many parks—including helping with maintenance projects and interpretive assistance—but their efforts must be managed. In addition, the number of people who want to volunteer at some parks outpaces the availability of staff to manage them. Park Service officials explained that some park units do not have dedicated volunteer coordinators and instead assign these tasks as collateral duties due to budget constraints. Park Service officials told us that if they were able to dedicate more staff hours to managing volunteers, they could increase the level of volunteer support the agency receives. The Park Service compiles data on cash and in-kind donations from friends groups and cooperating associations as part of its business practices, but these data have several limitations. For example: Certain data are outdated. The Park Service is delayed in compiling data on donations from friends groups because the agency's process relies on examining Internal Revenue Service (IRS) Form 990s submitted by friends groups, and these groups can request extensions in filing these forms. Certain data are incomplete. 
We found that some information was missing—specifically some years of data from the National Park Foundation and information on donations from smaller friends groups. For cooperating associations, we also found that data were missing for certain years. Some data lack specificity and hinder certain analyses. We were unable to determine the trends in cash donations and in-kind donations received from friends groups because the Park Service did not differentiate between cash and in-kind donations for all years. For cooperating associations, we also found that the Park Service had not disaggregated cash from in-kind donations provided by cooperating associations. The Park Service is developing a new data portal for philanthropic donations that may address some shortcomings we identified. Specifically, according to a Park Service official leading this effort, the portal is intended to collect information from all friends groups, not just the larger ones, in addition to gathering information from cooperating associations. In addition, the portal is to gather information on monetary donations as well as in-kind services provided, according to documentation describing the system. Further, information is to be collected on an annual basis as a way to improve the timeliness of data. The Park Service plans to provide training on the portal in the spring of 2016 to the philanthropic partners who will be expected to enter data using the portal, according to a Park Service official involved in this effort. This official also said the agency plans to develop measures to ensure the reliability of the data collected, but specific details on these measures are not yet available. In a time of constrained resources, recreation fees, commercial service fees, and philanthropic donations are becoming increasingly important to the Park Service. 
The Park Service has undertaken several efforts to increase funding from these sources, and from fiscal year 2005 through fiscal year 2014, funding from these sources increased by about 40 percent, after adjusting for inflation. However, the Park Service faces challenges in increasing revenues from these sources and may be missing additional opportunities to increase funding from recreation fees. In particular, since 1993, senior lifetime interagency passes have been sold for a one-time price of $10—significantly lower than the current $80 price for a regular annual interagency pass. Our past work on federal user fees has highlighted the importance of regularly reviewing these fees. However, unlike for the annual interagency pass, FLREA does not permit the Park Service or the other agencies that charge recreation fees to increase the price of the senior pass. Without the authority to adjust the price of the senior pass, the Park Service is limited in its ability to increase revenue from this recreation fee. In addition, when the Park Service updated its entrance fee schedule for the first time since 2006, several parks increased entrance fees, but the Park Service does not have guidance to periodically review these fees. Moreover, the Park Service does not require park units that choose not to follow its entrance fee schedule to provide information on these decisions. Without guidance to periodically review fees and direct the park units to provide information on deviations from the fee schedule, the Park Service may not ensure that its entrance fees are set at a reasonable level and may be missing opportunities to more effectively manage its fees. To increase the flexibility that the Park Service has to change entrance fees, Congress should consider amending FLREA to give authority to the Park Service and the other four agencies that implement the recreation fee program—Bureau of Reclamation, Bureau of Land Management, the U.S. 
Fish and Wildlife Service, and the U.S. Forest Service—to adjust the price of a lifetime senior pass. To help improve its management of recreation fees, we recommend that the Secretary of the Interior direct the Director of the Park Service to take the following two actions: revise its guidance on recreation fees so that the agency periodically reviews its entrance fees to determine whether the fees are reasonable, and direct that park units provide information to headquarters on why they are choosing to not increase entrance fees or increase them by an amount less than the fee schedule. We provided a draft of this report to the Department of the Interior for review and comment. In its written comments, reproduced in appendix IV, the Department of the Interior generally agreed with our findings and concurred with our recommendations. Interior also noted that the Park Service is planning to address these recommendations. Specifically, in 2016, the Park Service is planning to revise its guidance on recreation fees to require periodic evaluation of the entrance fee pricing structure. In addition, beginning in 2016, Interior indicated the Park Service will require park units to provide information on their decisions to not increase entrance fees. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to examine (1) general trends in funding for the National Park Service (Park Service) for fiscal years 2005 through 2014; (2) the trends in the Park Service's revenues from recreation and commercial service fees and donations from philanthropic sources for fiscal years 2005 through 2014; and (3) the Park Service's efforts to increase fee revenues and donations, and factors, if any, that may affect these efforts. To examine general funding trends for the Park Service for fiscal years 2005 through 2014, we obtained and analyzed data on the Park Service from the Office of Management and Budget (OMB) MAX Information System (MAX). We selected this period because this was the most recent 10-year period for which data were available, and 10 years of data would allow us to identify any trends. Data reported in OMB MAX are subject to review and checks by OMB to help ensure consistency of the data, and we determined these data were sufficiently reliable for our purposes. We analyzed these data in both nominal and inflation-adjusted terms. We also examined Park Service budget documents, including its annual budget justifications. To examine trends in fee revenues and donations for the Park Service for fiscal years 2005 through 2014, we analyzed OMB MAX data along with other data on these funding sources we received from the Park Service. We used the OMB MAX data to examine the trends at a national level, and we used the data from the Park Service to examine trends at the national and park unit levels. To determine the reliability of the Park Service's data, we spoke with agency officials who were familiar with these data, reviewed relevant documentation, and compared these data to data from OMB MAX. We generally found these data to be sufficiently reliable for our purposes. 
The exception is the park unit level data on philanthropic donations, which we found to have certain limitations, and we identify these limitations in our use of these data. To examine the Park Service's efforts to increase fee revenues and donations and any factors that may affect these efforts, we examined relevant laws and Park Service documents, and interviewed agency officials. The laws we examined were the Federal Lands Recreation Enhancement Act for recreation fees; the National Park Service Concessions Management Improvement Act of 1998 for commercial service fees; and laws governing the Park Service's relationships with friends groups, cooperating associations, and volunteers, such as the Volunteers in Parks Act of 1969. We also examined the Park Service's policy manual along with specific policy documents for the revenue streams we examined: Director's Order 22 and Reference Manual 22A for recreation fees; Reference Manuals 48A and 48B for commercial service fees; Director's Order 21 and the Reference Guide to Director's Order 21 for friends groups; Director's Order 32 and Reference Manual 32 for cooperating associations; and Director's Order 7 for volunteers. We also reviewed the agency's August 2014 memorandum that encouraged parks to consider increasing fees and memorandums that approved the fee increases that followed. We compared the laws and agency guidance and memoranda, as well as GAO's design guide for federal user fees and Standards for Internal Control in the Federal Government, with documentation associated with the Park Service's efforts to increase recreation fees. For commercial service fees, we analyzed Park Service data on franchise fee rates, number of bidders, revenues of certain commercial use authorization holders, and certain lease payments. For philanthropic donations, we examined documents associated with the Park Service's centennial efforts. 
For all three objectives, we conducted interviews with Park Service officials at the headquarters, region, and park unit levels. At the headquarters level, we interviewed officials in the Chief Financial Officer's Office, including those responsible for budgeting and overseeing the recreation fee program; officials in the Office of Commercial Services who are responsible for overseeing the commercial service program; officials in the Office of Partnerships and Civic Engagement who oversee relationships with friends groups; and officials in the Office of Interpretation, Education, and Volunteers who oversee cooperating associations and volunteers. At the regional level, we spoke with the regional directors in all seven of the Park Service's regional offices—Alaska, Intermountain, Midwest, National Capital, Northeast, Pacific West, Southeast—along with officials in these offices who help manage the recreation fee program, commercial services program, and relationships with philanthropic partners. At the park unit level, we spoke with officials involved in managing 31 park units. Specifically, we interviewed officials at 23 park units in person and contacted officials from another 8 park units by phone and asked about their experiences with these funding sources. Table 4 in appendix II lists the parks that we contacted as part of our work. In selecting parks to contact, we included a range of parks that varied by certain characteristics, including number of visitors and type (i.e., scenic vs. historical), and we interviewed officials from at least one park unit in all seven of the Park Service's regions. We also spoke with a variety of stakeholders, including concessioners and nonprofit partners. We selected these stakeholders because of their affiliation with parks in our review or because they would be able to provide other perspectives on these revenue sources. 
For example, during some of our site visits, we met with concessioners and partners that were working with the park units we were visiting. The views from these interviews are not generalizable to all parks, concessioners, and nonprofit partners, but they were used to provide a range of perspectives on Park Service’s efforts. We also examined reports prepared in the last 10 years by Park Service and stakeholder groups on recreation fees, commercial service fees, and philanthropic donations. We conducted this performance audit from October 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 4 provides information on the national park units that we contacted as part of our work. The table describes how we contacted the park (either in person or by phone) along with background information on the park and data on recreation fees, commercial service fees, and philanthropic donations. The following figures provide a summary of selected information we collected during interviews with park officials and information we reviewed about individual parks’ recreation fees, commercial service fees, and philanthropic partnerships. Recreation fees include entrance fees and amenity fees for certain equipment and services, such as campgrounds. Commercial service fees include franchise fees, commercial use authorizations, and rents. The Park Service is also authorized to develop agreements with nonprofit partner groups, known as friends groups and cooperating associations. 
In addition to the individual named above, Elizabeth Erdmann (Assistant Director), Scott Heacock, Mary Koenen, and Sean Standley made key contributions to this report. Additional contributions were made by Penny Berrier, Antoinette Capaccio, Alexandra Dew Silva, Carol Henn, Paul Kinney, Ying Long, Armetha Liles, John Mingus, Alison O’Neill, Guisseli Reyes-Turnell, and Rebecca Shea. | The Park Service manages 409 park units that cover 84 million acres. Park Service funding is generally composed of annual appropriations along with revenues from recreation fees, commercial service fees, and philanthropic donations. GAO was asked to review the Park Service's collection of these fees and donations. This report examines the Park Service's (1) overall funding trends for fiscal years 2005 through 2014; (2) trends in revenues from fees and donations; and (3) efforts to increase revenues and donations, and factors that affected these efforts. To conduct this work, GAO analyzed budget data for fiscal years 2005 through 2014 on the Park Service's overall funding and fee revenue and donations. GAO also reviewed laws, examined Park Service reports, and interviewed agency officials and stakeholders, such as nonprofit partners and concessioners. The National Park Service's (Park Service) total funding did not keep pace with inflation for fiscal years 2005 through 2014, even as fees and donations increased. Total funding increased in nominal dollars from $2.7 billion to $3.1 billion (15 percent) during this period, but declined by 3 percent after adjusting for inflation. Annual appropriations, which comprised about 88 percent of total funding on average, declined 8 percent after adjusting for inflation. Fees, donations, and other funding sources, which accounted for the remainder, increased 39 percent after adjusting for inflation. Revenues from fees and donations grew for fiscal years 2005 through 2014 to varying degrees. 
Specifically, revenues from recreation fees, which include entrance and amenity fees for facilities such as campsites, increased from about $148 million to $186 million (26 percent). Revenues from fees from concessions operations, which comprise the vast majority of commercial service fees, nearly tripled from almost $29 million to $85 million. Meanwhile, cash donations from philanthropic sources fluctuated, ranging from $19.5 million in fiscal year 2011 to $94.7 million in fiscal year 2014. The Park Service has efforts under way to increase revenues from fees and donations, but certain factors limit these efforts. For recreation fees, the Park Service updated its fee schedule, and several park units increased entrance and amenity fees. However, the Federal Lands Recreation Enhancement Act (FLREA) does not give the Park Service and other agencies that charge recreation fees the authority to adjust the price of a lifetime senior pass, which has been $10 since 1993. GAO's guide on user fees states that federal agencies should regularly review fees and make changes if warranted. Without the authority to modify the price of the senior pass, the Park Service is limited in its ability to increase revenue from this fee. In addition, Park Service guidance on recreation fees directs the agency to ensure its fees are set at a reasonable level, but does not call for periodic reviews of these fees, and the agency has no plans to do so. The agency also does not require park units to provide information on decisions to not change their fees or deviate from the fee schedule because decisions about raising fees are left to the park units. As a result, the Park Service is missing opportunities to ensure that its entrance fees are reasonable. To increase commercial service revenues, the Park Service increased minimum franchise fees that concessioners pay, and some park units have developed leasing programs. 
Several factors, such as limited competition for some concessions contracts, complicate efforts to increase these fees. For philanthropic donations, the Park Service has launched fundraising and public outreach campaigns in conjunction with its centennial anniversary in 2016 and has modified fundraising policies to increase donation opportunities. According to agency officials, several factors hamper the agency's ability to increase donations, such as the fact that projects that need funding are not always attractive to donors. Congress should consider amending FLREA so that the federal agencies that charge recreation fees can determine whether to adjust the price of a senior pass. GAO also recommends that the Department of the Interior direct the Park Service to revise its guidance to periodically review entrance fees and direct park units to provide information on their decisions not to increase fees. Interior concurred with the recommendations. 
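The funding trends summarized above rest on simple comparisons of nominal versus inflation-adjusted (real) change. As an illustration only, the sketch below recomputes the report's headline figures; the cumulative fiscal year 2005-2014 inflation factor used here is an assumption backed out of the report's own numbers (a 15 percent nominal increase alongside a 3 percent real decline implies cumulative inflation of roughly 18.5 percent), not a figure taken from the report.

```python
def pct_change(start, end):
    """Percentage change from start to end."""
    return (end - start) / start * 100

# Figures cited in the report (billions of nominal dollars)
total_2005, total_2014 = 2.7, 3.1

nominal_change = pct_change(total_2005, total_2014)  # about +15 percent

# Assumed cumulative FY2005-FY2014 inflation factor, backed out of the
# report's stated 15 percent nominal increase and 3 percent real decline.
inflation_factor = 1.185

real_2014 = total_2014 / inflation_factor            # FY2014 total in FY2005 dollars
real_change = pct_change(total_2005, real_2014)      # about -3 percent

# Recreation fee revenues (millions of nominal dollars), also cited above
rec_fee_change = pct_change(148, 186)                # about +26 percent

print(f"total funding, nominal: {nominal_change:+.0f}%")
print(f"total funding, real:    {real_change:+.0f}%")
print(f"recreation fee revenue: {rec_fee_change:+.0f}%")
```

The same pattern (divide the end-year figure by a cumulative deflator before computing the percentage change) underlies each of the inflation-adjusted comparisons in the report.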
The three major programs that SSA administers—OASI, DI, and SSI—provide cash income support to diverse populations. The Social Security Act established the OASI program to protect workers and their dependents and survivors from the loss of wages due to retirement. DI, enacted in 1956, provides monthly cash benefits to disabled workers and their families. The OASI and DI programs are funded through payroll taxes and are based on the contributions of individual workers and their employers. About 90 percent of all U.S. jobs are covered by these insurance programs. In 1995, the OASI and DI programs paid over $326 billion in benefits to more than 43 million eligible beneficiaries. SSI, enacted in 1972, provides cash assistance to aged, blind, or disabled individuals with limited income and assets. The federal SSI program replaced federal grants to state-administered programs, which varied substantially in benefit levels. The Congress intended SSI as a supplement to the OASI and DI programs for those with little or no Social Security coverage. Federal SSI benefits are funded by general revenues and based on financial need. In 1995, over 6 million recipients received about $25 billion in federal benefits, including 2 million individuals aged 65 and over and almost 1 million children. To administer these three programs, SSA must perform the following essential tasks: issuing Social Security numbers to individuals; maintaining earnings records for workers by collecting wage reports from employers, which are used to determine the dollar amount of OASI and DI benefits; and processing benefit claims for all three programs. SSA must also determine which applicants for disability benefits under DI and SSI meet the federal definition of disability; for SSI, the agency must also determine applicants' levels of income and assets. In addition, SSA performs many actions to maintain accurate records for program recipients once they are enrolled. 
Moreover, SSA must periodically conduct reviews of the health status of disabled beneficiaries to ensure that those no longer eligible are removed from the rolls. For SSI recipients, SSA must also review their financial status. Table 1 gives an overview of the three programs. At this time of heightened attention to the costs and effectiveness of all federal programs, the Congress and the administration have supported efforts to promote a more efficient federal government that is responsive and accountable to the public. This is especially critical at SSA because the agency deals with thousands of individuals daily and nearly 90 percent of its employees directly serve the public. SSA has surpassed many federal agencies in these efforts by assessing and improving its service to the public, gaining experience in managing for results, and emphasizing financial accountability. Two federal efforts, the Government Performance and Results Act of 1993 (GPRA) and the National Performance Review (NPR), promote cost-effective service delivery governmentwide. To ensure that it is meeting the needs of the public and the requirements of GPRA and NPR, SSA regularly seeks customer feedback through mail and telephone surveys, comment cards in its field offices, focus groups, and special studies. It has also taken steps to use this information to improve its services. As demand for its 800-number telephone service increased, for example, SSA found its service lacking. Customer feedback indicated that this convenient telephone service was important to the public, yet SSA’s performance data showed that in fiscal year 1995, the “busy” rate for the 800 number was almost 49 percent; only about 74 percent of callers were able to get through within 5 minutes of their first try. SSA set a goal of answering 85 percent of its 800-number calls within 5 minutes of a caller’s first try in fiscal year 1996 and made operational changes to increase public access. 
As a result, the busy rate decreased to 34 percent, and 83 percent of calls were answered within 5 minutes. For fiscal year 1997, SSA's goal is to answer 95 percent of its 800-number calls within 5 minutes of a caller's first try. We are currently reviewing SSA's efforts to improve this service. SSA has also worked under GPRA to strengthen its strategic management process and to identify and develop performance measures to help its managers, the Congress, and the public assess how well it is accomplishing its mission. During SSA's recently completed participation as a pilot agency under GPRA, SSA gained experience in developing specific and quantifiable annual performance goals and measures. On the basis of this experience, it expects to develop performance measures for fiscal year 1998 that focus more on outcomes and results than in previous years. For SSA's fiscal year 1997 goals and performance measures, see appendix I. In addition, SSA is a leader among federal agencies in producing complete, accurate, and timely financial statements that promote accountability to taxpayers. For fiscal years 1995 and 1996, SSA issued audited financial statements 3 months ahead of its legally mandated deadline. Moreover, SSA was among the first federal agencies to produce an accountability report, which is designed to consolidate current reporting requirements under various laws and provide a comprehensive picture of an agency's program performance and its financial condition. In addition, for fiscal year 1996, as a pilot project, SSA and its Office of Inspector General collaborated to further streamline financial reporting by including the Inspector General's Semiannual Report to Congress as part of the Accountability Report. To be most effective, SSA's ongoing efforts to cost-effectively serve its customers and be accountable to taxpayers will need to be well coordinated and continually improved. 
The agency has taken steps to better integrate its strategic planning, performance measurement, and customer service efforts and to improve the ways that it collects and uses customer feedback. SSA faces challenges, however, as do all federal agencies, in integrating results-oriented management into its agency culture and daily activities. Moreover, SSA must determine how to balance its customers' needs and expectations with those of taxpayers by assessing the cost-effectiveness of its customer service improvements. As the baby boom generation ages, growing numbers of people will receive Social Security retirement and survivors benefits through OASI in the years to come, as shown in figure 1. By the year 2015—as baby boomers begin entering their mid-60s—the numbers of individuals receiving benefits will reach an estimated 50.4 million: more than one-third greater than the 37.4 million people receiving Social Security retirement and survivors benefits in 1995. Once on the rolls, retirees can be expected to receive benefits for longer time periods than past recipients. A 65-year-old male who began receiving Social Security benefits in 1940—the first year SSA began paying monthly benefits—was expected to live, on average, about an additional 12 years. By 2015, a 65-year-old male will be expected to live about an additional 16 years—a 33-percent increase. During that same time period, the life expectancy for women aged 65 will increase by almost 50 percent—from an average of over 13 years to an average of nearly 20 years. Meanwhile, the ratio of contributing workers to beneficiaries will decline. By 2015, an estimated 2.6 workers will be paying taxes into the Social Security system per beneficiary; in 1950, 16.5 workers were paying Social Security taxes per beneficiary. This retirement explosion threatens the long-term solvency of the Social Security system. Beginning in 2012—15 years from now—program expenditures are projected to exceed tax income. 
By 2029, without corrective legislation, the trust funds are expected to be depleted, leaving insufficient funds to pay the expected level of OASI and DI benefits. (In an upcoming report, we will discuss in greater detail the issues affecting the major sources of retirement income—Social Security, private pensions, savings, and earnings.) Concerns about the long-term solvency of the Social Security system are fueling a public debate about the fundamental structure of this system. The Advisory Council on Social Security, for example, has put forth three different approaches to addressing the Social Security system’s long-term deficit. All three approaches call for some portion of Social Security payroll taxes to be invested in the stock market. Two of these approaches call for allowing individuals to invest some portion of their payroll taxes in individual retirement accounts. This would be a significant departure from the current program design, in which benefits are based on past earnings and trust fund moneys are invested and managed centrally. Given the magnitude of the financial problems facing the Social Security system and the nature of the proposals for changing the system, we can expect the debate over the financing and structure of the Social Security system to continue and intensify in the coming years. In our report on SSA’s transition to independence, we noted that the agency’s independence would heighten the need for it to work with the Congress in developing options for ensuring that revenues are adequate to make future Social Security benefit payments. SSA could, for example, analyze options and assess their possible effects on individuals and on SSA’s operations. Nearly 2 years after gaining independence, however, SSA is not yet ready to fully support policymakers in the current public debate on financing issues. 
SSA has acknowledged that it has not undertaken the policy and research activities it needs to examine critical issues affecting its programs, including long-term financing, and to provide support to policymakers. The agency recognizes the need to be more active in these areas and, in May 1996, took steps to reorganize and strengthen its policy analysis, research, and evaluation offices. It believes this reorganization will better position it to take a leadership role in critical policy and research issues related to its programs. By November 1996, SSA’s reorganized Office of Research, Evaluation, and Statistics had formed new links with outside experts to strengthen its research and evaluation capabilities. In addition, it had created an office to coordinate all policy planning activities. Although this is a positive first step, SSA officials also acknowledge that they are just beginning to focus on Social Security’s long-term solvency. SSA is in a unique position to inform policymakers and the public about the nature of long-term financing issues. Focus groups conducted by SSA have demonstrated that the public’s knowledge of Social Security programs is generally low and the public’s confidence in the Social Security system is undermined by its future financing problems. To address these issues, SSA is conducting a public education campaign that discusses what the current system offers in disability, retirement, and survivors benefits. It also emphasizes that the Social Security system can pay benefits for many more years and that the Congress has time to act before the trust funds are depleted. SSA, however, is not discussing options for maintaining or changing the current system. Feedback SSA has received from its focus groups indicates that addressing the public’s lack of knowledge without also discussing possible options for ensuring the system’s future solvency does not instill confidence and weakens the agency’s credibility with the public. 
We are concerned that SSA has not seized the opportunity as an independent agency to speak out on the importance of addressing the long-term financing issues sooner rather than later. As we have noted in our previous work, the sooner action is taken to resolve the future funding shortfall, the smaller the changes to the system need to be and the more time individuals will have to adjust their financial and retirement plans. In recent years, disability caseloads have grown at unprecedented rates. To manage this caseload growth and the resulting slow processing times, SSA plans to redesign and dramatically improve its disability claims process. However, the scope and complexity of its many redesign initiatives reduce the likelihood that SSA will accomplish its redesign goals. Moreover, while SSA is taking steps to improve the process for moving eligible individuals onto the disability rolls more quickly, it has not sufficiently emphasized helping beneficiaries return to work and leave the disability rolls. During the past decade, SSA has faced significant increases in caseloads and expenditures for its two disability programs—DI and SSI. DI and SSI caseloads and expenditures increased dramatically between 1986 and 1995, and the pace of this growth accelerated in the early 1990s. In 1986, 4.4 million blind and disabled people under age 65 received DI or SSI benefits; by 1995, this number had soared to 7.5 million—a 69-percent increase. As the number of DI and SSI beneficiaries increased, so did the amount paid in cash benefits. The combined DI and SSI cash benefits increased from $25 billion to $57 billion in 10 years. Adjusted for inflation, the increase in the value of these cash benefits was 66 percent. As these programs have grown, the characteristics of new beneficiaries have changed in ways that pose additional challenges for SSA. Beneficiaries are, on average, younger and more likely to have longer lasting impairments.
Increases in beneficiaries with mental illness or mental retardation, especially, have driven this trend. Between 1982 and 1992, for example, mental impairment awards to younger workers increased by about 500 percent. This growing proportion of younger beneficiaries with longer lasting impairments means that the beneficiary population, on average, is likely to spend more time on the disability rolls. In 1992, for example, new DI awardees were, on average, 48 years old. Depending on the type of impairment that qualified them for benefits, these beneficiaries could spend nearly one-third of their adult lives on disability before reaching age 65. As more and more people have filed for disability benefits, SSA has been slow to process initial claims, and appealed case backlogs have grown. To manage the disability caseload growth, increase efficiency, and improve service to its customers, SSA has started a major effort to change how disability decisions are made. Making disability decisions is one of the agency’s most demanding tasks; it accounted for more than half of SSA’s total administrative budget—about $3 billion—in fiscal year 1995. Even so, many claimants face long waits for disability decisions. As of June 1996, the wait for initial decisions averaged 78 days for DI claims and 94 days for SSI claims, with an additional 373-day wait for appealed decisions. Overall, the current disability claims process is not meeting the needs of claimants, the agency, or taxpayers. To deal with these problems, in 1993 SSA formed a team to fundamentally rethink and develop a proposal to redesign the disability claims process. Efforts like SSA’s—business process reengineering—have been used successfully by leading private-sector organizations to dramatically improve their operations. In April 1994, we informed the Congress that the agency’s redesign proposal was its first valid attempt to address the fundamental changes needed to cope with disability workloads. 
At that time, however, we also cautioned that many implementation challenges would have to be addressed. These include meeting new staffing and training demands, developing and installing technology enhancements, and confronting entrenched cultural barriers to change. SSA’s redesign plan, released in late 1994, was extensive in both scope and complexity. It included 83 initiatives to be accomplished during a 6-year period (fiscal years 1995 to 2000), with 38 of these to be completed or moved into a research and development or testing phase by September 30, 1996. In a recent report on the implementation challenges SSA faces as it redesigns its disability claims process, we concluded that SSA’s disability redesign is proving to be overly ambitious. Undertaking many initiatives at one time is likely to limit the chances for success and has already led to implementation delays. Although SSA has begun many of its planned initiatives, none are complete and many are behind schedule. Consequently, SSA has not progressed as intended in determining whether specific initiatives will achieve their desired results. Without concrete and measurable results, stakeholder support is hard to maintain. SSA has faced significant challenges in implementing some of the more complex initiatives. For example, SSA considers technology vital to redesign; it has, therefore, undertaken a technology initiative to more fully automate the processing of disability claims. Completion of this initiative, however, has been delayed by more than 2 years due to software development problems and the need for additional testing to assess redesign changes. Another complex initiative involves consolidating two distinct jobs, federal claims representative and state disability examiner, into a new disability claim manager (DCM) position. SSA is considering the establishment of about 11,000 DCM positions in more than 1,350 federal and state locations, recruiting these DCMs from its current workforce.
Before fully implementing the DCM position, SSA must first provide several critical support features, including technology enhancements and a simpler method for making disability decisions, that SSA does not expect to be available for several years. Moreover, SSA has struggled to resolve stakeholder disagreements among representatives of federal and state employees about this new position. SSA has determined that it will not decide to implement the DCM until valid and reliable testing demonstrates that the position is viable. Although SSA has focused on improving its processes for moving eligible claimants onto the disability rolls, it has placed little priority on helping them move off the rolls by obtaining employment. We have reported that SSA’s disability programs are out of sync with societal attitudes, as embodied in the Americans With Disabilities Act, that have shifted toward goals of economic self-sufficiency and the right of people with disabilities to fully participate in society. At one time, the common business practice was to encourage someone with a disability to leave the workforce. Today, however, a growing number of private companies have been focusing on enabling people with disabilities to return to work. In contrast, SSA’s programs lack a focus on providing the support and assistance that many people with disabilities need to return to work. Eligibility requirements, for example, focus on applicants’ inabilities, not their abilities; once on the rolls, beneficiaries receive little encouragement to use rehabilitation services. A greater emphasis on beneficiaries’ returning to work is needed to identify and encourage the productive capacities of those who might benefit from rehabilitation and employment assistance. 
Although the main reason for emphasizing returning to work is so that people maximize their productive potential, it is also true that an estimated $3 billion could be saved in subsequent years if only an additional 1 percent of the 6.6 million working-age people receiving disability benefits in 1995 were to leave the rolls by returning to work. SSA needs to develop a comprehensive return-to-work strategy that includes providing return-to-work assistance to applicants and beneficiaries and changing the structure of cash and medical benefits. As part of an effort to place greater priority on beneficiaries’ returning to work, we recommended that SSA identify legislative changes required to implement such a strategy. Although evaluating any SSA response to our recommendations would be premature, we will continue to assess SSA’s efforts to help beneficiaries return to work. SSA has also missed opportunities to promote work among disabled beneficiaries where it has the legislative authority to do so. In 1972, the Congress created the plan for achieving self-support (PASS) program as part of SSI to help low-income individuals with disabilities return to work. However, SSA has not translated the Congress’ broad goals for the PASS work incentive into a coherent program design. We recently reported that SSA needs to improve PASS program management, and the agency has taken steps to better manage the program in accordance with our recommendations. Limiting opportunities for fraud, waste, and abuse in government programs is essential to promoting public confidence in the government’s ability to wisely use taxpayers’ dollars. Moreover, problems in any one of the programs that SSA administers can undermine confidence in all of its programs. Recent media reports on SSI fraud and abuse have focused attention on SSA’s management of this program.
Several of our recent reviews of the SSI program have shown that SSA’s oversight and management of SSI have been inadequate and that the agency is not aggressively pursuing opportunities to increase program efficiencies. Although quantifying the extent of fraud, waste, and abuse is difficult, we have repeatedly identified program weaknesses that SSA needs to address. This suggests more pervasive problems with SSA’s overall management of, and attention to, the SSI program. SSA has awarded SSI benefits, for example, to unknown numbers of non-English-speaking immigrants who are actually ineligible for SSI benefits. These awards are very costly to the government, accounting in each case for thousands of dollars in improper payments over the years. And even though individual SSA field offices have developed their own creative approaches to this problem, SSA’s programwide efforts for ensuring that only people who are eligible for SSI benefits receive them have been limited. SSA has also lacked an effective program to prevent erroneous payments to prisoners. Even though prisoners are ineligible for SSI if they have been in jail for 1 calendar month or longer, prisoners in many large county and local jail systems have received millions of dollars in cash benefits. This means that taxpayers have been paying twice to support these individuals—both for SSI benefits and the cost of imprisonment. SSA has begun to obtain information on current prisoners; however, it has not tried to develop information that would allow it to recover benefits paid to incarcerated or formerly incarcerated individuals who may have received benefits in prior years, although this information is available. In addition, SSA’s PASS program internal controls have been inadequate, compromising the program’s integrity. SSA’s internal program controls provide only limited guarantees that program moneys are being used appropriately and taxpayer dollars spent judiciously. 
For example, the lack of adequate guidance on acceptable PASS expenditures has resulted in inconsistent decisions on purchases. In one instance the proposed purchase of a $13,000 automobile was denied because the applicant did not provide sufficient evidence to justify the car’s cost; in other instances, however, similarly expensive vehicles were approved with less justification. SSA is also missing opportunities to more efficiently administer the SSI program and to prevent or more quickly detect overpayments to recipients. Millions of dollars could be saved, according to our estimates, if SSA field offices had and used direct online access to computerized state income information during initial and subsequent assessments of eligibility. Although SSA has begun to develop and expand online access in several field offices, it has not aggressively sought to use this technology to reduce benefit overpayments. SSA acknowledges that it needs to do more to prevent and detect fraud, waste, and abuse. It has several initiatives under way to accomplish this, and we will be monitoring these efforts. In addition, the new SSA Inspector General’s Office, created when SSA gained independence from HHS, is increasing its emphasis on fraud and abuse. While SSA is grappling with policy and program challenges, it will also need to meet customer expectations in the face of growing workloads and reduced resources. SSA expects to redesign inefficient work processes and modernize its information systems to increase productivity, knowing that its customer service will deteriorate to unacceptable levels if it continues to conduct business as in the past. In addition, it faces the urgent need to complete year 2000 software conversion to avoid major service disruption at the turn of the century. SSA will also need to effectively manage its workforce and consider what service delivery structure will work best in the future.
The need to effectively balance public service needs with costs will become even more important in the future. As the baby boom generation ages, more and more people will be applying for and receiving SSA program benefits. In addition to increasing retirement and disability caseloads, SSA’s other workloads will grow because of several increasing responsibilities. SSA’s workloads over the next few years will increase substantially as a result of recent congressional efforts to overhaul the nation’s welfare system. The Congress has made changes that eliminate disability benefits for drug addicts and alcoholics, restrict noncitizens’ SSI benefit eligibility, and tighten the SSI eligibility criteria for disabled children. SSA will have to manage the large influx of appeals and reapplications that is expected following the changes in benefit eligibility. SSA has already received appeals from more than half of the over 200,000 drug addicts and alcoholics who were notified in June 1996 that their benefits would be terminated, according to SSA officials. These workloads will also have an impact on SSA’s capacity to meet other workload challenges. SSA must also meet a legislative requirement that most workers be mailed annual statements of their earnings and estimated retirement benefits, called Personal Earnings and Benefit Estimate Statements. The creation and mailing of these annual statements to all workers aged 60 and older, begun in 1995, must be expanded to those aged 25 and older—about 123 million individuals—by the year 2000. We recently recommended that these statements be improved to more effectively communicate important information to the public; improving these statements could result in fewer inquiries about them, reducing the impact on SSA workloads. In addition, SSA has not fully met legislative requirements to periodically review the status of disabled beneficiaries to ensure that those who are no longer disabled are removed from the rolls.
About 4.3 million DI and SSI beneficiaries were due or overdue for continuing disability reviews in fiscal year 1996. SSA now has plans to review the status of more than 8 million beneficiaries in the next 7 years. To accomplish this, SSA would have to conduct about twice as many reviews as it has conducted over the past 20 years combined. SSA knows that it must meet these increasing demands in an era of federal downsizing and spending reductions. In early 1996, SSA estimated that it would need the equivalent of about 76,000 work-years to handle its workloads by the end of the century if it conducted business as usual. It expected to handle this work with fewer work-years than it has today. SSA is in the process of revising these estimates. To handle increasing workloads and improve public service, SSA has begun to redesign inefficient work processes and develop supporting modernized information systems. SSA is in the process of a multiyear, multibillion dollar systems modernization effort expected to support new ways of doing business and improve productivity. SSA’s Automation Investment Fund of $1.1 billion supports its 5-year plan, from fiscal years 1994 to 1998, of moving from reliance on computer terminals linked to mainframe computers in its Baltimore headquarters to a nationwide network of personal computers. The new network is expected to improve productivity and customer service in field offices and teleservice centers and allow for further technology enhancements. Although this new computer network environment may yield productivity improvements, it poses significant challenges for SSA. The usefulness of new computer systems will depend on the software developed for them. Software development has been identified by many experts as one of the most risky and costly aspects of systems development. 
To mitigate the risk of failing to deliver high-quality software on time and within budget, SSA must have a disciplined and consistent process for developing software. SSA has already experienced problems, however, in developing its first major software application for use in its new network. These problems include (1) using programmers with insufficient experience, (2) using software development tools that have not performed effectively, and (3) developing initial schedules that were too optimistic. We have reported that these problems have collectively contributed to a delay of over 2 years in implementing this new software. Although SSA has begun to take steps to better position itself to successfully develop and maintain its software, it faces many challenges as it works to develop software in its new computer network environment. SSA faces another systems challenge—one of the highest priority—that affects not only its new network but computer programs that exist for both its mainframe and personal computers. Most computer software in use today is limited to two-digit date fields, such as “97” for 1997. Consequently, at the turn of the century, computer software will be unable to distinguish between 1900 and 2000 because both would be designated “00.” By the end of this century, SSA must review all of its computer software—about 30 million lines of computer code—and make the changes needed to ensure that its systems can handle the first change to a new century since the computer age began. This year 2000 software conversion must be completed to avoid major service disruption, such as erroneous payments or failure to process benefits, at the turn of the century. Errors in SSA programs could also cause difficulties in determining who is eligible for retirement benefits. For example, an individual born in 1920 could be seen as being 20 years old—not 80—and therefore ineligible for benefits. 
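The mis-aging in this example can be made concrete with a few lines of code. This is a hypothetical illustration, not SSA's actual software; the `age_two_digit` function simply assumes a naive comparison of two-digit years, which reproduces the outcomes the report describes.

```python
# Hypothetical sketch of the year 2000 two-digit date problem;
# not SSA's actual code. Legacy software stored only the last two
# digits of a year, e.g. "97" for 1997 and "00" for 2000.

def age_two_digit(birth_yy: int, current_yy: int) -> int:
    """Age as a naive two-digit comparison might compute it."""
    return abs(current_yy - birth_yy)

# Before 2000 the arithmetic happens to work:
# born 1920 ("20"), current year 1997 ("97") -> age 77.
assert age_two_digit(20, 97) == 77

# In 2000 ("00"), a person born in 1920 appears to be 20, not 80,
# and could wrongly be judged ineligible for retirement benefits.
assert age_two_digit(20, 0) == 20

# Conversely, someone born in 1980 appears to be 80, not 20.
assert age_two_digit(80, 0) == 80
```

Storing four-digit years (or applying a consistent windowing rule when interpreting two-digit ones) removes the ambiguity, which is why the conversion must examine every date-handling portion of SSA's roughly 30 million lines of code.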
Similarly, someone born in 1980 could be seen as 80 years old—not 20—and therefore entitled to receive Social Security benefits. Beginning work on this problem in 1989, SSA has reviewed and corrected about 50 percent of the computer code that must be checked, according to its Deputy Commissioner for Systems. To complete the job, SSA estimates that it will take about 350 work-years. Agency officials reported that the amount of resources dedicated to the year 2000 effort could impact staff availability for lower priority projects and SSA’s ability to tackle new systems development work. SSA recognizes that to maximize the effectiveness of its reengineered work processes and investments in technology, it must invest in ongoing employee training and career development. Ultimately, SSA envisions a less specialized workforce with a broader range of technical skills that can be flexibly used in areas of greatest need. In addition, SSA has taken steps to reduce its number of supervisors, as part of the administration’s efforts to eliminate unnecessary bureaucracy by working with fewer supervisory layers. To manage these changes, SSA is training some of its headquarters employees in the concepts and techniques of teamwork. To manage with fewer supervisors in its field operations, SSA also plans to work with its unions to test a number of team concepts. Complicating SSA’s efforts is its aging workforce: 51 percent of SSA’s senior executives and 35 percent of its mid-level managers are eligible to retire over the next 5 years. In the last 2 fiscal years, SSA has lost two of its seven Deputy Commissioners to retirement. SSA has acknowledged the importance of having skilled managers to prepare for the demands of heavier workloads, new technology, and expected changes in its employee and client base. However, it has been nearly 5 years since SSA has conducted an executive-level management development program. 
SSA also has not selected candidates for its mid-level management development program since 1993. The agency recognizes the need for management development programs but has not yet scheduled future programs. Although SSA has begun to discuss its use of improved technology and a more flexible workforce to conduct its business in new ways in the future, it has maintained its traditional service delivery structure, including 1,300 field offices. Given the significant changes facing SSA, it has not adequately considered whether its current service delivery structure is really what is needed for the future. According to SSA officials, the agency has not developed specific plans for restructuring its organization and redeploying staff in response to demographic and workforce changes and shifting customer expectations. As noted earlier, the demand for SSA’s 800-number telephone service continues to grow, and SSA’s surveys show that callers prefer to use the telephone for more and more of their business. Customer feedback also indicates that customers would like to complete their business in a single contact. Over time, SSA will likely need to restructure how it does business to cost-effectively meet changing customer preferences; this may ultimately involve office closures. Issues of where, how, and by whom work will be done entail sensitive human resources issues and may have negative impacts on local communities; to resolve these, SSA will need to work closely with its unions, employee groups, and the Congress. To improve its 800-number service, for example, SSA has many initiatives under way, which we are reviewing. SSA currently has 37 teleservice centers. Studies indicate that this is far too many teleservice centers to operate SSA’s 800-number system in the most cost-effective way. A 1990 report from HHS’ Inspector General, for example, indicates that SSA could operate more efficiently and cost-effectively with one-third the number of centers it currently has. 
SSA has studied this issue but has not developed specific plans for reducing the number of teleservice centers. As the 21st century approaches, SSA faces dramatic challenges: funding future retirement benefits, rethinking disability processes and programs, combating fraud and abuse, and restructuring how work is performed and services are delivered. How SSA performs in these areas can have a powerful effect on its success in fulfilling its mission and on the public’s confidence in this agency and the federal government. To help SSA meet these challenges, the Congress took steps through the independence legislation to build public confidence in and strengthen the agency. The independence legislation provides that SSA’s Commissioner be appointed by the President with the advice and consent of the Senate for a fixed 6-year term, with removal from office by the President only for a finding of neglect of duty or malfeasance in office. As the Congress was considering the legislation, we testified that a fixed term of several years for the Commissioner would help stabilize and strengthen SSA’s leadership. We continue to support the need for a fixed term. The legislation also calls for a fixed 6-year term for a Deputy Commissioner, also to be appointed by the President with the Senate’s advice and consent. The Commissioner and Deputy Commissioner head the leadership team needed to address the agency’s existing problems and manage its future challenges. SSA’s efforts to maintain an effective cadre of leaders are complicated by the impending retirement of many of its executives and managers and by the absence of a Commissioner and Deputy Commissioner with the stability of fixed terms. This leadership must be in place for SSA to progress on the four fronts we have highlighted. 
First, SSA must step up to its role as the nation’s expert on Social Security issues; it is uniquely positioned to inform the public policy debate on the future financing and structure of Social Security. Second, SSA must redesign the disability claims process and place greater emphasis on return to work in its disability programs. To increase the redesign project’s likelihood of success, SSA needs to focus on those initiatives most crucial to producing significant measurable reductions in claims-processing time and administrative cost. SSA also needs to place greater emphasis on return to work by changing both the design and administration of the disability programs. Third, SSA must better protect taxpayer dollars. As the administrator of the nation’s largest cash welfare program, SSA must ensure program integrity in SSI. Reports of fraud and abuse trigger public perceptions that SSA is not making cost-effective and efficient use of taxpayer dollars. Finally, SSA must manage technology investments and its workforce and make difficult decisions about handling increasing workloads with reduced resources. It must also continue to focus on and closely manage its year 2000 conversion to help ensure that SSA will move into the 21st century with systems that function correctly. Moreover, as SSA prepares to meet greater demands and changes in its employee and client base, it may have to make difficult workforce decisions to better respond to customer needs. For example, SSA may need to close offices and move its workers to different locations to better meet growing demand. In an environment of shrinking budgets and increased expectations for government agency performance, ensuring that agency decisions are based on comprehensive planning and sound analyses will be even more essential. SSA’s success in meeting these challenges is critical. The agency is all-important, touching the lives of almost all Americans.
How it meets its challenges as it moves into the next century can make a significant difference in the well-being of America’s vulnerable populations—the aged, disabled, and poor—and in how the public feels about its government. In commenting on a draft of this report, SSA discussed the accomplishments of Commissioner Chater during her tenure and stated that many challenges remain. The agency also made technical comments on our report, which we incorporated where appropriate. See appendix II for a copy of the agency’s comment letter. We are sending copies of this report to the Commissioner of the Social Security Administration and other interested parties. Copies also will be available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215 or Cynthia M. Fagnoni, Assistant Director, at (202) 512-7202. Other major contributors to this report include Gale C. Harris and Valerie A. Rogers.

SSA performance measures:
- Percent of public “very well informed” or “fairly well informed” about Social Security
- Number of personal earnings and benefit estimate statements issued upon request and automatically
- Percent of people who rate SSA service as “courteous” or “very courteous”
- Percent of people who rate SSA service as “good” or “very good”
- Percent of Social Security numbers issued within 5 calendar days after receipt of needed information
- Percent of earnings items posted correctly
- Percent of OASI claims paid when due or within 15 days from effective filing date
- OASI initial payment accuracy rate
- SSI initial payment accuracy rate
- Percent of DI claims decided within 6 months after onset or within 60 days after the effective filing date, whichever is later
- Percent of SSI disability claims decided within 60 days of filing
- Number of DI and SSI initial disability claims processed
- Percent of Disability Determination Service decisional accuracy
- Percent of hearings decisions made and notices sent within 120 days of filing
- Percent of budgeted continuing disability reviews (CDR) processed to completion
- Percent of people with an appointment who have waiting times of 10 minutes or less in a field office
- Percent of people without an appointment who have waiting times of 30 minutes or less in a field office
- Percent of callers who reach 800 number within 5 minutes
- Percent of calls handled accurately

Related GAO Products

SSA Disability Redesign: Focus Needed on Initiatives Most Crucial to Reducing Costs and Time (GAO/HEHS-97-20, Dec. 20, 1996).
SSA Benefit Statements: Well Received by the Public but Difficult to Comprehend (GAO/HEHS-97-19, Dec. 5, 1996).
Social Security Disability: Alternatives Would Boost Cost-Effectiveness of Continuing Disability Reviews (GAO/HEHS-97-2, Oct. 16, 1996).
Supplemental Security Income: SSA Efforts Fall Short in Correcting Erroneous Payments to Prisoners (GAO/HEHS-96-152, Aug. 30, 1996).
Supplemental Security Income: Administrative and Program Savings Possible by Directly Accessing State Data (GAO/HEHS-96-163, Aug. 29, 1996).
Social Security Administration: Effective Leadership Needed to Meet Daunting Challenges (GAO/T-OCG-96-7, July 25, 1996, and GAO/HEHS-96-196, Sept. 12, 1996).
SSA Disability: Program Redesign Necessary to Encourage Return to Work (GAO/HEHS-96-62, Apr. 24, 1996).
PASS Program: SSA Work Incentive for Disabled Beneficiaries Poorly Managed (GAO/HEHS-96-51, Feb. 28, 1996).
Deficit Reduction: Opportunities to Address Long-Standing Government Performance Issues (GAO/T-OCG-95-6, Sept. 13, 1995).
Supplemental Security Income: Disability Program Vulnerable to Applicant Fraud When Middlemen Are Used (GAO/HEHS-95-116, Aug. 31, 1995).
Social Security Administration: Leadership Challenges Accompany Transition to an Independent Agency (GAO/HEHS-95-59, Feb. 15, 1995).
Social Security Administration: Major Changes in SSA’s Business Processes Are Imperative (GAO/T-AIMD-94-106, Apr. 14, 1994).
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

GAO reviewed the challenges facing the new Commissioner of the Social Security Administration (SSA). GAO found that: (1) SSA is ahead of many federal agencies in developing strategic plans, measuring its service to the public, and producing complete, accurate, and timely financial statements; (2) this gives SSA a sound base from which to manage significant current and future challenges; (3) these challenges include the aging of the baby boom generation, coupled with increasing life expectancy and the declining ratio of contributing workers to beneficiaries, which will place unprecedented strains on the Social Security program in the next century; (4) unless Congress acts, Social Security funds will be inadequate to pay all benefits by 2029; (5) SSA, however, has not performed the research, analysis, and evaluation needed to inform the public debate on the future financing of Social Security, the most critical long-term issue facing SSA; (6) SSA has recently taken initial steps to more actively participate in the financing debate by reorganizing and strengthening its research, policy analysis, and evaluation activities; (7) also
challenging SSA have been disability caseloads that have grown by nearly 70 percent in the past decade; (8) to its credit, SSA has undertaken an important effort to fundamentally redesign its inefficient disability claims process; however, while SSA has begun many of its planned initiatives, none is far enough along for SSA to know whether specific proposed process changes will achieve the desired results; (9) SSA has not sufficiently promoted return-to-work efforts in the administration and design of its disability programs; (10) if even an additional 1 percent of the 6.6 million working-age people receiving disability benefits were to leave SSA's disability rolls by returning to work, lifetime cash benefits would be reduced by an estimated $3 billion; (11) in its Supplemental Security Income program, SSA has not done enough to combat fraud and abuse and address program weaknesses; (12) SSA faces increasing responsibilities in the future and must manage its growing workloads with reduced resources; (13) to successfully meet its workload challenges, SSA knows that it must increasingly rely on technology and build a workforce with the flexibility and skills to operate in a changing environment; (14) SSA faces significant challenges, however, in modernizing its information systems, a complex, multiyear effort that could easily cost billions of dollars; (15) compounding this challenge will be the possible loss of many senior managers and executives; and (16) at this critical juncture, effective leadership is needed so SSA can take actions to better ensure its success in the 21st century.
The two main areas of fraud, waste, and abuse involve (1) overpayment of food stamp benefits and (2) trafficking. Concerning the first area, overpayments occur when ineligible persons are provided food stamps and when eligible persons are provided more than they are entitled to receive. In 1996, for example, the states overpaid recipients an estimated $1.5 billion—or 6.92 percent—of the approximately $22 billion in food stamps issued. Overpayments occur for two reasons. First, recipients make errors, either inadvertent or intentional, in providing information to the state caseworker about the recipient household’s size, income, assets, or other pertinent information needed to determine the household’s eligibility and benefit level. Second, state caseworkers make errors in determining either an applicant’s eligibility for food stamps or the appropriate level of benefits. According to 1996 data, recipient errors accounted for 57 percent of the overpayments—36 percent were unintentional errors and 21 percent were intentional. The remaining 43 percent of the overpayments were caused by caseworkers’ errors. It should be noted that errors by participants and caseworkers can also result in underpayments. FCS’ data show that food stamp recipients were underpaid approximately $518 million in fiscal year 1996. In March 1997, we reported on one specific example of food stamp overpayments—payments involving violations of the federal regulations that prohibit inmates of correctional institutions from participating in the Food Stamp Program. By matching automated food stamp and prison records in four states—California, Florida, New York, and Texas—we identified over 12,000 inmates who were included in the households receiving food stamps in fiscal year 1995. These households improperly collected an estimated $3.5 million in food stamps. Subsequently, in August 1997, the Balanced Budget Act of 1997 (P. L. 105-33, Aug. 
5, 1997) included a provision directing the states to ensure that individuals who are under federal, state, or local detention for more than 30 days are not participating in the Food Stamp Program. In response to a request from the Senate Committee on Agriculture, Nutrition, and Forestry, we are currently examining the potential for computer matching to identify other ineligible populations participating in the program. With respect to trafficking (the second main area of fraud, waste, and abuse), program regulations specify that participants must use food stamps only to purchase food items from the retailers authorized by FCS to accept food stamps. Once they receive them, authorized retailers are required to forward food stamps directly to financial institutions for redemption. However, numerous federal and state officials told us that food stamps have essentially become a second currency exchanged by some recipients for cash or non-food items. Trafficked food stamps may change hands several times, but all food stamps must eventually flow through an authorized retailer because only such a retailer can redeem food stamps for cash from the government. Numerous retailers are caught each year accepting food stamps from recipients, giving them a discounted value of the stamps in cash (for example, 70 cents on the dollar), and then redeeming the stamps at full face value from the government. Data on the extent of trafficking between parties prior to reaching authorized retailers are unavailable. However, a 1995 FCS study estimated that up to $815 million, or about 4 percent of the food stamps issued, was exchanged for cash by authorized retailers during fiscal year 1993. The study found that supermarkets, where over three-fourths of all food stamps are redeemed, have low trafficking rates compared with other types of retailers.
The trafficking rate reported for publicly owned supermarkets (i.e., those owned by companies whose stock trades publicly) was less than 0.1 percent of food stamp redemptions, and the rate reported for privately owned supermarkets was 2.6 percent. By comparison, the trafficking rate for small, privately owned food retailers and privately owned retailers that do not stock a full line of food was 15.1 percent of the food stamps they redeemed. The Food Stamp Program is administered by USDA’s FCS in partnership with the states. FCS provides nationwide criteria for determining who is eligible for assistance and the amount of benefits recipients are entitled to receive. The states are responsible for the day-to-day operation of the program, including meeting with applicants and determining their eligibility and benefit levels. In making these decisions, state caseworkers rely on documentation provided by households and information obtained in interviews with the applicants. FCS is also responsible for authorizing retailers to redeem food stamps as well as for monitoring program compliance by the approximately 190,000 stores currently authorized to redeem food stamps. FCS and USDA’s OIG are responsible for investigating retailers suspected of violating program regulations. The OIG performs all the criminal investigations of the Food Stamp Program conducted by USDA and coordinates investigative activities with other federal agencies. Others, such as the Federal Bureau of Investigation (FBI), the U.S. Postal Service, and the U.S. Secret Service, as well as the states, assist in combating fraud. The Food Stamp Program’s Quality Control (QC) System is FCS’ primary tool for evaluating the states’ performance in issuing benefits and determining the level of overpayments.
Under the QC System, the states must review a sample of their household cases each year to determine the accuracy of the eligibility and benefit determinations made by state caseworkers and the extent of payment errors—both overpayments and underpayments. FCS reviews a subsample of each state’s sample to ensure the accuracy of the states’ efforts. FCS then determines the official error rate for each state and a national error rate. If an individual state’s error rate exceeds the national error rate, FCS can sanction the state by requiring it to reimburse the federal government for a portion of the erroneous payments. On the other hand, states that have low error rates are eligible for additional reimbursement from the federal government—referred to as enhanced funding. According to USDA’s data, overpayments in the Food Stamp Program have declined since 1993. At the national level, the overpayment error rate has decreased from 8.27 percent in fiscal year 1993 to 6.92 percent in fiscal year 1996. The 1996 overpayment error rate is the lowest level ever achieved in the program. Since 1995, FCS has increased its emphasis on achieving payment accuracy and has employed various initiatives to assist the states in reducing the number of errors. The Congress appropriated over $3 million for these initiatives. Specifically, FCS’ activities include sponsoring national, regional, and state conferences; providing direct technical assistance to the states; and facilitating the exchange of state information on effective strategies for determining accurate payments. As we reported in 1995, while technical assistance and related steps are undoubtedly useful, the single most critical factor in reducing overpayments is the commitment of the managers of the states’ Food Stamp Programs to aggressively address the error rate problem. Supplementing its efforts to help the states reduce errors, FCS has implemented a new strategy for conducting its sanction activities. 
Historically, for a variety of reasons, the states with high error rates paid little of the sanction penalties FCS imposed. Beginning in 1996, FCS reached agreement with some of these states regarding more than $404 million in penalties owed for unacceptable error rates that occurred in fiscal years 1992 through 1996. FCS agreed to reduce this penalty to $135 million and entered into settlement agreements with these states that establish performance goals tied to payment accuracy. Of the $135 million sanction liability, the states are required to invest almost $35 million in activities that directly lead to the reduction of their error rates. If a state meets its performance goals as set out in the agreement, its share of the remaining $100 million liability will be waived. According to FCS, as a result of these settlements, the states are more actively engaged in activities to reduce errors, which should continue to have a positive effect on improving payment accuracy. With respect to trafficking, our 1995 report stated that FCS’ controls and procedures for authorizing and monitoring the retailers that participate in the Food Stamp Program did not deter or prevent retailers from trafficking in food stamps. Stores that did not meet eligibility criteria were being admitted to the program because the process for authorizing them was flawed. The single most effective deterrent to preventing ineligible retailers from being authorized is a preauthorization on-site visit. However, because of insufficient time or resources, FCS made few visits before authorizing stores to participate in the program. Furthermore, FCS’ monitoring process was inadequate to detect authorized retailers that were violating program regulations. Reports on retailers’ activities, such as total food sales and food stamp redemptions, were often untimely or inaccurate and of limited utility in identifying retailers’ trafficking. 
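The QC comparison described above, in which each state's measured overpayment error rate is set against the national rate to determine sanctions or enhanced funding, can be reduced to a short sketch. The dollar figures and the simple threshold rule below are illustrative assumptions only, not FCS's actual sampling or sanction formulas.

```python
# Simplified sketch of the QC error-rate comparison: a state's
# overpayment error rate, estimated from its reviewed case sample,
# is compared with the national rate. All numbers below except the
# FY 1996 national rate are hypothetical.

def error_rate(overpaid_dollars, issued_dollars):
    """Overpayment error rate as a percentage of benefits issued."""
    return 100.0 * overpaid_dollars / issued_dollars

NATIONAL_RATE = 6.92  # FY 1996 national overpayment error rate (percent)

def qc_outcome(state_rate, national_rate=NATIONAL_RATE):
    # In the actual program, sanctions and enhanced funding follow
    # more detailed rules; this single threshold is a simplification.
    if state_rate > national_rate:
        return "subject to sanction"
    return "eligible for enhanced funding"

# A hypothetical state that overpaid $80 million of $1 billion issued:
rate = error_rate(80e6, 1e9)
print(round(rate, 2), "->", qc_outcome(rate))  # 8.0 -> subject to sanction
```

Because the official rates come from a state sample that FCS re-reviews, the inputs to such a comparison are estimates, which is one reason settlement agreements rather than mechanical penalties have been used.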
In addition, we reported that FCS had only 46 investigators nationwide to conduct investigations of retailers suspected of trafficking or violating other program regulations. Since our report, FCS has initiated several actions to reduce trafficking in the program. For example, FCS has contracted with a number of different companies to make 35,000 to 40,000 store visits by the end of fiscal year 1998. These visits will be made primarily to new stores requesting approval to participate in the program and to stores requesting reauthorization to participate in the program. In addition, FCS reports that it has improved its Store Tracking and Redemption System by, for example, developing a profile that enables FCS to better identify stores that may be trafficking in food stamps or selling ineligible items. The system also includes information on sanctions taken against the stores that violate program regulations. The system is used to screen all new applications for participation in the program in order to keep ineligible retailers from returning to the program during a period in which they are disqualified. The system is also used to monitor stores’ redemptions and to identify retailers for investigation or other administrative actions. For the states using EBT systems, FCS has developed an automated system that identifies transaction patterns in EBT data that indicate trafficking violations. In addition to FCS’ oversight activities, USDA’s OIG investigates retailers that illegally use food stamp benefits and coordinates its investigative activities with other federal and state agencies. For example, the OIG is currently looking at food stamp trafficking with state and local agencies in about 30 locations nationwide. The OIG has also taken an active role in monitoring and reviewing EBT systems and developing the automated system to analyze EBT data to identify fraud in the Food Stamp Program. 
Outside of USDA, a number of federal investigative agencies also play a role in the process. For example, the FBI investigates criminal violations of USDA programs if the violation has involved bribery, organized crime, or major fraud perpetrated by federal employees. The FBI focuses its investigations on links between food stamp trafficking and other criminal activities under the Bureau’s jurisdiction, such as narcotics, terrorism, and white-collar crime. The U.S. Postal Inspection Service has primary responsibility for enforcing laws concerning property in the custody of the U.S. Postal Service. The Postal Inspection Service’s main investigative focus for the Food Stamp Program is the theft of food stamps and EBT cards from the Postal Service’s custody prior to receipt by the food stamp recipient. Finally, the U.S. Secret Service investigates food stamp counterfeiting and makes recommendations on security measures relating to EBT cards. States are responsible for investigating and prosecuting individuals suspected of falsifying information in order to obtain food stamps and misusing their food stamps—such as selling their stamps for cash. States sometimes work with FCS or USDA’s OIG to investigate retailers’ fraud and abuse in the program. In an effort to supplement federal efforts to investigate retailers, FCS has used State Law Enforcement Bureau agreements. Under these agreements, FCS provides the states with food stamp coupons to use in conducting their own trafficking investigations. FCS has established agreements with 32 states, but only 10 states have conducted sustained efforts against food stamp trafficking. Food stamp benefits have historically been distributed in the form of printed coupons. In Reading, Pennsylvania, in 1984, however, FCS piloted the use of an alternative delivery system—EBT. Since this pilot, there has been increasing interest in moving to EBT systems. 
Under such systems, recipients receive plastic debit cards to obtain their food stamps and pay for purchases through point-of-sale terminals installed at check-out counters in food stores. At the time of the purchase, recipients enter a personal identification number. The EBT computer then verifies that sufficient funds exist in the recipient’s food stamp account, debits the purchase amount from the recipient’s account, and credits it to the retailer’s account. At the end of each business day, the authorized sales are totaled and funds are transferred electronically to the retailer’s bank account. Currently, 11 states have implemented EBT systems statewide. Eight of these states deliver multiple program benefits with their EBT system, including other federal and state programs. Additionally, 16 states use EBT systems in selected counties. All the remaining states are in the process of implementing EBT systems. Collectively, EBT systems supply almost 20 percent of all food stamps. By eliminating paper coupons that may be lost, sold without any record of the sale, or stolen, EBT systems can help cut back on food stamp fraud. EBT systems reduce the likelihood of benefit theft. Of most importance, however, EBT systems create an electronic record of each food stamp transaction, making it easier to identify and document instances where food stamp benefits are trafficked. EBT data include the following information on each transaction: the exact time of day, the amount, the recipient’s identity, the store’s identity, and the specific cash register in the store. Reviewers can use these data to identify suspicious transactions or transaction patterns. Since 1991, USDA’s OIG has opened 234 cases involving food stamp trafficking as a result of analyzing EBT data. The stores involved in these cases redeemed over $70 million in EBT food stamps, of which the OIG identified over $27 million as being fraudulent. 
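The EBT purchase flow described above (PIN verification, balance check, debit to the recipient's account, credit to the retailer, and end-of-day settlement) can be sketched as follows. The data structures and function names are hypothetical stand-ins for what would be a networked transaction system.

```python
# Minimal sketch of the EBT purchase flow described in the text.
# Accounts and totals are plain dictionaries here; a real EBT system
# would use networked point-of-sale terminals and a central computer.

accounts = {"recipient-001": {"pin": "1234", "balance": 120.00}}
retailer_totals = {}  # store id -> total authorized sales for the day

def authorize_purchase(recipient_id, pin, store_id, amount):
    acct = accounts[recipient_id]
    if pin != acct["pin"]:
        return "DENIED: bad PIN"
    if amount > acct["balance"]:
        return "DENIED: insufficient funds"
    acct["balance"] -= amount  # debit the recipient's food stamp account
    # credit the purchase amount toward the retailer's daily total
    retailer_totals[store_id] = retailer_totals.get(store_id, 0.0) + amount
    return "APPROVED"

def end_of_day_settlement():
    # Daily totals would be transferred electronically to each
    # retailer's bank account.
    return dict(retailer_totals)

print(authorize_purchase("recipient-001", "1234", "store-42", 45.50))  # APPROVED
print(accounts["recipient-001"]["balance"])  # 74.5
print(end_of_day_settlement())  # {'store-42': 45.5}
```

Each authorization would also be logged with the time, store, register, and recipient, which is precisely what makes the transaction record useful for later review.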
The August 1996 Welfare Reform Act explicitly states that EBT data alone can be used as evidence to take action against retailers violating the Food Stamp Act. As a result, FCS can now use EBT data to levy administrative sanctions against retailers caught trafficking, such as permanently barring them from the program and imposing fiscal penalties, without the expense of criminal investigation and prosecution. The legislation also mandated nationwide implementation of EBT systems by October 1, 2002. While EBT systems make a major contribution to reducing certain aspects of food stamp fraud, they will not eliminate all fraud. Even in states where EBT systems have been implemented statewide, trafficking is still occurring. In such cases, a store owner accepts a card, gives the recipient a discounted value of the benefits in the recipient’s account in cash, then claims the full value of the benefits from the government. Furthermore, because EBT systems are simply another vehicle for distributing benefits, they cannot correct fraud, waste, and abuse that occurs during the process of determining eligibility and benefit levels. Unless better ways are found to verify applicant-supplied information and to avoid errors made by state caseworkers, individuals will continue to receive benefits to which they are not entitled, regardless of whether these benefits are distributed by coupon or EBT systems. In addition, moving to EBT systems is not without costs. Substantial investments must be made in computer systems, point-of-sale terminals, and other hardware. FCS reported that initially EBT systems were more expensive to operate than conventional coupon systems. More recent estimates suggest that EBT systems have become more cost competitive. EBT costs are expected to continue to diminish as the technology becomes more widely used. 
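Since EBT data alone can now support action against retailers, a reviewer's first pass is typically a screen over transaction records. The sketch below applies one illustrative heuristic, flagging large even-dollar transactions, which resemble cash exchanges more than grocery purchases; both the rule and the records are assumptions for demonstration, not FCS's or the OIG's actual screening logic.

```python
# Hypothetical screen over EBT transaction records for trafficking
# indicators. Each record carries the fields listed in the text:
# time, amount, recipient, store, and register.

transactions = [
    {"time": "02:13", "amount": 300.00, "recipient": "R1", "store": "S9", "register": 1},
    {"time": "14:40", "amount": 23.17,  "recipient": "R2", "store": "S9", "register": 2},
    {"time": "02:55", "amount": 250.00, "recipient": "R3", "store": "S9", "register": 1},
]

def flag_suspicious(txns, even_dollar_floor=100.00):
    """Flag transactions at or above the floor with no odd cents,
    an illustrative (not official) trafficking indicator."""
    return [
        t for t in txns
        if t["amount"] >= even_dollar_floor and t["amount"] == int(t["amount"])
    ]

for t in flag_suspicious(transactions):
    print(t["store"], t["recipient"], t["amount"])
```

A real screening system would combine many such indicators over months of data before referring a store for investigation or administrative action.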
In any event, however, as we reported in 1994 and continue to believe today, EBT systems will likely be most cost-effective if they are used to deliver food stamps in conjunction with other federal and state assistance programs such as Temporary Assistance to Needy Families and the Special Supplemental Nutrition Program for Women, Infants, and Children. In this way, overhead costs can be spread across a larger program volume and serve the purposes of multiple programs. Thank you again for the opportunity to appear before you today. We would be pleased to answer any questions you may have.

GAO discussed fraud, waste, and abuse in the Food Stamp Program, focusing on: (1) the nature and extent of the problem; (2) the roles and responsibilities of the major federal agencies involved in minimizing it; and (3) the potential of electronic benefits transfer (EBT), a system of benefit delivery that replaces the traditional food stamp coupons with a debit card, to reduce it.
GAO noted that: (1) fraud, waste, and abuse in the Food Stamp Program takes two primary forms: (a) overpayments to food stamp recipients; and (b) the use of food stamps to obtain cash or other non-food items, a process known as trafficking; (2) according to the Department of Agriculture (USDA), in fiscal year (FY) 1996, about $1.5 billion was paid out to individuals who either should not have received any food stamps at all or received more than they were entitled to receive; (3) these overpayments represented nearly 7 percent of the approximately $22 billion in food stamps provided; (4) overpayments are caused by both inadvertent and intentional errors made by recipients and errors made by state caseworkers; (5) program regulations specify that recipients use food stamps only to purchase food from authorized retailers; (6) USDA recently estimated that up to $815 million in food stamps, approximately 4 percent of the food stamps issued, was traded for cash in FY 1993 through retail stores; (7) numerous retailers are caught each year paying recipients a discounted value of the stamps and then redeeming the stamps at full face value; (8) there are no reliable data on the extent of trafficking that occurs before food stamps are redeemed by an authorized retailer; (9) the Food and Consumer Service (FCS) administers the Food Stamp Program in partnership with the states, which are responsible for the day-to-day operation of the program, including determining applicants' eligibility and benefit levels; (10) FCS provides criteria for determining eligibility and the amount recipients are entitled to receive; (11) FCS monitors the accuracy of state benefit determinations and operates a system of incentives and sanctions to encourage states to reduce the errors; (12) FCS approves retailers to participate in the program and monitors and investigates their activities to identify those potentially violating program regulations; (13) USDA's Inspector General devotes a
substantial share of its audit and investigative resources to identifying program irregularities, especially trafficking; (14) EBT systems have the potential to reduce some aspects of the fraud, waste, and abuse in the Food Stamp Program, but not others; (15) by providing a clear paper trail of all food stamp transactions, EBT systems help reviewers identify trafficking activities and remove or prosecute retailers engaged in such activities; (16) EBT systems also address problems associated with food stamp theft; and (17) EBT cannot address problems associated with determining eligibility or benefit levels.
Active and reserve Marine Corps artillery units use the M198 howitzer for all direct support, general support, and reinforcing artillery missions. Army light cavalry units use the M198 for direct support, whereas airborne and airmobile infantry units use the M198 only for general support and reinforcing missions. The M198 howitzers, first delivered to the services in 1979, are approaching the end of their 20-year service life. Marine Corps and Army users of the M198 want to replace the 15,600-pound howitzer with a lighter-weight weapon to ease the operational burden on crews and to improve air and ground mobility. The Marines have found it difficult to tow the M198 over soft terrain, and only their heavy-lift helicopter can move the weapon by air. With the Marine Corps leading the development of a new light-weight howitzer, the two services signed a joint operational requirements document in September 1995 calling for a 155-mm howitzer that (1) weighs 9,000 pounds or less and (2) fires munitions at least 30, but preferably 40, kilometers. Initially, the Marine Corps wanted to accelerate development of a light-weight howitzer to enable fielding by 2001 or earlier but found that acceleration would be too costly. The Marine Corps now plans to field the first light-weight howitzers in fiscal year 2002, and the Army in fiscal year 2005. The Marine Corps wants to buy 598 of the light-weight howitzers and the Army 347. Development and procurement of these weapons are estimated to cost about $1.4 billion. Marine Corps and Army users of the M198 howitzer have reported a variety of recurring maintenance problems. Some of the more serious problems have been resolved. According to the Marine Corps and Army weapon system managers, solutions have been identified for most of the other problems, but funds have not been provided to make the fixes.
Data compiled from Marine Corps and Army equipment readiness reports indicate that despite these problems, the availability of the M198 has not been substantially affected. Although some units reported availability dropping below 70 percent in some instances, this condition was usually corrected within a few months. In 1994, a joint Marine Corps and Army team of experts visited five major active duty Marine Corps and Army artillery units to identify and quantify the problems with the M198 howitzer, as reported by using units. This team found 15 recurring problems. The most serious recurring problems reported were the following:
- Trunnion bearings were worn or had disintegrated. Worn or disabled bearings affect the alignment of the gun tube and the accuracy of projectiles fired from the howitzer. Improper alignment could cause projectiles to miss the target and could endanger friendly troops.
- Cracks were discovered in the towers of the upper carriage when the howitzer was fired with the maximum powder charge. These towers hold the gun tube in place. If the cracks in the towers are too severe, the gun tube could back up too far during recoil and injure the crew.
- Travel locks crack and sometimes break when the M198 is being towed. If the locks were to break completely during movement of the M198, the gun tube could fall to the ground. Broken travel locks may damage the M198’s elevation mechanism and equilibrators and make the weapon inoperable.
- Leaks found in recoil mechanism seals could limit howitzer operations. A properly operating recoil mechanism absorbs the shock of the weapon when it is fired and returns the tube to the proper position. Severe leaks might cause metal contact, which could result in seizure of parts and general failure of the recoil mechanism.
- Tires are prone to blowouts because they were not rated to carry the weight of the howitzer. According to the Army weapons manager, during 1994, users of the M198 reported about 25 to 30 blowouts a month.
When a blowout occurs, the howitzer cannot be fired, and crews must either wait for a new tire to be mounted by direct support maintenance personnel or use one of the prime mover’s tires. In addition, delays in the delivery of certain parts have had an adverse effect on the availability of the M198 fleet. According to the Army and Marine Corps weapons managers who are responsible for maintaining the M198 howitzer, problems with the trunnion bearings, upper carriage towers, and recoil mechanisms have been or are being resolved. They also said that they have identified potential fixes to the travel locks and the tires but have not been provided the funds to implement them. Trunnion bearings can now be replaced by maintenance units located near the users. Until recently, only depot-level repair shops could replace these bearings, but authority to replace the bearings was delegated to the Marine Corps’ fourth echelon maintenance units and the Army’s general support units, which are generally collocated with users. In January 1994, the Marine Corps and the Army completed a modification intended to keep upper carriage towers from cracking. According to the M198 weapons managers, users have not reported any cracks in the towers since the repairs were completed. According to the Department of Defense (DOD), the cause of recoil mechanism leaks is not entirely understood. For howitzers in long-term storage, leaks have been attributed primarily to seals that failed if the mechanism was not exercised regularly. Exercisers for the recoil mechanism are being developed and are expected to be fielded by June 1996. However, the cause of leaks found in howitzers used on a daily basis has not been determined. According to the Army weapons manager, the Army’s Armament and Chemical Acquisition and Logistics Activity (ACALA) has considered installing a shock-absorbing system on the M198 to resolve the problem of cracks in the travel lock area. 
However, ACALA has not been provided the estimated $750,000 needed to fully study this potential solution. The manager said that although the Army and Marine Corps could simply strengthen the travel lock area, stress would be transferred to other points of the howitzer that could be more difficult to identify and repair. Users have asked for better tires for the M198. According to the Army weapon system manager, several manufacturers have recently offered the Army tires that may be capable of supporting the weight of the M198. The Army is testing these tires. However, the weapons manager has not been provided funds to buy them. Although recurring maintenance problems are reported, availability data reported by using units to Marine Corps and Army weapons system managers indicate that the M198 fleet has a high availability rate. The availability rate reported by Army users from January 1989 through August 1995 averaged about 93 percent. During the same period, the availability rate reported by Marine Corps M198 units averaged 89 percent. Army artillery unit officials said that the M198 could have relatively high equipment availability rates and recurring maintenance problems at the same time. If a problem can be repaired within 24 hours, it is not reflected in equipment readiness reports. Our examination of one active Army battalion’s maintenance records (June 1993 to March 1995) showed that seven of its 24 M198s had problems that rendered them inoperable for more than 10 days. Of the seven, two were inoperable for 30 and 39 days, respectively. However, according to the maintenance officer of this battalion, a majority of the problems were fixed within 24 hours. There is no consistent view regarding the state of the M198. Some users of the M198 believe that these weapons will not last until a new howitzer is fielded in fiscal year 2002.
Officials of the Army’s 18th Field Artillery Brigade expressed concern that the howitzer may not last its expected 20-year service life without a significant life-extension or product improvement program. They said that to reduce maintenance problems and extend the service life of the M198, about half of their oldest weapons are being sent to ACALA to be rebuilt and are being replaced with newer M198s from lower priority Army Reserve and National Guard units. Similarly, the Marine Corps has begun to rotate newer M198s from maritime prepositioning stocks to active artillery units. According to the 1st Marine Division, the M198’s 20-year service life is overly optimistic because maintenance problems already identified may be symptomatic of other problems that have not yet been identified. In addition, a former artillery battalion commander of the division noted that the division’s M198s receive the greatest use because in addition to providing direct support, general support, and reinforcing missions, they also lend their M198s to other Marine artillery units for training in the rough terrain of 29 Palms, California. Contrary to the views of Army and Marine Corps users, the Army’s M198 weapons manager told us that the M198 can be maintained in service indefinitely, since direct or general support repair facilities can replace almost all parts, and enough M198-unique parts are available to meet the services’ peacetime needs for 2-1/2 years. However, according to DOD, nonavailability of common user parts procured and distributed by the Defense Logistics Agency has created some significant delays in the repair of some M198s. The Marine Corps’ weapons manager does not believe that the M198s can be sustained indefinitely but said that recent initiatives to repair major problems have improved the availability of the howitzer. Availability rates for the Marines’ M198s have remained above 91 percent from May through August 1995. 
According to users, Marine Corps doctrine, and systems development officials, poor mobility of the M198 is the main reason requiring its replacement. A new, light-weight howitzer, currently in development, is expected to be easier to operate and move on the ground and in the air. However, a howitzer weighing 9,000 pounds may not be capable of firing munitions any farther than the M198. To achieve ranges beyond those of the M198, the new howitzer would have to be made heavier, or a new family of extended-range munitions would need to be developed. The XM982, an extended-range rocket-assisted projectile currently being developed under a separate program and expected to be usable in the new howitzer, may achieve the desired 40-kilometer range. The 5-ton truck assigned as the Marine Corps' prime mover of the M198 has difficulty towing the 15,600-pound howitzer over soft terrain such as sand. According to an artillery systems development official, although the Gulf War was the perfect situation for artillery because there was no mud, the Marine Corps found it difficult to move the M198 by land and air during Operation Desert Storm. To resolve the problem, the Marine Corps is remanufacturing its 5-ton truck fleet with a stronger power train and a 22,000-pound towing capacity, which will allow it to move the M198 over most types of terrain. This program is funded, and the first remanufactured vehicles are expected to be delivered in fiscal year 2001. The Marines can now airlift the M198 only with its CH-53E heavy-lift helicopter and only under optimal weather conditions. The Marine Corps has assumed that its new medium-lift aircraft now in engineering and manufacturing development, the MV-22 Osprey, will be able to lift the new light-weight howitzer. However, Osprey prototypes have not demonstrated that they can lift the required 8,300 pounds or that they can lift actual cargo. 
Program officials are optimistic that the Osprey will be able to lift a 9,000-pound load safely but told us that they do not know whether a howitzer can be made sufficiently aerodynamic and stable to allow for its safe movement by the Osprey. Although it uses the same truck, the Army has had fewer problems towing the M198 than the Marine Corps. The Army’s 18th Airborne Corps successfully transported the M198 in the sand throughout Operation Desert Storm. Army and Marine Corps officials told us that the reason for the difference may lie in how the two services use the M198. The Marine Corps uses the M198 for direct support and general support missions. The direct support mission requires the M198 units to closely follow supported units, often over difficult terrain. The Army uses the M198 only for general support missions, which may allow firing units to avoid difficult terrain. The Army has no problem lifting the M198 with its medium-lift CH-47D helicopter, a system the Marine Corps does not own. The CH-47D can lift up to 22,000 pounds of cargo and easily carries the M198, its crew, and a limited load of ammunition, in all but the hottest weather. The Army and Marine Corps have been testing two light-weight howitzer prototypes, and a third is expected to be available for a shoot-off in fiscal year 1996. While these prototypes are expected to meet the weight requirement, they probably will not fire beyond 30 kilometers. DOD said that targets beyond 30 kilometers can be attacked with the extended range Multiple Launch Rocket System, by aircraft, or by a new rocket-assisted projectile currently in development. According to the Joint Operational Requirements Document (JORD) for a new light-weight howitzer, it must be able to fire projectiles 30 kilometers, which is the same range as the M198’s. 
The Army agreed to this range, although it had initially desired a light-weight howitzer with a range of up to 40 kilometers to enable counterfire against other countries’ artillery that can currently fire to that distance. The JORD now states that 40 kilometers is the desired range. However, views within the Marine Corps artillery community have differed on what the range should be. On one hand, several Marine Corps officials told us that mobility is the primary reason for wanting a lighter-weight howitzer. Those artillerymen with a direct support mission favored mobility over range. On the other hand, artillerymen with general support and reinforcing missions said they need additional range to accomplish their counterfire mission. One artillery battalion commander told us that the Marine Corps should not invest in a new howitzer that will not fire projectiles to distances significantly greater than the M198. Not having the mobility problems of the Marine Corps, the Army had wanted to take a more measured approach to the development of a light-weight howitzer to gain additional range. However, according to an official of the Program Executive Office for the light-weight howitzer development program, the Army concluded that insistence on a 40-kilometer range could delay the howitzer’s development up to 3 years. To avoid such a delay, the Army and Marine Corps agreed that the JORD would specify a minimum range of 30 kilometers and a desired range of 40 kilometers. According to DOD, technical and simulation work led to the determination that the optimal range for a towed weapons system is 30 kilometers. The JORD working group, composed of user representatives and technical experts, determined that a towed howitzer weighing 9,000 pounds and firing 40 kilometers was not technically feasible. 
In addition to requiring a longer development time, achieving a 40-kilometer range would require a propellant development program, which would greatly increase the cost and risk of the light-weight howitzer development program. Under another program, the Army is developing the XM982, a 155-mm rocket-assisted projectile that is expected to fire to a range of 40 kilometers. Since the XM982 is not a precision-guided projectile, it will not be used for close support missions. If it successfully reaches the desired 40-kilometer range, the XM982 will primarily be used for counterfire missions. In written comments (see app. I), DOD agreed that maintenance problems of the M198 alone do not warrant accelerating a replacement and stated that accelerating the acquisition strategy would be cost prohibitive. DOD disagreed on two counts with our conclusion that even with the remaining problems the M198 availability rate remains high. First, DOD stated that operational reliability of the M198 over the last 2 years provides a much more realistic picture than the average availability we calculated for a 6-year period. Army officials said that operational reliability refers to the reliability of individual parts of the M198. However, according to the Army weapons manager, operational availability data on the M198 fleet are incomplete because they have not been systematically collected. He said that the availability data reported in the Unit Readiness Reporting system remain the most reliable indicator of the condition of the M198 fleet. Second, DOD said that the variability, rather than the average, of the operational reliability and availability should be considered. DOD said that between April 1991 and June 1994, the average availability rate for Army units was 91 percent and for generally the same period the rate for the Marine Corps was 88 percent. However, DOD said that during these periods, the rate dropped to 72 percent in some Army and 69 percent in some Marine Corps units. 
Our review of Army data indicates that the lowest availability rate reported for the overall M198 fleet was 80.7 percent in the fourth quarter of fiscal year 1991, but that the rate recovered to 91.7 percent the following month. Individual Army battalions and separate batteries reported availability rates as low as 37 percent for any one month, but in all cases, including for school support and reserve component units, availability was restored to levels above 90 percent within 3 months. We did not review availability reports from individual Marine Corps battalions and batteries but analyzed average monthly availability rates of M198s reported to the weapons manager by each of the four Marine Expeditionary Forces (MEF) from May 1993 through September 1995. According to these data, the lowest availability rate was 68.1 percent, as reported by the 2d MEF in June 1993. However, this unit reported a 90.3 percent availability 3 months later. DOD stated that we appear to argue against the need for the light-weight howitzer. We were not asked for and are not offering an opinion about whether a lighter-weight howitzer is needed. Our objectives were to determine whether maintenance problems with the M198 justify accelerating the development of a replacement and to describe the current light-weight howitzer development program. Technical comments provided by the DOD have been incorporated in this report as appropriate. To obtain information on the current status of the M198 howitzer, we interviewed officials and reviewed documents from the Office of the Assistant Deputy Chief of Staff of the Army for Operations and Plans in Washington, D.C.; the Marine Corps Combat Development and Marine Corps Systems Commands in Quantico, Virginia; the U.S. Army Armament and Chemical Acquisition and Logistics Activity, Rock Island, Illinois; and the Marine Corps Logistics Base, Albany, Georgia. 
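The recovery pattern described in this analysis (dips below 90 percent that are restored within 3 months) amounts to a simple check over a monthly availability series. A minimal sketch; the function name and sample series are our own illustration, not GAO's analysis code:

```python
def recovers_within(rates, floor=90.0, window=3):
    """Return True if every dip below `floor` in a monthly availability
    series is followed by a rate at or above `floor` within `window`
    subsequent months (dips at the very end of the series fail)."""
    for i, r in enumerate(rates):
        if r < floor:
            later = rates[i + 1:i + 1 + window]
            if not any(x >= floor for x in later):
                return False
    return True

# Series loosely modeled on the 2d MEF example: 68.1 percent in
# June 1993, back above 90 percent 3 months later.
mef2 = [92.0, 68.1, 75.0, 85.0, 90.3, 91.5]
```

Applied to the sketched series, the check passes; a unit that dipped and stayed low for the rest of the series would fail it.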
We obtained an operational perspective and discussed maintenance issues with officials from the Army's 18th Airborne Corps and its subordinate units at Fort Bragg, North Carolina, and Fort Campbell, Kentucky, and with officials from artillery and support units of the 1st and 2nd Marine Divisions at 29 Palms, California, and Camp Lejeune, North Carolina. Finally, officials of the Joint Program Management Office, at Picatinny Arsenal, New Jersey; the Army staff; and the Army Field Artillery School, Fort Sill, Oklahoma, provided us with information on the Lightweight 155-mm Howitzer and XM982 development programs. We conducted our review between May and October 1995 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense, the Secretaries of the Army and the Navy, and the Commandant of the Marine Corps. Please contact me at (202) 512-3504 if you have questions about this report. The major contributors to this report are listed in appendix II. R. Gaines Hensley, Assignment Manager Connie W. Sawyer, Jr., Senior Evaluator 
| Pursuant to a congressional request, GAO examined whether the Marine Corps' and Army's reported maintenance problems with the M198 howitzer justify the rapid development of a replacement weapon. GAO found that: (1) by themselves, the maintenance problems with the M198 howitzer do not justify accelerating the development of a replacement; (2) although Army and Marine Corps users of the M198 have experienced recurring maintenance problems with the howitzer, some of these problems have been resolved, and solutions to most of the remaining problems have been identified but not funded; (3) even with these problems, availability of the M198 reported by Army and Marine Corps units over the last 6 years averaged about 93 percent and 89 percent respectively; (4) the Marine Corps believes that the poor mobility of the M198 is a more important reason than maintenance for replacing it with a lighter-weight weapon; (5) however, the anticipated air mobility improvements are dependent on the ability of the MV-22 medium-lift aircraft, now in engineering and manufacturing development, to lift a 9,000-pound howitzer; (6) so far, the developmental aircraft has not shown that it can lift that weight; (7) current light-weight howitzer candidates will fire projectiles to 30 kilometers, the same range as the M198; (8) to achieve the objective firing range of 40 kilometers, the weight of the new howitzer would have to be increased, but an increase in weight could negate mobility improvements; (9) a new munition, the XM982, currently being developed by the Army independent of the light-weight howitzer development program and scheduled to become available in fiscal year 1998, is expected to achieve the desired 40-kilometer range; and (10) however, it has not yet been tested in the competing light-weight howitzer prototypes. |
The Census Bureau conducted A.C.E. on a sample of areas across the country to estimate the number of people and housing units missed or counted more than once in the census and to evaluate the final census counts. The statistical methodology underpinning A.C.E. assumes that the chance that a person is counted by the census is not affected by whether he or she is counted in A.C.E., or vice versa. Violating this “independence” assumption can bias the estimate of the number of people missed in the census and thus either overstate or understate the census undercount. The Bureau’s procedures called for it to go to great lengths to maintain this independence. As illustrated in figure 1, the Bureau developed separate address lists—one for the entire nation of about 120 million housing units and one for A.C.E. sample areas—and collected data through two independent operations. For the census, the Bureau mailed out forms for mail-back to most of the housing units in the country; hand-delivered mail- back forms to most of the rest of the country (in an operation called Update/Leave); and then carried out a number of other follow-up operations, the largest of which was called nonresponse follow-up. A.C.E. collected its response data during person interviewing from April 24 through September 11, 2000, with telephone calls or visits to about 314,000 housing units. A.C.E. person interviewing was managed directly out of 12 A.C.E. regional offices, independent of the 12 regional census centers from which the census was managed. A.C.E. regional offices managed person interviewing workflow at the geographic level of the local census office area out of convenience. There were 520 local census offices operating during the census, including 9 in Puerto Rico managed out of the Boston regional office, and person interviewing took place in the area of each. 
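The independence assumption described above is the standard capture-recapture condition underlying dual-system estimation. In simplified form (illustrative numbers and function name are ours; the Bureau's actual estimator involves poststratification and other refinements):

```python
def dual_system_estimate(census_count, survey_count, matched):
    """Simplified Lincoln-Petersen (dual-system) population estimate.

    If being counted in the census and being counted in the survey are
    independent events, the census capture rate can be estimated as
    matched / survey_count, and the total population as
    census_count / capture rate. Dependence between the two counts
    biases the capture rate, and hence the undercount estimate.
    """
    capture_rate = matched / survey_count
    return census_count / capture_rate

# Illustrative numbers (not Bureau data): 900 census records and 800
# survey records in an area, 720 of which match across both sources.
estimate = dual_system_estimate(900, 800, 720)  # 900 / 0.9 = 1,000
```

This is why the Bureau's procedures went to such lengths to keep the two data collections separate: if A.C.E. interviewing influenced who the census counted (or vice versa), the capture rate would no longer reflect the census's true coverage.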
The person interviewing operation had three phases: early telephone interviewing; then interviewing conducted by personal visits; and finally nonresponse conversion, when difficult cases were reassigned to the operation’s best interviewers to reduce the number of noninterviews. In each phase, interviewers relied on an automated survey instrument and databases stored on laptop computers assigned to each interviewer. By having the interviewers use laptops to dial in to the Bureau’s servers, the Bureau could manage cases automatically and remotely. In its initial design for the 2000 Census, the Bureau planned a “one-number” census that would have integrated the results of a survey similar to A.C.E. with the traditional census to provide one adjusted set of numbers by December 31, 2000. However, the U.S. Supreme Court ruled in January 1999 that statistical sampling could not be used to generate population data for reapportioning the House of Representatives. Following that ruling, the Bureau abandoned its plans to conduct a one-number census using statistical methods integrated into the final census counts. On December 28, 2000, the Bureau delivered its population counts for purposes of reapportioning seats in the House of Representatives to the Secretary of Commerce for his transmission to the President. On March 1, 2001, a committee of senior Bureau executives recommended that unadjusted census data be released as the Bureau’s official redistricting data. The Acting Director of the U.S. Census Bureau concurred, and the Secretary of Commerce announced on March 7, 2001, his decision to release the unadjusted data. On October 16, 2001, the Acting Director of the Census Bureau decided that unadjusted data should be used for nonredistricting purposes as well as for postcensus population estimates and for benchmarks for other federal surveys. The Bureau is continuing to investigate issues related to A.C.E. 
and the Census, and the results of that investigation are expected to influence the Bureau's planning for the 2010 Census. To meet our objectives and review the implementation of person interviewing, we examined relevant Bureau program and research documents, such as procedures memorandums and analysis of Census tests. Further, we reviewed data from the Bureau's "cost and progress" management information system, which Bureau officials used to monitor the conduct of census and A.C.E. operations. To help validate and expand on the cost and progress data, we interviewed Bureau headquarters and regional officials. We also interviewed key Bureau officials from headquarters and, where applicable, regional officials responsible for the planning and implementation of the person interviewing operation. Although we verified with Bureau officials that the data were final, we did not independently verify data contained in the Bureau's cost and progress management information system. To obtain a local perspective on how person interviewing was implemented, we interviewed temporary A.C.E. workers in 12 locations, covering over 60 local census office areas (out of the 520 in the United States and Puerto Rico) and corresponding to 8 of the 12 census regions (Atlanta, Boston, Charlotte, Chicago, Dallas, Denver, Los Angeles, and Seattle). To provide further context, we also interviewed A.C.E. managers of seven of these regions. We selected these areas primarily for their geographic dispersion, variation in type of enumeration area, and their proximity to our field offices. The results of the visits could not be generalized to all person interviewing. In addition to these field locations, we performed our audit work on eight of the Bureau's A.C.E. regions at Bureau headquarters in Suitland, MD, as well as in Washington, DC, from June 2000 through January 2001, in accordance with generally accepted government auditing standards. 
On September 7, we requested comments on a draft of this report from the Secretary of Commerce. On October 5, 2001, the Secretary of Commerce forwarded written comments from the Bureau (see appendix I), which we address in the "Agency Comments and Our Evaluation" section of this report. The Bureau appears to have generally completed person interviewing according to its operational schedule. Failure to collect data in a timely manner could have reduced the interview completion rate or increased the Bureau's dependence on less reliable sources of data, such as proxy data, thus reducing the quality of data collected. In addition, the Bureau believes that the more time that passes from Census Day (April 1) to the time of the survey interview, generally the more likely that the survey respondent will err in his or her recall of Census Day information. Finally, data processing and other operations depended on the data from person interviewing and could have been delayed had person interviewing not been completed on time. About 84 percent—439—of 520 local census office areas completed all of their fieldwork at least 2 weeks before the end of the operation in their areas. By the deadline for completing all person interviewing, September 1, 2000, only 45 cases (out of over 314,000 nationally) remained to be completed, and they were all in the area of a single local census office, which had received an extension of its deadline. The timely completion of person interviewing was due in part to the Bureau's ability to conduct a much higher share of the person interviewing caseload by telephone than it had anticipated. Although the Bureau anticipated that about 40,000 cases (about 13 percent) of the person interviewing caseload would be completed by telephone, the actual number was much higher—about 90,000 cases (over 28 percent). 
Bureau officials informed us that more people provided their telephone numbers on their census returns and more people returned their census forms than the Bureau had anticipated. The telephone phase of the operation was limited to cases in which households had provided telephone numbers with their census responses—about 40 percent of the roughly 314,000 total person interviewing cases. As figure 2 illustrates, the share of the workload completed by telephone varied across regions, ranging from 19 to 34 percent. It also varied across local census office areas, ranging from less than 1 to over 55 percent. Bureau officials explained that this variation was related to the eligibility criteria, which further limited telephoning to households with city-style addresses (for example, 123 Main Street) that were not in small multiunit dwellings. Completing interviews over the telephone reduces the travel time for interviewers and can thus decrease the cost of each interview completed. Person interviewing did not progress equally in all local census office areas. About 3 percent of the national person interviewing caseload had to be reassigned to nonresponse conversion, which took place in the final 2 weeks of interviewing in a given area. This compares with the roughly 2 percent of person interviewing that was completed during an ad hoc nonresponse conversion operation in 1990. As shown in figure 3, about 36 areas out of the 520 had over 10 percent of their cases reassigned to this last phase, and 9 areas had more than one-fifth of their caseloads reassigned to this phase. The New York region had almost 13 percent of its caseload referred to nonresponse conversion. Bureau officials told us that, in response, the New York region brought in professional interviewers from other nondecennial survey work to conduct its nonresponse conversion, hoping to ensure a high-quality interview process. 
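The telephone eligibility criteria described above reduce to a simple predicate over each case. A minimal sketch with illustrative field names (not the Bureau's actual data layout): the household must have provided a telephone number, have a city-style address, and not be in a small multiunit dwelling.

```python
def phone_eligible(case):
    """Eligibility for the early telephone interviewing phase, per the
    criteria the report describes. Field names are hypothetical."""
    return (case.get("phone_number") is not None
            and case.get("city_style_address", False)
            and not case.get("small_multiunit", False))
```

The variation in telephone completion rates across office areas then follows directly from how many of an area's cases satisfy this predicate.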
During the person interviewing operation, the Bureau carried out a quality assurance program, which focused primarily on detecting interviews falsified by interviewers. Bureau officials designed the person interview quality assurance program to detect when interview results had been submitted but the interview had not been done, because Bureau research suggested that the most common type of falsification was the falsification of an entire interview. Outside the quality assurance program, a number of operational indicators associated with data quality in the past suggest that data quality may have varied locally. Under the Bureau’s quality assurance program, regional offices were to telephone or visit a 5-percent random sample of all person interview cases to determine whether an initial person interview had actually taken place. Further, according to headquarters quality assurance managers, regional quality assurance managers were to select about another 5 percent relying on automated “outlier reports” and other criteria. For example, supervisors were required to select additional cases for quality review when outlier reports showed that an interviewer had a relatively high percentage of vacant housing units or interviews conducted at unusual hours and thus might be falsifying data. Every interviewer was to have at least one case covered by quality assurance. As an additional check, the Bureau provided quality assurance supervisors with reports on respondents’ names so that they could look for indicators of possible falsification by interviewers, such as names of famous characters/people or multiple respondents with the same name. Figure 4 illustrates the cumulative share of each of the 12 regions’ person interviewing caseload that was reviewed by the quality assurance program. By the end of the operation, the national share exceeded 11 percent, and each of the 12 regions was near or exceeded the target ratio of about 10 percent of the person interview workload. 
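The two-part selection the quality assurance program used, a roughly 5-percent random sample plus supervisor-targeted cases flagged by outlier reports, can be sketched as follows. The field names, the vacancy threshold, and the random seed are hypothetical; this is an illustration of the selection rules the report describes, not Bureau software.

```python
import random

def select_qa_cases(cases, interviewer_stats, random_rate=0.05,
                    vacancy_threshold=0.5, seed=2000):
    """Select person-interview cases for quality assurance review:
    a roughly 5-percent random sample of all cases, plus every case
    worked by an interviewer whose outlier statistics (for example, an
    unusually high share of vacant units or interviews at odd hours)
    suggest possible falsification.
    """
    rng = random.Random(seed)
    random_sample = [c for c in cases if rng.random() < random_rate]

    flagged = {
        interviewer for interviewer, stats in interviewer_stats.items()
        if stats["vacant_share"] > vacancy_threshold or stats["odd_hours"]
    }
    targeted = [c for c in cases if c["interviewer"] in flagged]

    # Deduplicate while preserving order; a fuller sketch would also
    # enforce at least one selected case per interviewer.
    seen, selected = set(), []
    for c in random_sample + targeted:
        if c["case_id"] not in seen:
            seen.add(c["case_id"])
            selected.append(c)
    return selected
```

Because targeted cases are added on top of the random sample, areas with more flagged interviewers naturally end up with review percentages above the random rate, consistent with the variation shown in figure 4.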
Headquarters A.C.E. quality assurance managers said that the percentage of the interview caseload selected for quality assurance review was expected to vary depending on a number of local circumstances. For example, where quality reviews raised the suspicion of fraudulent interviews, supervisors were to select more of those interviewers' cases for review, further increasing the percentages reviewed by quality assurance in those areas. As figure 4 illustrates, in some of the regions with higher percentages of cases suspected of falsification by interviewers, supervisors did indeed select a higher percentage of cases for review. Nationally, less than 3 percent of the quality assurance cases were suspected of falsification, although across regions the percentage varied from about 1.3 to 5 percent. In comparison, Bureau evaluations of a 1998 dress rehearsal of person interviewing reported suspected falsification rates of about 0.9 to 3.1 percent of the quality assurance cases at the three different rehearsal sites. Although the sites of the dress rehearsal were not representative of the whole nation, they provide a reasonable benchmark for the 2000 census. As the Bureau noted in its response to our draft report, none of the sites were exceptionally hard-to-enumerate areas, which tend to have higher rates of falsification. Thus, the fact that 6 of the 12 regions, each containing hard-to-enumerate areas, had rates within this range suggests that the Bureau's person interviewing experienced low rates of suspected falsification in 2000. Each case suspected of falsification was to be reinterviewed, as was the entire workload of any interviewer found to have actually falsified a case. A total of 1,004 cases (2.8 percent of the quality assurance caseload) had their data replaced by these reinterviews. 
After further investigating the cases suspected of falsification, Bureau officials believed that about 0.1 percent of the randomly selected quality assurance caseload stateside (0.2 percent including Puerto Rico) was falsified and assumed that this percentage is generalizable to the entire A.C.E. sample. The percentage of the replaced interviews that contained errors due to honest interviewer mistakes, poor respondent recall, or reasons other than falsification was not reported by the Bureau’s quality assurance program as part of the failure rate. But data that the Bureau provided later show that 2.1 percent of all randomly selected cases stateside (and 2.1 percent including Puerto Rico) were replaced. We discussed the utility of the Bureau defining, measuring, and reporting a broader measure of quality assurance failure—including failure for reasons other than falsification—with the Associate Director for Decennial, and he concurred that the Bureau should consider this in the future. All cases in the final phase of person interviewing, nonresponse conversion, were excluded from the quality assurance program. Headquarters officials said that because (1) the quality assurance was intended primarily to identify falsified interviews, (2) only the most experienced workers were used in the final phase of interviewing, and (3) the most experienced workers falsify less, no quality assurance was deemed necessary for that phase. They also pointed out that there would not have been time to check the cases completed on the last days of the operation, and a relatively small percentage of the total person interview caseload fell into this phase of the operation. According to Bureau data, the nonresponse conversion phase had a workload of about 10,000 cases, or about 3 percent of person interviewing cases nationwide. 
According to Bureau data, person interviewing collected information on current residents in almost 100 percent of the cases for housing units that existed, were habitable, and were not vacant. Following the change in the census design after the 1999 Supreme Court ruling, the Bureau no longer specified a goal or target for the person interviewing response rate. However, this response rate exceeded the 95 percent expectation expressed by the Bureau's Field Directorate in internal memorandums in 1997, the 98 percent target the Bureau set prior to the Supreme Court ruling, and the 98.6 percent response rate achieved during a similar survey in 1990, the 1990 Post-Enumeration Survey. All regions exceeded 99.7 percent, and only three local census office areas had person interview response rates lower than 98 percent. However, not all of these interviews obtained complete information on the household. About 5.1 percent of cases nationally were recorded as "partial" interviews—interviews missing either the age, sex, or Census Day residency status for one or more household members. The regions varied in their rate of partial interviews from about 1 percent to over 8 percent. When interviews were not complete, the missing data were to be provided through statistical methods, and this would be a source of error in the resulting A.C.E. calculations. And, as we have reported previously, low interview completion rates could have resulted in some segments of the population being underrepresented in the A.C.E. data, adversely affecting the accuracy of any A.C.E.-based adjustments. Bureau officials believe that numerous studies over the years have shown that their procedures for dealing with missing data have acceptable error levels. Furthermore, Bureau data show that about 5 percent of the household interviews were completed with proxy respondents, such as neighbors. 
According to Bureau research, proxy interviews do not generally provide information as reliable as interviews with household members, and this can be a source of error in A.C.E. calculations. Proxy interviews are also more likely to provide only partial information. The 2000 proxy rate exceeded the 2-to-3-percent proxy rate experienced during the 1990 Post-Enumeration Survey. The Boston region reported as little as 2 percent of its caseload completed by proxy, and the New York region had over 8 percent completed by proxy. Local variations in data quality may affect the accuracy of A.C.E. results for some segments of the population. Although national level data are important for determining broad trends, they often mask implementation challenges occurring at the local level. For example, as figure 5 shows, most local census office areas relied less on proxies than the national effort did, but 49 had to complete over 10 percent of their total caseloads with proxy respondents. Bureau officials said that the local areas with the highest proxy rates tended to be dense urban areas, such as in New York City, where buildings may have had restricted access, and interviewers had to rely on apartment managers for information. Similarly, most local census office areas had rates of partial interviews near or below the national rate of about 5.1 percent, but 37 had rates exceeding 10 percent, as shown in figure 6. Bureau officials explained that many of these areas with the highest partial interview rates were areas with higher proxy rates as well because proxy interviews are less likely to provide complete data. Most local areas were near or below the nation’s 0.1 percent final nonresponse rate on person interviewing cases, although one area had a nonresponse rate of 2.3 percent, and nine local areas exceeded 1 percent. Most local census office areas completed less than or about the nationwide 3 percent of their caseload during nonresponse conversion. 
As shown in figure 3, the percentage of the person interviewing workload completed during nonresponse conversion exceeded 10 percent in 36 of the 520 local census office areas and 20 percent for 9 areas. As discussed earlier, nonresponse conversion was not subject to the quality assurance program, although the Bureau relied on its best workers for this stage of interviewing. The Bureau reports that quality assurance was not done during nonresponse conversion because that stage involved getting cooperation from uncooperative respondents, and the later A.C.E. field operation, person follow-up, would serve as a form of quality assurance on these interviews. Early in the person interviewing operation, the Bureau experienced and resolved problems with a critical function in its automated work management system that was to allow supervisors to selectively reassign work among interviewers. The software was to enable supervisors to reassign cases that either had been sent to them for review or needed to be reassigned from a laptop computer that was broken or had been issued to someone who was no longer interviewing. For cases being reassigned that had not been flagged for supervisory review, the software was to ask the supervisor whether to disable the cases on the original laptop. If the cases were being reassigned to a different interviewer, supervisors were to disable the cases. However, according to Bureau officials, the software contained errors in two different places. One error resulted in cases not being disabled from an interviewer’s laptop even after the supervisor attempted to disable them. Another error resulted in certain cases being reassigned automatically, again without being disabled on the original laptop. Both problems resulted in duplicate records on the laptops, which required supervisors to individually review and delete cases. 
According to Bureau officials, this confused some temporary staff and their supervisors and created some work inefficiencies. For example, some households received unplanned multiple visits by different interviewers. However, according to Bureau A.C.E. officials, the Bureau addressed the underlying programming error within 2 weeks, and the operation proceeded without a reported recurrence of this problem. An accurate address list avoids unnecessary and costly efforts to locate nonexistent residences. A measure of address list quality is the percentage of addresses that are nonexistent or “undeliverable” because interviewers were unable to locate housing units at the listed addresses. During person interviewing, 1.4 percent of housing units to be interviewed were deemed to not exist, and this was less than for other major census questionnaire delivery operations. Table 1 illustrates that the two primary census questionnaire delivery operations both experienced initial undeliverability rates greater than the share of nonexistent housing units during person interviewing. In comparison, person interviewing during the 1990 Post-Enumeration Survey also encountered a higher share of its caseload being undeliverable than did person interviewing in 2000. The list of addresses visited during person interviewing benefited from earlier A.C.E. operations that (1) independently canvassed all addresses in A.C.E. areas, (2) compared the initial A.C.E. address list to the initial census address list, (3) reconciled any differences by field visits, (4) flagged nonexistent addresses in A.C.E. sample areas, and (5) entirely relisted some areas that were canvassed improperly. Person interviewing attempted to visit only addresses that had been confirmed to exist in A.C.E. sample areas. As noted earlier, the Bureau’s procedures called for it to go to great lengths to ensure that A.C.E. operations were kept independent of the census operations to avoid biasing A.C.E. estimates. 
For person interviewing, this meant conducting the operation after the Bureau completed nonresponse follow-up activities in a local census office area; implementing controls to prevent their overlap; sharing status information about nonresponse follow-up with A.C.E. managers; and managing field activities out of 12 separate regional census offices, independent of the 12 regional census centers managing the rest of the census. However, in response to the 1999 Supreme Court ruling against the planned use of sampling to generate population data for reapportioning the House of Representatives, the Bureau reintroduced a census follow-up operation intended to improve census coverage in part by sending enumerators to households that were added to the census address list late and thus may have been missed by earlier census operations. The schedule of this operation, known as “coverage improvement follow-up,” overlapped the beginning of person interviewing, thus increasing the risk that it would violate the independence assumption. According to Bureau officials, a similar operation had overlapped person interviewing for the Post-Enumeration Survey in 1990, and delaying person interviewing further from Census Day would have increased the risk that respondents would not reliably recall their Census Day data. The risk of violating the independence assumption was increased further when the workload of the follow-up operation increased over what was projected. In December 1999 the estimated workload volume for coverage improvement follow-up was about 7.7 million addresses. Most of these visits were to verify the vacancy or nonexistence of housing units previously marked vacant or for deletion. But at up to about 0.8 million addresses, enumeration interviews were more likely to take place. A.C.E. designers believed that the number of coverage improvement cases in a given area would not be enough to affect A.C.E. data collection. 
By June 2000, on the basis of the actual number of addresses being covered by the operation, this workload had risen to about 2.3 million addresses. In addition, after person interviewing had begun, the Bureau decided to revisit every census household for which the population count was unknown. According to Bureau sources, there were about 0.7 million such households. Although the Bureau had strict controls to prevent person interviewing from going door-to-door in areas where census nonresponse follow-up—the primary census field follow-up operation—was still under way, the controls did not apply to other census follow-up operations, such as coverage improvement follow-up. Automated work management rules were to prohibit person interviewing field visits from beginning in a local census office area prior to the earlier of (1) 100 percent completion of nonresponse follow-up in that local census office area or (2) 1 week after 90 percent completion of nonresponse follow-up in all A.C.E. clusters in that local census office area. In addition, A.C.E. management had access to “early warning reports” that provided the daily status of nonresponse follow-up in each area. According to the Bureau, exceptions to the start rules had to be approved in headquarters, and the only software that would allow an earlier start to personal visits was located at headquarters. The Bureau also informed us that the regional offices did not have the ability or the authority to implement exceptions, as any changes required at least Assistant Division Chief level approval. According to the Assistant Field Division Chief for Evaluation and Research, to his knowledge, no such approvals had been given. He said that these rules did not apply to other follow-up operations. Each of the eight regional directors (there are 12 in all) or their deputies with whom we spoke regarding A.C.E. 
independence said that no significant overlap occurred between concurrent census follow-up and person interviewing. However, most of them believed that some overlap was likely, and none of them could be certain of the extent of any actual overlap. Moreover, all of these regional directors or their A.C.E. management staffs also reported not having any communication from the census side of their operations to the A.C.E. operations on the status of these follow-up operations beyond that on the nonresponse follow-up work, underscoring their inability to control the possible overlap of census and A.C.E. fieldwork. Prior Bureau research on sample data collection detected few possible effects of overlap; and, at the time the coverage improvement follow-up operation was reintroduced, Bureau officials concluded that there was no significant risk to independence. The Chief of the Bureau’s Decennial Statistical Studies Division said that on the basis of his experience and understanding of prior Bureau research, the small risk of compromising independence was worth taking to reduce the risk of increased errors from delaying person interviewing until the Bureau completed coverage improvement follow-up. He and other headquarters officials we interviewed were unaware of any Bureau attempts to determine the extent of any possible interview overlap in 2000, which might demonstrate whether A.C.E. assumptions were operationally supported. The Bureau recently completed, as part of its Census 2000 evaluation program, a study intended to detect significant differences between the census responses in comparable A.C.E. and non-A.C.E. blocks. The study found no differences it deemed significant. The Bureau largely overcame significant challenges that could have undermined the person interviewing operation. Notably, the Bureau completed the person interviewing data collection on schedule and in accordance with its general guidelines for quality assurance coverage. 
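The automated start rules described earlier (person interviewing visits could begin in a local census office area at the earlier of 100 percent completion of nonresponse follow-up in that area, or 1 week after nonresponse follow-up reached 90 percent completion in all of that area's A.C.E. clusters) can be sketched as a simple check. This is a hypothetical illustration of the rule's logic only, not the Bureau's actual work management software:

```python
from datetime import date, timedelta

def may_start_person_interviewing(area_nrfu_complete_pct,
                                  cluster_90pct_date,
                                  today):
    """Return True if person interviewing field visits may begin.

    area_nrfu_complete_pct: percent of nonresponse follow-up (NRFU)
        completed in the local census office area
    cluster_90pct_date: date NRFU reached 90 percent completion in all
        A.C.E. clusters in the area, or None if not yet reached
    today: current date
    (All names are illustrative; the rule itself is from the report.)
    """
    if area_nrfu_complete_pct >= 100:
        return True  # condition (1): NRFU fully complete in the area
    if cluster_90pct_date is not None:
        # condition (2): 1 week has passed since 90 percent completion
        # in all A.C.E. clusters in the area
        return today >= cluster_90pct_date + timedelta(weeks=1)
    return False

# Hypothetical dates:
print(may_start_person_interviewing(100, None, date(2000, 6, 1)))             # True
print(may_start_person_interviewing(95, date(2000, 6, 1), date(2000, 6, 9)))  # True
print(may_start_person_interviewing(95, date(2000, 6, 1), date(2000, 6, 5)))  # False
```

As the report notes, no comparable rule governed other census follow-up operations, such as coverage improvement follow-up, which is why overlap with those operations remained possible.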
The Bureau also demonstrated its ability to overcome the limited technical challenges it confronted. Furthermore, the series of A.C.E. address operations, as designed, appeared to effectively remove nonexistent housing units and addresses from the person interviewing caseload, thus reducing an otherwise inefficient use of interviewing time and resources. Still, the Bureau’s experience in implementing person interviewing highlights areas where additional research might lead to improvements if the Bureau conducts a similar operation for the 2010 Census. For example, certain operational challenges may have contributed error to final A.C.E. estimates of census undercounts and overcounts. The Bureau experienced variation at the local level in how person interviewing was carried out, in terms of response rates, proxy rates, and partial interview rates. As we have reported before, if the local census office areas with the worst values of each of these measures have populations that are typically hard to count in the census, these segments of the population may be underrepresented in the A.C.E. data, possibly leading to inaccurate reflections of these population segments in A.C.E.-based adjustments. The Bureau plans to evaluate the relationship between operational measures, such as proxy rates, and how well A.C.E. data match to census data. The results of these evaluations, and others, will provide an important basis for planning an improved 2010 census and evaluation survey. Further, although Bureau data show that the person interviewing quality assurance program met its objectives, the program focused primarily on identifying falsification and reported failure rates based solely on cases believed to have been falsified. 
As the Bureau looks to improve its interviewing experience further, a broader definition of quality assurance failure to include interviews the Bureau reinterviewed and replaced for other reasons would provide a more complete measure of interviewing quality. Finally, the same controls and sharing of status information to ensure independence between the census nonresponse follow-up operation and A.C.E. person interviewing were not applied or did not take place with other census follow-up operations, thus increasing the risk of compromising the independence of A.C.E. A relatively small part of the census follow-up workload was not subject to control over its possible overlap with A.C.E. person interviewing and thus the magnitude of this influence may have been small nationally; however, it could potentially have been significant in some local areas. To that extent, the A.C.E assumptions may not apply equally in all areas or for all segments of the population, with possible adverse effects on the accuracy of A.C.E. calculations. Since the Bureau will likely use an evaluation survey in 2010, perhaps similar to A.C.E., it will be important for the Bureau to learn the lessons from the 2000 Census that can be incorporated into the planning for 2010. As the Bureau documents its lessons learned from the 2000 Decennial Census and as part of its planning efforts for 2010, we recommend that the Secretary of Commerce conduct research that determines the relationship, if any, between operational measures of person interviewing, such as proxy rates, and the accuracy of A.C.E. estimates of census undercounts as planned; determines how best to define, measure, and report interview quality failure rates that include interviews rejected for all reasons, and not just for a subset of reasons such as falsification; determines and documents the extent, if any, of the actual overlap between census follow-up operations and A.C.E. 
person interviewing in 2000; determines whether sufficient overlap may have occurred to violate the independence assumptions; determines whether increasing the flow of status data on specific decennial follow-up operations to the managers of independent surveys can help ensure the independence of such surveys, particularly when such operations are scheduled to overlap in the field; and determines what additional steps or controls to preserve the independence of census follow-up and person interviewing, if any, could be implemented for other census follow-up operations that collect enumeration data and are scheduled contemporaneously with person interviewing. The Secretary of Commerce forwarded written comments from the Bureau on a draft of this report. (See appendix I.) The Bureau provided minor technical corrections and additional information. The Bureau also offered clarification on some of our key points and recommendations, which we have reflected in this final report and comment on in more detail in appendix I. Regarding our finding that census follow-up operations overlapped with person interviewing in the field, the Bureau provided additional information on its decision to permit overlap between census coverage improvement follow-up and the A.C.E. person interviewing operations. We recognized this context, and revised the draft to better reflect it. Nevertheless, while the Bureau’s response explains why it did not add any controls or communications or change any procedures when it noticed that the follow-up workload increased over what was projected, as we note in the report, the increase in workload increased the risk that the independence assumption was violated. There may still be opportunities to implement steps in the future to help ensure the independence of such surveys. The Bureau commented that our conclusion linking variations in data quality to possible effects on the accuracy of A.C.E. results was unsubstantiated and suggested wording it as a question. 
In our draft report, we had raised the link as a possibility and then recommended that the relationship, if any, be determined between operational measures and the accuracy of A.C.E. estimates. We believe that this conclusion is logical given other Bureau reporting linking data quality measures such as missing data rates to possible errors in A.C.E. results. We have also reported on this issue in the past. The Bureau said that our conclusion that the influence of census and A.C.E. overlap may have been significant in some local areas was unsubstantiated. We were unable to conclude whether significant overlap had occurred or not. As we noted in our report, the Bureau officials we interviewed were unaware of any Bureau attempts to determine the extent of any possible interview overlap in 2000, which might have demonstrated whether A.C.E. assumptions were operationally supported in the field. Without such evidence regarding the extent of the overlap, and given the anecdotal evidence, which the Bureau cites in its response and we mentioned in the draft report, that some overlap did occur, we view the conclusion that the overlap may have been significant in some areas as appropriate. We revised the text to more clearly state, however, that the effect of the overlap is a potential one. In responding to our recommendations (1) to determine the relationships, if any, between operational measures and the accuracy of A.C.E. estimates, as planned, and (2) to determine and document the extent of overlap between census and A.C.E. in 2000, the Bureau acknowledged the importance of extensive evaluations of A.C.E., and referred to the evaluation it is undertaking. We look forward to reviewing this evaluation when it is complete. Since receiving comments from the Bureau, we added one additional recommendation. 
The basis for this new recommendation centered on our finding that the Bureau’s quality assurance program did not report fully on the percentage of the interview workload replaced by the quality assurance interviews. Although we recognize that the quality assurance program was designed primarily to detect falsification, the definition of quality assurance “failure” used by the Bureau excluded the sources of error other than falsification. After receiving the Bureau’s response, we discussed this with the Associate Director for Decennial at the Bureau, who concurred that the Bureau should consider a broader definition in the future. We have added an additional recommendation for executive action accordingly. In responding to our recommendation to determine whether sufficient overlap occurred to violate the independence assumptions, the Bureau referred to its recent evaluation of the possible contamination of census data collected in A.C.E. blocks, as well as several other similar studies throughout the decade. Some of these studies find weak or only limited indications of contamination of census data in prior censuses, and they all conclude that there was no systemic contamination of census data. The Bureau’s most recent evaluation, which is consistent with our recommendation, was released after our audit work was completed. We have revised the draft accordingly. We are sending copies of this letter to other interested congressional committees. Please contact me on (202) 512-6806 if you have any questions. Other key contributors to this report are included in appendix II. The following are GAO’s comments on the Department of Commerce’s letter dated October 5, 2001. The Bureau generally provided minor technical corrections and additional information. The Bureau also clarified some of our key points and recommendations, which we have reflected in this report and comment on further below. 1. 
The Bureau noted that figures used throughout the draft report appeared to be inconsistent. We met with Bureau officials and determined that the apparent discrepancies were due to several factors, including the following: (1) our data included Puerto Rico while Bureau data covered only the 50 states, (2) the Bureau initially miscounted the total number of local census offices in its comparison, (3) Bureau results include data from 1,004 quality assurance interviews that replaced the data for the initial interviews, and (4) an error exists in how the Bureau’s cost and progress data, upon which we relied, report the number of proxy interviews. In some cases, the Bureau provided us with additional data, and this final report reflects minor changes based on that new information. None of these data changes were significant enough to affect either our conclusions or recommendations. See also comment 13. 2. The Bureau suggested that additional detail be included in figure 1 to indicate that both housing unit matching and person matching operations comprised separate clerical and field follow-up components. We recognize the complexity of those matching operations and will be issuing a separate report on the person matching operation soon. However, to maintain clarity in the figure, we chose not to include such additional detail on the A.C.E. operations that were not the subject of this report. 3. Throughout its response the Bureau suggested various revisions, technical corrections, and clarifications. We revised the report accordingly. 4. The Bureau provided additional information on its decision to permit overlap between census coverage improvement follow-up and the A.C.E. person interviewing operations. 
The Bureau pointed out that (1) the decision was a conscious one, made in advance of person interviewing, (2) delaying person interviewing until after coverage improvement follow-up was completed in a local census office area would introduce considerable risk, (3) managing person interviewing on levels of geography below the local census office area would have been impossible, (4) A.C.E. designers believed that the number of coverage improvement cases in an area would not have an effect on A.C.E. data collection, (5) there was never any intent to have operational controls in place between these two operations, and (6) ad hoc procedures without prior headquarters approval were prohibited for A.C.E. We recognize this context, and revised the main body of the report to better reflect it. Nevertheless, while the Bureau’s response helps explain why it did not change its procedures or add any new controls or communications when it noticed the increase in the follow-up workload over what was projected, as we note in the report, the increase in workload increased the risk that the independence assumption was violated. There may still be opportunities to implement steps in the future to help ensure the independence of such surveys. 5. The Bureau noted that St. Louis, Missouri, should be deleted from the list of Census Bureau regional offices and replaced with Kansas City, Kansas. We revised the text accordingly. 6. The Bureau noted that factors other than the number of people entering their phone numbers on census forms could have accounted for the higher share of person interviewing completed by telephone than was expected. The Bureau suggested that the higher than expected mail return rate of census forms was also a likely factor in the higher telephone interview rate, since this also could have increased the pool of census forms possibly having telephone numbers recorded on them. 
Our explanation in the draft report was based on interviews with senior Bureau staff in the field division. However, we agree that the mail return also helps explain the higher telephone interviewing rate, and revised the draft accordingly. 7. The Bureau noted that proxy interviews are known indicators of insufficient data quality and not of fabrication, as our draft report had suggested. The Bureau noted that indicators of falsification included missing telephone numbers and work days with more than 13 cases. Our draft report was based on language in Bureau training documents for field managers; however, we revised the text to reflect the Bureau comment. 8. The Bureau objected to our comparison of suspected falsification rates in the 2000 Census to those obtained at the three 1998 Dress Rehearsal sites, since the three sites were not representative of the nation. While we agree that the sites are not representative of the nation, and we revised the report to clarify this, we believe that the Dress Rehearsal comparison can provide a reasonable benchmark for regions since, as the Bureau notes, none of the Dress Rehearsal sites were “exceptionally hard-to-enumerate,” and the Bureau believes that hard-to-enumerate areas tend to have higher rates of falsification. We revised the draft to note that half of the regions had falsification rates that fell into the low range of the Dress Rehearsal sites, even though each region contained hard-to-enumerate areas. 9. The Bureau commented that figure 4, which illustrates the regional rates of quality assurance coverage compared to regional rates of suspected falsifications, was “very misleading.” The Bureau said that the figure appeared to attempt to demonstrate whether supervisors were properly following up on cases suspected of falsification. That is not our intent, and our draft did not contain such an implication. 
We noted in our draft report that the percentage of the interview caseload selected for quality assurance review was expected to vary depending on a number of local circumstances. We reported data on falsification rates only as an example, and because they had been cited as a primary local circumstance during earlier interviews with Bureau staff. 10. The Bureau commented that its quality assurance program did in fact measure whether cases contained errors due to honest interviewer mistakes, poor respondent recall, or reasons other than falsification. The Bureau noted that for all replacement cases, it determined whether cases were falsified or fell into the other categories. We revised our report accordingly. However, the quality assurance failure rate that the Bureau calculated and reported includes only those interviews replaced for falsification. We recognize that the quality assurance program was designed primarily to detect falsification, but this definition of “failure” excludes the other sources of rejected interviews and thus understates the rate at which interviews failed to meet Bureau quality standards. After receiving the Bureau’s response, we discussed this with the Associate Director for Decennial at the Bureau, who concurred that the Bureau should consider the broader definition in the future. We have added an additional recommendation for executive action accordingly. 11. The Bureau commented that while imputation for missing data undoubtedly leads to some error, the Bureau had numerous studies over the years showing that its imputation procedures had acceptable error levels. While this explains why the presence of missing data in person interviewing should not by itself be alarming, it does not justify ignoring levels of missing data, the operational quality measures that contribute to missing data, or the methods chosen by the Bureau to deal with missing data. 
For example, the Bureau recently reported that a variety of alternative statistical models for dealing with missing data gave a wide range of results implying widely varying effects on A.C.E. estimates. The same report suggested that further research was needed to study these effects. The Bureau commented that local variability in data quality indicators is unavoidable given variations in local populations and localities. We agree and in our draft report noted that data quality did in fact vary during person interviewing. The Bureau commented that for most surveys, comparing quality indicators of certain regions would be of little value. The regional comparisons that we made in our report are across the 12 census regions. With the exception of our inclusion of Puerto Rico data with the Boston region data—the census region in which data collection in Puerto Rico was managed and under which census operational data was tabulated in data provided to us by the Bureau—the Bureau reported comparisons of data across the same regions, and of many of the same variables, in a technical memorandum it published in March 2001. The Bureau suggested that comparisons controlling for demography and geography would be more appropriate to assess the extent of local quality variability. Given the Bureau’s acknowledgment that quality variability is unavoidable and that our presentation of local census office area data corroborates that subregional variability exists, we believe that additional comparisons like those suggested by the Bureau are unnecessary to make the general point that variation exists. 12. The Bureau believed that our conclusion linking variations in data quality to possible effects on the accuracy of A.C.E. results was unsubstantiated and suggested wording it as a question. In our draft report we raised the link as a possibility and then recommended that the relationship, if any, be determined between operational measures and the accuracy of A.C.E. estimates. 
We believe that the conclusion is logical given our prior work linking data quality measures such as missing data rates to errors in A.C.E. results, and provided additional support for making this link. 13. The Bureau noted that there were 52 local census office areas that had to complete over 10 percent of their total caseload with proxy respondents, and not the 42 that we had reported in our draft report. Before receiving the Secretary’s response, the Bureau had provided us with additional data. Based on that later data, we counted 49 local census office areas that had to complete over 10 percent of their total caseload with proxy respondents, and we revised the draft text and related figure 5 accordingly. See also comment 1. 14. The Bureau said that our conclusion that the influence of census and A.C.E. overlap may have been significant in some local areas was unsubstantiated. As we noted in the draft report, the Bureau officials we interviewed were unaware of any Bureau attempts to determine the extent of any possible interview overlap in 2000. Such data, if available, might demonstrate whether A.C.E. assumptions were operationally supported in the field. We saw no data on the impact of the overlap that occurred, and we revised the text to state more clearly that the influence of the overlap was a potential one. See also comment 15. 15. The Bureau acknowledged the importance of extensive evaluations of the A.C.E., and referred to the evaluation it is undertaking. We look forward to reviewing this evaluation when it is complete. 16. The Bureau referred to its recent evaluation of the possible contamination of census data collected in A.C.E. blocks, as well as several other similar studies throughout the decade, and saw no need for further work. Some of these studies find weak or only limited indications of contamination of census data in prior censuses, and they all concluded that there was no systemic contamination of census data. 
The Bureau’s most recent evaluation, which is consistent with our recommendation, was released after our audit work was completed. We have revised the draft accordingly. 17. The Bureau said that it was reassessing its approach to coverage measurement. The Bureau gave assurance that it would appraise these recommendations with respect to the approaches under consideration. We look forward to reviewing this appraisal when it is complete. In addition to those named above, Ty Mitchell, Lynn Wasielewski, Angela Pun, Richard Hung, Janet Keller, Lara Carreon, and staff from our Denver, Los Angeles, Norfolk, and Seattle offices contributed to this report.

As part of its Accuracy and Coverage Evaluation (ACE), the U.S. Census Bureau interviewed people across the country to develop an estimate of the number of persons missed, counted more than once, or otherwise improperly counted in the 2000 census. In conducting the interviews, which took place in person or over the phone, Census faced several challenges, including (1) completing the operation on schedule, (2) ensuring data quality, (3) overcoming unexpected computer problems, (4) obtaining a quality address list, and (5) keeping the interviews independent of census follow-up operations to ensure unbiased estimates of census errors. The Bureau completed the interviews largely ahead of schedule. On the basis of the results of its quality assurance program, the Bureau assumes that about one-tenth of one percent of all cases nationally would have failed the program because they were believed to have been falsified. Early on, the Bureau dealt with an unexpected problem with its automated work management system, which allows supervisors to selectively reassign work among interviewers. According to Bureau officials, the Bureau addressed the underlying programming error within two weeks, and the operations proceeded on schedule.
The address list used for interviews had fewer nonexistent listings than did the lists used by the major census questionnaire delivery operations. An accurate address list is important to prevent unnecessary and costly efforts to locate nonexistent addresses. Although the Bureau implemented controls to keep the nonresponse operation separate from the interviews, the assumed independence of the census and ACE was put at risk because another follow-up operation intended to improve census coverage overlapped with the interviews.
The UI system is a federal-state partnership. Within overall federal guidelines, states operate their own UI programs, levy and collect their own payroll tax, and determine the level and duration of benefits and the conditions for benefit eligibility. However, the federal government, through the UI Service, a part of ETA, is responsible for maintaining the fiscal integrity of the system, including the individual state UI program trust funds. The UI Service provides information, guidance, and technical assistance to programs in the 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands. ETA officials reported that the UI Service also monitors state methods for implementing administrative procedures, comments on revised state procedures and revisions to the states’ Handbook of Operating Procedures, and reviews one-third of the states’ programs annually for compliance with program requirements for the payment of benefits to ex-service personnel and other federal employees. The UI system includes several programs that cover most public and private sector workers. The regular UI program provides for up to 26 weeks of benefits to qualifying unemployed private sector and state and local government employees. Under this program, each state maintains an account in the U.S. Treasury, which is funded by state payroll taxes on most private sector employers and by payments from state and local government and some nonprofit employers reimbursing the fund for benefits paid to former employees. The Unemployment Compensation for Federal Employees (UCFE) program is completely federally financed and provides benefits to qualifying unemployed civilian federal employees.
The Unemployment Compensation for Ex-Servicemen (UCX) program is also completely federally financed and provides benefits to service members after their discharge from active duty as well as to Reserve and National Guard personnel who have been on active duty for 90 days of continuous service before their release from active duty. The UCFE and UCX programs are administered by the states under agreements with the Secretary of Labor. Eligible claimants may receive benefits from any one or a combination of these funds on the basis of the nature of the claimant’s prior employment. Most states reduce or offset the weekly benefit amounts paid to a claimant by a percentage of any income earned by the claimant during that week of program-covered unemployment. The type and amount of income used to offset benefits varies by state but typically consists of wages from part-time employment, including income received from a claimant’s active participation in the Reserve. In 1994, 48 states, the District of Columbia, Puerto Rico, and the Virgin Islands offset UI benefits by a portion of the income claimants received from Reserve service. Under current state UI laws, only private employers—because they pay a tax on some fraction of each employee’s wages—are required to regularly report wage information on their employees to states. In addition, states can impose financial penalties on private employers who fail to comply with wage-reporting requirements. Federal, state, and local government and some nonprofit employers, however, are not assessed a payroll tax but instead reimburse state UI programs for any benefits disbursed to their former employees; unlike private employers, they are therefore not required to report employee wage information to the states and other jurisdictions that operate UI programs. Personnel and payroll data for the Reserve forces are maintained at six different locations.
Each of the four military services (Army, Navy, Air Force, and Marine Corps) collects and processes personnel and payroll information from its respective active and Reserve units at one of the four centers operated by the Defense Finance and Accounting Service (DFAS) throughout the nation; the Coast Guard maintains its data at the Department of Transportation’s United States Coast Guard (USCG) Pay and Personnel Center. Although the DFAS centers and the USCG Pay and Personnel Center are responsible for collecting and maintaining current data, aggregate historical payroll and personnel information for all services is maintained by a central repository at the Defense Manpower Data Center (DMDC) in Seaside, California. Our previous audits have repeatedly identified problems associated with Defense’s military pay systems. In particular, we have cited the payroll systems operated by the DFAS centers as inaccurate, unreliable, and duplicative, resulting in a waste of federal resources and an impairment of Defense operations. For example, we found that the DFAS payroll system for active duty Army military personnel did not accurately summarize and report payroll information. Thus, DFAS has been unable to ensure that it reports accurate information to the Army, much less to the Internal Revenue Service (IRS) and other federal and state agencies. Part of the problem may stem from each service maintaining a unique and independent payroll system. As a result of our earlier findings and recommendations in reports citing weaknesses in Defense’s financial management systems, DFAS has been tasked with integrating the individual services’ personnel and payroll systems into one standardized system. However, the Military Pay Directorate at DFAS stated that the Marine Corps is the only service that has made any progress toward this goal and that this integration effort will take several years to implement.
Furthermore, this effort may still not include the standardization of all the payroll and personnel systems once it is completed. Our analysis of Reserve payroll and UI benefit data for seven states that account for 27 percent of all Reserve personnel shows that UI claimants who have been active participants in the Reserve did not report over $7 million in fiscal year 1994 program-covered Reserve income. This nonreporting resulted in estimated UI benefit overpayments of $3.6 million to over 11,500 Reserve personnel during fiscal year 1994. Thirty-two percent of these overpayments, or $1.2 million, were federal trust fund losses, primarily from the UCX program. According to Labor officials, this suggests that ex-service personnel who were active in the Reserve accounted for a disproportionate amount of nonreported income detected compared with reservists separated from private or other public employers. Although we did not examine programs in the remaining states, the District of Columbia, Puerto Rico, and the Virgin Islands, there are several reasons to expect that the total federal trust fund losses are much higher than $1.2 million. First, these remaining programs cover about 73 percent of the nation’s reservists. Second, most of these programs treat Reserve income under their laws and program procedures in a manner similar to the seven states we reviewed. Third, we used an extremely conservative method for estimating the nonreporting of Reserve income and associated overpayments in the seven states. For example, we excluded from our analysis all reservists who reported any earnings. Thus, we did not estimate the nonreported income and benefit overpayments generated by claimants in the Reserve who may have declared income to their UI program from some source but who did not report all or part of their Reserve income (that is, declared income from their annual training period but not their weekend drills). For more information on our methodology, see appendix I.
Although the amount of nonreported income and associated trust fund losses may be small in comparison with the billions of dollars in total annual program benefits, the existence of overpayments is enough to raise concerns about the effectiveness of the fiscal control exercised over the UI system. According to state and federal program officials we interviewed, the integrity of the UI system is adversely affected whenever claimants are improperly paid benefits, either through oversight or fraud. These unnecessary payments erode the UI system’s ability to provide benefits to those workers who are unemployed through no fault of their own. They contribute, if only marginally, to higher state employer payroll taxes and federal outlays and possibly lower claimant benefit levels than would otherwise prevail. State officials cited various reasons why claimants may not be reporting their Reserve income while receiving UI benefits. According to state officials, the claimants may not understand the reporting responsibilities, are not specifically informed of these responsibilities, and may have incentives not to report all Reserve income—incentives that are amplified by the states’ limited ability to detect nonreporting. Many claimants may be unaware or have misconceptions about their UI program’s income-reporting requirements. Some federal and state UI program officials told us that many claimants believe that Reserve participation does not affect their ability to seek work or to fill their “regular hours of duty”—a key condition for UI benefit eligibility. Thus, they may believe that Reserve participation does not constitute employment with reportable income. For most claimants, income from Reserve service is often earned part-time on weekends, rather than through full-time employment Monday through Friday. 
Thus, reservists may also believe that their Reserve earnings, which are small relative to their primary employment earnings, are not regarded as reportable income. State program officials also believe that some participants view their Reserve service as an act of civic duty and patriotism rather than employment; thus, they do not consider compensation received from Reserve participation to be reportable income. DFAS officials noted that they could help notify Reserve personnel of their income-reporting responsibilities regarding state UI benefits by informing them of their duties in a note on their leave and earnings statements. Most UI programs throughout the nation require prospective claimants to report all expected earnings—including Reserve income—received during the benefit period as well as all earnings received during the base period. However, state program claims processors in the states included in our review told us that they do not specifically ask claimants whether they are receiving Reserve income and most do not inform claimants of the Reserve income-reporting requirement in writing. Federal and state program officials we interviewed believe that procedures in these seven UI programs were typical of the procedures and materials used in UI programs generally, with most programs not providing explicit information to claimants about their reporting responsibilities regarding Reserve income. None of the UI application forms in the seven states inquired about applicants’ receipt of Reserve income. For example, although the application forms for five of the seven states contained questions relating to military service, none included a question regarding Reserve income. California and Georgia forms asked if the prospective claimant had served in the Armed Forces during the past 18 months. Florida and Texas forms asked if the prospective claimant was in the military service.
The Massachusetts form asked if the prospective claimant was a veteran and asked about active duty service. Colorado and Pennsylvania forms did not contain any questions regarding military service. Though all seven states provided illustrations in their UI program brochures and handbooks of the type of earnings claimants must report, only two states—Colorado and Massachusetts—provided any material that explicitly mentioned that claimants must report Reserve income. In addition, the UI handbook, a key source of program information provided to all prospective claimants by the UI offices in all the states we reviewed, generally did not elaborate on the types of income that need to be reported. Only the Massachusetts handbook specifically addressed Reserve pay, stating that this income must be reported. To continue receiving benefits, claimants are routinely asked by state UI offices to recertify their unemployment status and to report any income they receive during their benefit period. None of the recertification forms for the seven states we visited specifically asks claimants whether they are receiving Reserve income. Some state UI officials explained that they do not believe the application forms are a reason for reservists not reporting income. They said that these application forms have been streamlined over the years and that they believe the current questions on active military service are sufficient to remind an applicant to report Reserve pay. However, most of them also agreed that their handbooks could be more specific in instructing the applicants to report Reserve income. ETA officials believed that including questions on the application forms that only refer to military service is not sufficient to identify Reserve income.
In particular, they were concerned that state UI workers may incorrectly assume that claimants include information on Reserve income when they answer the written application questions regarding active military duty and not pursue this issue further when initially screening prospective claimants. ETA officials were also concerned that because Reserve income is not derived from active military duty, applicants may knowingly or unknowingly evade disclosure of Reserve income when answering the questions on military service. ETA officials regarded the application procedures and guidance used in the seven states we visited as typical of most states and thought that most programs’ procedures should be more specific to elicit information about Reserve income. They believed that more states should list Reserve pay specifically as a type of income that needs to be reported at the time of application and that this income should be listed in the UI handbook provided to prospective claimants. In most states, income earned by UI claimants above a minimum level, including wages from Reserve service, offsets or reduces their weekly UI benefit amounts. Because claimants will receive reduced UI benefits by reporting Reserve income, there is an incentive not to report this income. For example, a claimant who is eligible to receive the maximum weekly benefit amount of $250 under the Florida UI program and who also receives $100 in weekend Reserve drill pay would see his or her weekly UI benefit reduced by $66 and receive $184 for that week. In addition, in Florida as in most states, a claimant participating for a full week of annual Reserve training would be completely ineligible for UI benefits during that week. Further, claimants appear to face little risk of detection if they do not report Reserve income. 
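The Florida offset arithmetic above follows a simple pattern: earnings above a fixed weekly disregard reduce the benefit dollar for dollar. A minimal sketch of that calculation; the $34 disregard is an assumption inferred from the $66 reduction in the example, not an official program parameter:

```python
# A minimal sketch of the benefit-offset arithmetic in the Florida
# example. The $34 weekly earnings disregard is an assumption inferred
# from the text ($100 in drill pay producing a $66 reduction), not an
# official program parameter.
def weekly_benefit(base_benefit: float, earnings: float,
                   disregard: float = 34.0) -> float:
    """Benefit payable for a week when earnings above a fixed
    disregard offset the benefit dollar for dollar."""
    offset = max(0.0, earnings - disregard)
    return max(0.0, base_benefit - offset)

# Example from the text: $250 maximum weekly benefit, $100 drill pay.
print(weekly_benefit(250, 100))  # 184.0
```

Under this sketch a full week of annual training pay large enough to offset the entire benefit would, as the text notes, leave the claimant ineligible for any benefit that week.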
Despite state penalties for fraud, including reduction or loss of benefits, state UI officials believed that in many states claimants face little risk of detection if they do not report Reserve income. To enforce such penalties, state UI programs must match reported claimant income with Reserve earnings. However, unlike private employers who must routinely report quarterly wage and employment information on all employees when they remit their payroll taxes, federal employers such as the Reserve have no such requirements because they reimburse states for UI benefits paid as they occur. Thus, although UI programs have on-line access to private sector employee wage data to verify benefit levels and duration, they have no comparable access to federal wage records. Without ongoing access to federal wage data for all reservists, states must conduct periodic matches of UI claimant data with Reserve personnel and payroll information to detect the nonreporting of income. In our discussions with state UI program officials we found that states face several obstacles to conducting effective matching operations. These obstacles include UI programs’ lack of awareness of the availability of automated Reserve personnel records, difficulties in obtaining Reserve payroll records from the DFAS centers, and limited assistance from Labor. DFAS’ inability to provide payroll information may actually reflect deficiencies in its automated payroll data system. Few states have attempted to detect the nonreporting of Reserve income in any systematic manner. Our discussions with federal and state UI program officials identified only three state programs—Colorado, Pennsylvania, and Texas—that have conducted such efforts in recent years and no program that has attempted to detect nonreported Reserve income on a routine basis. 
Two of the state programs that matched UI program and Reserve personnel and payroll data indicated that they were unaware of the availability of automated information from DMDC that could have expedited their effort. Without this information, these states resorted to slower, manual matches of information. Pennsylvania and Colorado—unaware that automated Reserve personnel rosters were available from DMDC—requested printed personnel rosters. State program analysts then manually matched Reserve personnel rosters with state UI claimant files—a time-consuming and labor-intensive procedure. (See app. II.) State officials also told us of their difficulties in obtaining payroll records from DFAS. For example, Colorado requested automated payroll records for all reservists from DFAS but never received them. The state then asked the individual DFAS centers responsible for Army and Air Force payment data for Reserve files; eventually the state received printed leave and earnings statements, which necessitated a manual file match. The Texas state UI program experienced similar difficulties. States’ experiences in working with DFAS in the past are similar to our own efforts to obtain Reserve payroll information to match fiscal year 1994 state UI claimant data with Reserve personnel and payroll records for seven states. Although the centers for the Coast Guard and the Marine Corps provided us with complete Reserve wage data within 1 month, we experienced great difficulty obtaining comparable data for the other Reserve services. It took almost 5 months before we received limited data on Naval Reserve personnel; the Cleveland DFAS center was unable to provide actual wages and instead provided us with the dates worked and the military pay grade for each reservist.
Because the Cleveland DFAS center did not provide actual wages earned, we had to reconstruct them, using pay charts that showed monthly wages by pay grade, to determine what reservists earned on a specific date. Despite three attempts over 8 months, DFAS was never able to provide us with automated wage information for the Army and Air Force—about 70 percent of total Reserve personnel. The Denver and Indianapolis DFAS centers together were unable to create for us a data tape linking the amount of wages earned by Army and Air Force reservists to any of the specific dates we requested. For this reason, we ultimately had to rely on data that the DFAS centers’ staff manually extracted from printed and microfiche payroll records. The Military Pay Directorate at DFAS stated that the lack of an integrated personnel and payroll system contributed to its difficulties in providing us accurate and timely wage data for the time periods and individual reservists we identified. According to DFAS officials, we did not receive payroll data for the Army and Air Force Reserve personnel requested because they were no longer available. Officials said that the centers typically maintain the payroll information we requested for about 400 days, after which time it is downloaded onto microfiche. Although we had requested our information well within the period during which the information was accessible, by the time DFAS said it was prepared to provide the data to us, officials stated that Reserve data files had already been downloaded. We ultimately obtained payroll data for a sample of matches, which DFAS center personnel constructed from microfiche and printed records. Labor has generally not facilitated the matching of Reserve and UI data and does not assist states in obtaining payroll data from the DFAS centers or DMDC.
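The wage reconstruction described above is essentially a table lookup: a pay chart maps a reservist's pay grade to monthly basic pay, from which wages for specific drill dates can be derived. A minimal sketch; the pay figures and the one-thirtieth-per-drill-period convention are illustrative assumptions, not actual 1994 pay-table values:

```python
# Illustrative pay chart: monthly basic pay by pay grade.
# These figures are placeholders, not actual 1994 military pay rates.
MONTHLY_BASIC_PAY = {"E-4": 1200.00, "E-5": 1400.00, "O-2": 2100.00}

def reconstructed_drill_pay(grade: str, drill_periods: int) -> float:
    """Estimate drill wages from pay grade alone, assuming each drill
    period is paid at 1/30 of monthly basic pay (an assumed convention
    for this sketch)."""
    return round(MONTHLY_BASIC_PAY[grade] / 30 * drill_periods, 2)

# A typical drill weekend is assumed here to be four paid drill periods.
print(reconstructed_drill_pay("E-4", 4))  # 160.0
```

This is the kind of calculation we performed for the Naval Reserve data, where only dates worked and pay grades were available.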
ETA officials told us that they have not received requests from states to assist in matching Reserve income and UI benefit data and they have concentrated on providing assistance in other areas, such as compliance with federal requirements regarding the payment of UCFE claims. However, they acknowledged that this is an important area and one where they could provide additional assistance to the state programs. Although the three states conducting matches were ultimately able to detect nonreported Reserve income and UI benefit overpayments, the difficulties they encountered in doing so led to their decision to discontinue these efforts. Although none of the officials we talked to in these states had plans to continue matching, they said they would likely reconsider their decision if either better guidance and assistance from Labor or increased responsiveness from DFAS was forthcoming. State and federal UI program officials and Defense officials suggested several options to reduce the nonreporting of Reserve income, which could prevent future trust fund losses. These options focus on more effective ways to inform claimants about their reporting responsibilities and proposals to improve the detection of nonreported income. Most federal and state program officials believe that these options could be implemented administratively. Defense, Labor, and state UI officials suggested several ways to improve claimant awareness of their responsibility to report Reserve-related income. For example, DFAS officials suggested that their agency could help notify Reserve personnel of their income-reporting responsibilities administratively by informing them of their duties in a note on their leave and earnings statements. Labor and state UI officials mentioned that programs could improve claimants’ awareness of their reporting responsibilities by revising their application forms and handbooks to specify clearly their programs’ treatment of Reserve pay. 
State UI program officials generally agree that acquiring access to Reserve personnel and payroll data could facilitate the detection of nonreported Reserve income, although they identified a variety of suggestions on the best way to obtain such access. For example, some officials suggested that states obtain automated records of Reserve personnel and payroll data annually from DFAS to enable matching on a regular basis. Other officials believe that such regular data access is unnecessary as long as the appropriate Defense agencies—DMDC and DFAS—respond quickly when the states request wage information. The most frequent alternative suggested by federal and state officials is to require Defense to report Reserve payroll and personnel data to states on a quarterly basis, as private sector employers are required to do, to permit verification of claimant income on a regular basis. Officials agree that this change could be implemented as an administrative action; no legislative change would be needed. Some Labor officials believe that providing states with wage records should be a requirement for all federal employers. They believe that the nonreporting of program-covered income by federal employees generally is a far greater and more serious problem than nonreporting by reservists alone. Thus, state access to federal wage and personnel information could significantly reduce the amount of nonreported income and the associated benefit overpayments by claimants separated from any federal employer. Almost all the Department of Defense officials we interviewed, including those representing the various Reserve components, do not believe that the reporting of Reserve wage income for UI benefit computation purposes would have a detrimental effect on their ability to recruit and retain effective Reserve forces. However, they also prefer that states exempt Reserve income from any UI offset requirements. 
Despite the revenue loss to state UI programs, they believe that reservists should not be penalized through the reduction of UI benefits paid for an otherwise legitimate claim, because reservists are performing an important national public service. They note that although most states have not exempted Reserve income in calculating UI benefits, some states have; Colorado, for instance, has exempted Reserve weekend drill and annual training income from state UI program offset provisions. The nonreporting of Reserve income results in the annual loss of millions of dollars in state and federal UI benefit overpayments. Some nonreporting is attributed to claimants being unaware of their reporting responsibilities. To better inform claimants of their reporting responsibilities, we recommend that the Secretary of Defense direct the four DFAS centers to notify all reservists of their income-reporting responsibilities with respect to state UI benefits in a message included on their leave and earnings statement. We also recommend that the Secretary of Transportation direct the USCG Pay and Personnel Center to notify all reservists of their income-reporting responsibilities with respect to state UI benefits in a message on their leave and earnings statement. ETA, in meeting its oversight responsibility for the financial integrity of the UI system and in providing guidance and technical assistance to the state UI programs to enhance their operations, can also help to improve compliance with state income-reporting requirements. In particular, we recommend that the Secretary of Labor direct ETA’s UI Service to provide assistance and encourage state UI programs to review the administrative forms or procedures used to gather information about a prospective or continuing claimant’s wages, making revisions as necessary to clearly identify to claimants the types of Reserve income they must report for the offset of benefits.
To reduce income nonreporting and the associated benefit overpayments effectively, states also need better and more timely access to Reserve payroll and personnel data. Obtaining such data could help detect nonreported Reserve income. In addition, Transportation, Defense, and state UI program officials believe that providing states with Reserve data would have little or no impact on service retention rates. We recognize that to be successful in this effort Defense agencies must be able to provide accurate payroll and personnel information in a timely fashion. For this reason, we recommend that the Secretary of Defense direct the DMDC and the four DFAS centers to develop a process for giving states Reserve personnel and payroll data in a timely, economical, and efficient manner. In doing so, they should coordinate with Labor’s UI Service to identify states’ needs. In addition, we recommend that the Secretary of Transportation direct the USCG Pay and Personnel Center to develop a process for giving states Coast Guard Reserve personnel and payroll data in a timely, economical, and efficient manner. In doing so, it should coordinate with Defense’s DMDC and with Labor’s UI Service to identify states’ needs. We obtained comments on our draft report from the Departments of Labor, Defense, and Transportation. Labor and Defense provided written comments, which appear in appendixes III and IV. Labor generally agreed with the information provided in the report and noted that it was already taking steps to implement our recommendation to assist states in their review of procedures identifying prospective or continuing claimant’s wages. The Assistant Secretary of Labor also provided technical comments that have been incorporated into the background section of this report. The Department of Labor did question how improper benefit payments could contribute to higher state employer payroll taxes and possibly lower claimant benefit levels. 
Although we stated that such effects would be quite marginal, to the extent that overpayments were reduced, state legislatures could choose to use those savings to incrementally raise claimant benefit levels or reduce employer taxes. The Department of Defense also generally concurred with the findings and agreed to take actions to implement our recommendations and provided methodologies and completion dates for accomplishing these actions. Department of Transportation officials representing the Coast Guard did not take issue with the overall findings of the report. The Transportation Program Manager for the USCG Pay and Personnel System and other officials agreed that some steps could be taken to assist states in detecting overpayments to reservists. They stressed, however, that actions taken in response to the recommendations should be cost-effective. This concern is addressed in our recommendation to the Secretary of Transportation to direct the USCG Pay and Personnel Center to develop a process for giving states Coast Guard Reserve personnel and payroll data in a timely, economical, and efficient manner. Transportation’s suggested approach could address the concerns raised by our recommendations. Transportation officials also indicated that on the basis of the data presented in the draft of our report, it could be inferred that no UI reporting concerns were identified for about 95 percent of all reservists. Officials suggested that the report could explicitly mention the cooperation of the vast majority of reservists with the UI program. One Transportation official also stated that the report does not offer a basis of comparison for the unfamiliar reader to evaluate and understand the relative significance of the issues identified. For example, he stated that the data presented indicate that the UI overpayments of $3.6 million identified in our sample were spread over about 275,000 reservists in the seven states we reviewed. 
As a result, the average overpayment per reservist is about $13 per year. A similar calculation for the federal share of UI overpayments results in an average share of about $4.25 per reservist per year. He stated that these calculations are not to trivialize the significance of the overpayments, but rather to provide perspective that could be useful in identifying appropriate remedial actions. Regarding reservists’ compliance with UI program reporting responsibilities, we focused only on those reservists receiving UI benefits who did not report any Reserve income. We did not analyze those reservists receiving UI benefits who only partially reported Reserve income or those in full compliance with income-reporting requirements. Consequently, although it is likely that most reservists are in compliance, our data do not permit us to say that no UI reporting concerns would be identified for the remaining reservists. Finally, we believe our findings of $7 million in nonreported income and $3.6 million in overpayments represent absolute amounts and actually understate the loss because the seven states account for only 27 percent of all reservists. Also, we do not believe that the average overpayment per reservist is a meaningful statistic for assessing the significance of the problem. The existence of overpayments is enough to raise concern about the effectiveness of the fiscal control exercised over the UI program. Failure to rectify the problem erodes the integrity of the UI program, and it is important that action be taken to correct it. We are sending copies of this report to the Secretaries of Labor, Transportation, and Defense and to UI program directors in California, Colorado, Florida, Georgia, Massachusetts, Pennsylvania, and Texas. Major contributors to this report are listed in appendix V. If you have any questions concerning this report, I can be reached at (202) 512-7014.
We matched fiscal year 1994 UI claimant data from seven selected states with Reserve force payroll and personnel data to estimate the amount of nonreported Reserve income and benefit overpayments and associated losses to the federal and state UI trust funds. Although we did not independently verify the accuracy of the data provided to us by DMDC, the DFAS centers, the USCG Pay and Personnel Center, or the seven state UI programs, we believe that this does not affect our results. The data sources we used were the only ones available, and state UI programs would rely on these data sources to calculate any benefit overpayments. We obtained data on all persons who received some UI benefit payment (regular, UCX, or UCFE) between October 1, 1993, and September 30, 1994, from seven state UI programs—California, Colorado, Florida, Georgia, Massachusetts, Pennsylvania, and Texas. We judgmentally selected these states using a variety of characteristics, including a high unemployment rate during fiscal year 1994 (California), a large number of reservists (Georgia, Texas, Pennsylvania, and Florida), previous experience with matching UI claimants and Reserve force data (Colorado, Pennsylvania, and Texas), and geographic balance. We requested most of the states’ UI claimant tape files before our site visits to these states. To assist them in their preparation of these tapes, we held discussions with state officials and provided them a structured protocol listing the types of data and the magnetic tape format they would need to provide us. The tapes we received contained the names of UI claimants, their social security numbers, and other information related to the benefits paid. We created tapes containing only the social security number extracted from the original state data tapes.
Consistent with the requirements of the Computer Matching and Privacy Protection Act of 1988, we sent the tapes to and worked with officials from DMDC in Seaside, California, to match the seven state data tapes of UI claimants with Defense’s Reserve personnel and payroll records. DMDC agreed to match these tapes (with our on-site supervision) with Reserve personnel employment data to identify UI claimants who were in the Reserve. Before the visit, we worked with DMDC technicians to coordinate tape formats between DMDC and the seven state programs. We developed a computer program that would identify those persons who received UI benefits for the same period as they received Reserve pay but who did not declare such income for benefit offset. Once matches were identified, DMDC segregated the data and placed the information on tapes according to the military service branch in which reservists were employed. Because DMDC does not maintain payroll data showing the dates for which payments were earned, it sent the tapes to the payroll centers responsible for each matched reservist’s service branch to facilitate our match. In total, we requested payroll information on the number of personnel in each Reserve component from the payroll centers listed in table I.1. Most of the state UI programs were able to provide us with the tapes within a month, and DMDC was able to perform the matches within a few weeks after our request. In helping us match fiscal year 1994 state UI claimant data with Reserve personnel and payroll for seven states, only the USCG Pay and Personnel Center and the Kansas City DFAS center were able to provide complete automated information for the Coast Guard and the Marine Corps components, respectively. However, it took almost 5 months for the Cleveland DFAS center to respond to our request for fiscal year 1994 Naval Reserve personnel payroll data.
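The matching logic described above amounts to a join on social security number between UI benefit weeks and Reserve pay periods, flagging claimants who declared no income at all. A minimal sketch follows; the record layouts, field names, and sample values are illustrative assumptions, not the actual DMDC or state tape formats.

```python
from datetime import date

# Illustrative records (the actual state and DMDC tape layouts differed).
ui_claims = [
    # (ssn, benefit_week_start, benefit_week_end, income_reported_by_claimant)
    ("111-22-3333", date(1994, 3, 7), date(1994, 3, 13), 0.0),
    ("444-55-6666", date(1994, 3, 7), date(1994, 3, 13), 120.0),
]
reserve_pay = [
    # (ssn, pay_period_start, pay_period_end, reserve_wages)
    ("111-22-3333", date(1994, 3, 12), date(1994, 3, 13), 160.0),
]

def overlaps(a_start, a_end, b_start, b_end):
    """True when two date ranges share at least one day."""
    return a_start <= b_end and b_start <= a_end

def flag_nonreporters(claims, pay):
    """Identify claimants paid UI benefits for a period in which they also
    earned Reserve pay but declared no income for benefit offset."""
    flagged = []
    for ssn, w_start, w_end, reported in claims:
        if reported > 0:  # partial reporters were excluded from the analysis
            continue
        for p_ssn, p_start, p_end, wages in pay:
            if p_ssn == ssn and overlaps(w_start, w_end, p_start, p_end):
                flagged.append((ssn, w_start, wages))
    return flagged

print(flag_nonreporters(ui_claims, reserve_pay))
```

Note that, as in the methodology, claimants who reported any earnings are skipped entirely, which makes the match conservative: partial nonreporting goes undetected.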
Then, the DFAS center was unable to provide the actual wages for the Naval Reserve personnel and instead provided pay scales that we had to convert to wage amounts. Despite three attempts over an 8-month period, the Denver and Indianapolis DFAS centers representing the Air Force and Army Reserve components were never able to provide us with accurate and complete payroll information on personnel matched. Thus, to complete our assignment we developed an alternative methodology relying on a sample of these Reserve components and based on data that DFAS center staff manually extracted from printed and microfiche payroll records. To estimate the amount of nonreported income and benefit overpayments to UI claimants in the Army and Air Force Reserve components, we selected a random sample of matched personnel—UI claimants from each of the seven states who also were employed by the Army or Air Force Reserve during the same period of time. We then provided the social security numbers and dates of wages received for each of these reservists to the DFAS centers that manually reconstructed, by searching microfiche, the corresponding payroll information. We appended the associated DFAS or USCG centers’ payroll data to each matched UI claimant (a particular Reserve member identified as receiving both UI benefits and Reserve income for a certain time period). We then applied the appropriate state UI program offset provision to calculate the amount of nonreported program-covered income and, using the claimant’s eligible weekly benefit amount, we estimated the magnitude of the overpayment. Through this process, we estimated the cumulative amount of nonreported Reserve income and UI benefit overpayments for fiscal year 1994. From our analyses, we estimated that the seven states we reviewed made millions of dollars in UI overpayments to UI claimants who were active Reserve participants. Table I.2 shows the breakdown of nonreported income and overpayments by branch of service. 
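Because Army and Air Force payroll had to be reconstructed manually for a random sample of matched claimants, the cumulative figures were estimated by expanding sample results to the matched population. The sketch below shows that kind of simple expansion estimate; the sample size, average, and population count are made-up illustrations, not the report’s data.

```python
def expand_estimate(sample_values, population_size):
    """Expand a simple random sample to a population estimate:
    the sample mean times the number of matched claimants."""
    sample_mean = sum(sample_values) / len(sample_values)
    return sample_mean * population_size

# Hypothetical: 50 sampled overpayments averaging $400, expanded to
# 1,000 matched Army/Air Force reservists.
sample = [400.0] * 50
print(expand_estimate(sample, 1000))  # 400000.0
```

A real estimate of this form carries a sampling error, which is why the branch-level results are reported with associated error ranges.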
Table I.2 (not reproduced here) breaks out nonreported income, overpayments (total UI benefits), federal losses (UCFE, UCX, and EB), and state losses (UI) for the Army National Guard and Army Reserve and for the Air National Guard and Air Force Reserve. Sampling errors at the confidence level used range from ±16.7 percent for federal losses due to Army overpayments to ±5.2 percent for federal losses due to Air Force overpayments. Although we did not examine programs for the remaining states, the District of Columbia, Puerto Rico, and the U.S. territories, there are several reasons to expect that the total federal trust fund losses are much higher than $1.2 million. First, these remaining programs cover 73 percent of the nation’s reservists. Most of these programs treat Reserve income under their laws and program procedures in a manner similar to the seven states we reviewed. In addition, we used an extremely conservative method for estimating the nonreporting of Reserve income and associated overpayments in our seven states. In calculating the amounts of nonreported earnings and UI overpayments, we excluded from our analysis all Reserve claimants who reported any earnings. About one-third of the reservists receiving UI during fiscal year 1994 reported some earnings. However, we were unable to determine if these earnings were Reserve pay or other types of income because state UI program data files do not include the source of income listed on the application form. Thus, we did not estimate the nonreported income and associated benefit overpayments generated by claimants in the Reserve who may have declared income to their UI program from some source but who did not report all or part of their Reserve income (that is, declared income from their annual training period but not their weekend drills). To identify explanations for the nonreporting of Reserve income and possible options to enhance reporting, we spoke with UI officials from seven states: California, Colorado, Florida, Georgia, Massachusetts, Pennsylvania, and Texas.
These officials include UI program directors and administrators, benefit payment and quality control unit staff, application clerks and reviewers, and computer staff who conduct matches of UI program information with claimant income and other data. In addition, state officials provided us with copies of their programs’ benefit applications and continued claims forms as well as handbooks and other publications used to explain program eligibility and benefit payment requirements. Reservists receive pay for several types of activities, including monthly weekend drill sessions and a 2-week annual training session. In addition, reservists can be activated for indefinite periods of time during a designated national crisis or domestic emergency. States offset UI weekly benefit amounts by certain types of income earned, including Reserve wages earned during the period of benefit receipt. Of the UI programs in the 50 states, the District of Columbia, the Virgin Islands, and Puerto Rico, only Oregon and Maine completely exclude all Reserve wages from benefit computation. State program requirements, including those of the seven states we visited, vary in the type of Reserve pay claimants must report and in the formula used to offset this income against weekly benefit payments (see table II.1). Although all seven states we visited offset weekly benefit amounts by a claimant’s earnings for the 2-week annual training session, only five states offset benefits for income earned from the monthly weekend drill sessions. For instance, although California requires that claimants report all income received from Reserve participation, the claimant’s benefit amount is not offset by income from monthly weekend sessions, according to state officials. States also allow claimants to earn a certain amount of income from part-time employment before reducing their UI benefits, disregarding certain amounts of part-time income in their offsetting of benefits.
The exact amount varies by state; California, for example, disregards $25 of the first $100 per week of income earned and excludes 25 percent of earnings above $100 per week. Thus, a California UI claimant who received weekly part-time earnings of $150 would have $38 disregarded from any offset to his or her UI benefits (that is, $25 of the first $100, plus 25 percent of the remaining $50, or $12.50, rounded to $13, for a total disregard of $38); his or her benefits would then be reduced by $112 ($150 - $38). Officials from all seven state UI programs and the Department of Labor told us that the nonreporting of claimant income from Reserve participation was a serious problem, even though they did not know the magnitude of the total dollars involved. These officials believe that they are responsible for preventing improper payments of UI benefits to claimants who are ineligible for such benefits or to claimants whose benefits should be reduced because of nonreported earnings. Accordingly, they believe that the integrity of the UI program is adversely affected when claimants receive improper benefits. These officials view their efforts to detect such overpayments as a means to deter future program abuses. State UI officials identified several explanations for the nonreporting of Reserve income and subsequent benefit overpayment. First, many claimants may not understand their responsibility to report Reserve income. Second, state efforts to inform claimants about UI program income-reporting requirements may be inadequate. Third, claimants may have incentives not to report all Reserve income. State officials also acknowledged that many UI programs could take additional steps to ensure that claimants are aware of program requirements regarding Reserve earnings. Most recognize that they have an opportunity to inform claimants of reporting responsibilities during the initial benefit application and the weekly or biweekly claim recertification.
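The California disregard arithmetic in the example above can be written out directly. This sketch assumes, as the worked example implies, that the percentage portion is rounded half up to whole dollars; it is an illustration of the formula as described, not California’s actual benefit software.

```python
def ca_disregard(weekly_earnings):
    """$25 of the first $100 of weekly earnings is disregarded, plus
    25 percent of earnings above $100 (rounded half up to dollars)."""
    base = min(weekly_earnings, 25.0)
    excess = max(weekly_earnings - 100.0, 0.0)
    return base + int(0.25 * excess + 0.5)  # explicit half-up rounding

def benefit_reduction(weekly_earnings):
    """Weekly benefits are reduced by earnings minus the disregard."""
    return weekly_earnings - ca_disregard(weekly_earnings)

print(ca_disregard(150.0))       # 38.0  ($25 + $13)
print(benefit_reduction(150.0))  # 112.0 ($150 - $38)
```

The explicit half-up rounding matters: Python’s built-in `round` uses round-half-to-even and would turn $12.50 into $12 rather than the $13 in the report’s example.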
When applying for benefits, most applicants are required by states to report all expected earnings, including Reserve wages, to be received during the benefit period. However, during the application interview, state program officials generally do not ask claimants whether they are receiving income from Reserve participation. Also, because such wages are not included in any determination of an initial weekly benefit amount, applicants may leave UI interviews believing that they do not have to report Reserve income during subsequent periods for which they will be receiving weekly benefits. Nevertheless, only two states—Colorado and Massachusetts—explicitly asked that Reserve income be reported. In the remaining states we visited, no mention was made on the application form, recertification notice, handbook, or any of the other information or guidance given to the claimant of the requirement or need to report Reserve income. UI program officials also noted the financial incentives for claimants not to report any Reserve income, because such reporting will reduce weekly UI benefits in most UI programs. In cases where UI claimants deliberately fail to comply with state reporting requirements, states may invoke fraud statutes that allow them to attach financial penalties to money owed them. (See table II.2.) Once states determine that claimants have been overpaid as a result of their nonreporting of income, most states can withhold a portion or all of any future benefits owed the claimants until the overpayments have been paid back. Besides reclaiming overpayments from future benefits, one state—California—attaches a 30-percent penalty to fraudulent overpayments, and claimants are expected to pay this penalty in cash. Another state—Texas—discontinues benefit payments to claimants for the remainder of the benefit period and can disqualify claimants for benefits for up to 52 weeks after discovering that an overpayment has been made due to fraud. 
Despite penalties for nonreporting of income, state UI program officials and Labor officials reported that enforcement was difficult because Reserve wage and employment information is not readily available to states for use in verifying claimants’ earnings. Unlike private employers, who must routinely report quarterly wage and employment information for all employees (thus permitting states to determine whether claimants are accurately reporting their employment status and wages for UI benefit calculation), federal employers, including the Department of Defense and civilian federal employers, are not required to do so. Without this information, states are unable to identify income being received from a federal employer when claimants apply for or receive UI benefits. In an effort to reduce overpayments and identify program fraud and abuse, all seven states we visited matched UI program data with other sources of claimant earnings information. For example, they all conducted computer matches of UI claimant files with state wage record files submitted by private employers. In addition, several states, including Florida, Texas, and Pennsylvania, conduct matches targeted at certain claimant groups. For example, Texas matches UI program data with private sector wage information on longshore workers and on employees of large manufacturers that periodically initiate large layoffs. States also conduct quality assurance reviews to validate the continued eligibility of UI claimants generally. However, according to state quality assurance officials, such reviews are unable to identify the nonreporting of Reserve income unless claimants have already listed this type of employment on their initial applications or continued claims processing forms. State program officials told us that matches are an essential internal control mechanism for maintaining the financial integrity of their programs.
Although most of the state officials we interviewed said that they have not conducted explicit cost-benefit analyses of their matching efforts, they viewed these efforts as an effective tool to deter future program abuse. However, UI officials from California reported that the revenue recovered from overpayments identified by their automated matches was greater than the cost of detecting and recovering these overpayments. Most program officials from other states told us that the costs of conducting automated matches themselves were fairly small, though the costs of actually recovering overpayments were much higher. Three of the seven states we visited—Colorado, Pennsylvania, and Texas—have conducted matches to identify the nonreporting of Reserve income received by UI claimants. Each of the states had initiated the matches after receiving reports of UI claimants not reporting their Reserve income. However, the UI programs’ lack of awareness of the availability of automated Reserve personnel records, difficulties in obtaining Reserve payroll records from DFAS, and limited assistance from Labor hampered the progress of each match. By June 1995, all three states had discontinued their matching efforts before they were completed. Two of the state programs we visited that matched UI program and Reserve personnel and payroll data indicated that they were unaware of the availability of automated information from DMDC that could have expedited their efforts. Although Colorado and Pennsylvania both determined that DMDC had records of Reserve personnel, neither state was aware it could obtain this information in automated form, which would have expedited the matching process considerably. For example, after obtaining printed Reserve personnel listings containing over 13,000 names, Colorado employed about six full-time staff to match those names with its UI claimants list. The manual matching continued for more than a year before the state closed the project. 
The Texas state UI program, after some delay, received automated personnel lists from DFAS through the state’s National Guard unit. However, because it did not believe that the state of Texas had authority to request such information, DFAS would not release the payroll information on Reserve personnel that the UI program needed for matching. Because of the lack of payroll information, Texas corresponded with the individual reservists receiving UI benefits and requested their voluntary reporting of Reserve income. In many cases, Texas sent several requests before claimants provided the appropriate wage information, and because the Texas UI program was unable to verify the information, the accuracy of the responses and the results of the match were questionable. State officials told us that they have also received little assistance from Labor’s ETA with their efforts to identify nonreported Reserve income. Texas officials did report that ETA regional officials successfully interceded with the Texas state National Guard in obtaining automated Reserve personnel records. However, while ETA hired a contractor to develop a technical assistance guide to help state benefit payment control units develop matching techniques, the contractor provided an inadequate description of how these units could identify the nonreported Reserve income. ETA officials, on the other hand, claim that they have not generally received any requests from states asking for their assistance in conducting matches; the agency has concentrated on providing assistance in other areas, such as compliance with federal requirements regarding the payment of UCFE claims. Officials from Pennsylvania and Texas have reported that they would not initiate future matches without access to automated Reserve personnel and payroll records and assistance from Labor.
Because each state discontinued its matching efforts before completion, the amounts of nonreported Reserve income and associated benefit overpayments they detected were incomplete. Nevertheless, each state identified overpayments. For example, state officials reported that Colorado identified over $280,000 in benefit overpayments on the basis of about 200 cases with nonreported Reserve income; Pennsylvania projected about $96,000 in overpayments for 339 cases; and Texas detected $124,000 for 416 cases. The Texas and Pennsylvania overpayment totals were derived from information self-reported by claimants, which likely understated benefit overpayments. Started in August 1993, Colorado’s matching effort initially included all Reserve components and was later narrowed to the Army and Air Force National Guard. Unaware of DMDC’s automated records, Colorado asked DMDC for a printed roster of Reserve personnel and conducted a manual match of the Reserve personnel roster with state UI records. DFAS failed to provide Colorado with payroll records on matched personnel, requiring data requests to the individual DFAS centers. The state manually matched each UI case file against the Reserve payroll record, a time-consuming, labor-intensive effort. Colorado identified overpayment cases and initiated recovery actions. The state stopped its matching effort in May 1995 after passing legislation to eliminate inclusion of Reserve income in the offset of UI benefits. Initiated in May 1994, Pennsylvania’s matching effort included all Reserve components. The state learned about the matching procedure from Colorado officials. Like Colorado, it requested a printed roster of Reserve personnel from DMDC instead of automated files. Pennsylvania conducted a time-consuming manual match of the Reserve personnel roster to state UI records. It did not attempt to obtain payroll records from DFAS.
As of May 1995, only one of its eight regional offices had completed its time-consuming investigations, and the state suspended the initiative in June 1995. Initiated in February 1990, Texas’ matching effort included the Air Force and Army National Guard. Texas asked the state National Guard to coordinate personnel information collection from DMDC, which did so. Labor’s ETA facilitated procurement of automated Reserve personnel records for the state’s Air Force and Army National Guard. Texas conducted automated matches of its UI claimant files with Reserve personnel records. DFAS was unwilling to provide Texas with automated or printed payroll records for matched files. Consequently, Texas relied on personal UI claimant responses for the verification of Reserve income. It has not compiled complete results and has no plans to do so or to conduct future matches. Although all state program officials identified better monitoring and matching of claimants’ earnings as a solution, such efforts have been seriously hindered, they told us, by a lack of automated payroll and personnel information on reservists who receive UI benefits. In addition to those named above, the following individuals made important contributions to this report: J. William Hansbury, Steven R. Machlin, Lori Rectanus, and Carol L. Patey.
Pursuant to a congressional request, GAO determined the amount of unemployment insurance (UI) paid to military reservists, focusing on: (1) why UI claimants do not report reserve income; (2) the administrative and legislative options available to prevent future trust fund losses; and (3) how these options will affect reservists’ retention rates. GAO found that: (1) active UI claimants did not report more than $7 million in reserve income for fiscal year 1994; (2) the average amount of nonreported income varied from $273 to $959 per claimant and resulted in UI benefit overpayments of $3.6 million; (3) most UI benefit overpayments went to Army Reserve personnel; (4) federal trust fund losses from the Unemployment Compensation for Ex-Servicemen Program totaled $1.2 million; (5) the UI system paid over $25 billion in benefits and received over $26 billion in state and federal unemployment tax revenues; (6) the integrity of the UI system is adversely affected by improperly paid benefits; (7) these overpayments hinder the UI system’s ability to provide unemployment benefits, contribute to high state employer payroll taxes and federal outlays, and lower claimants’ benefit levels; (8) UI claimants do not report their reserve income because they do not understand the reporting requirements, receive improper information regarding their reporting responsibilities, and have incentives not to report reserve income; (9) claimants are rarely penalized for not reporting their reserve income; (10) states can withhold a portion of a reservist’s future benefits until applicable overpayments are repaid; (11) it is difficult to verify reservists’ benefit levels without online access to federal wage data; and (12) nonreporting of reserve wage income will not affect the military’s retention rates.
Historically, most farm programs have been implemented at the county office level. The current county-based delivery structure originated in the 1930s, when the first agricultural acts established farm support programs. At that time, more than one-fourth of all Americans engaged in farming, and the lack of an extensive communication and transportation network limited the geographic boundaries that could be effectively served by a single field office. In addition, most farm programs required farmers to visit the local office to learn about and sign up for these programs. FSA staff assisted farmers in completing the administrative requirements, including the necessary paperwork, associated with the programs. Over the last 60 years, the number of farms in the United States has declined significantly, as has the number of people engaged in farming. Improvements in communication and transportation in rural areas have mitigated some of the problems associated with large distances between farmers and program resources. Additionally, two recent legislative changes have significantly affected USDA’s delivery of farm programs. The Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354, Oct. 13, 1994) directed the Secretary of Agriculture to streamline and reorganize USDA to achieve greater efficiency, effectiveness, and economies in its organization and management of programs and activities. In addition, the Federal Agriculture Improvement and Reform Act of 1996 (P.L. 104-127, Apr. 4, 1996) fundamentally changed the federal government’s role in supporting production agriculture by replacing traditional commodity programs and reducing many of the administrative requirements related to the remaining agriculture programs. Prior to the 1996 act, farmers participating in federal commodity programs were restricted to planting certain types and amounts of crops. 
Following the 1996 act, farmers are expected to plant and market crops by considering market conditions rather than by relying on government programs. As a result of the 1994 act, USDA has closed more than 300 offices, or about 14 percent of the 2,773 offices that were operating at the end of 1994. These closures required the farmers served by those offices to travel to a neighboring county for assistance. In addition to these office closings, USDA reduced FSA’s nonfederal staff from 13,432 in 1995 to 11,399 in 1997, a reduction of 2,033 employees, or about 15 percent. According to the 1998 budget proposal, USDA is scheduled to close 500 additional offices and reduce FSA’s county office staff by an additional 57 percent, from 11,399 employees in 1997 to 4,879 by 2002. The proposal’s estimated savings would total more than $1 billion for the 6 years through 2002. To date, USDA’s reductions in county office staff have been achieved primarily by reducing the staff at larger county offices and by closing or consolidating smaller county offices (those with three or fewer employees). Furthermore, USDA is undertaking an effort to streamline its administrative activities at the state and national level, which may affect the quality of service farmers receive. In December 1997, the Secretary of Agriculture approved a plan that will consolidate a number of administrative activities at headquarters and in state offices. The plan establishes a Support Services Bureau in headquarters and one state administrative support unit in each state. This organization will provide administrative services—including financial management, human resources, services supporting civil rights, information technology, and management services (including procurement)—to field-based agencies. 
USDA also has contracted for an independent study to examine FSA, the Natural Resources Conservation Service, and the Rural Development mission area for opportunities to improve overall customer service and the efficiency of the delivery system. The results of this study, expected to be completed in October 1998, will be incorporated into the future iterations of FSA’s strategic plan. Despite recent office closings and staff reductions, most farmers continue to be very satisfied with the quality of service they have been receiving from USDA, according to a USDA survey and our discussions with farmers. In USDA’s 1997 national survey, 90 percent of the more than 4,000 respondents said that they were very satisfied with the service they received from their county office and that local staff were responsive to their needs, provided reliable service, and showed empathy towards customers when conducting business. In addition, the participants said that “personalized face-to-face service” was important to them. In fact, when asked to identify alternative ways of doing business with the county office, such as by computer or telephone, nearly 60 percent of the farmers said that they did not want any changes and preferred to continue to conduct most business in person. According to all 60 farmers we spoke with by telephone, the quality of service in late 1997 was the same as or better than it was in 1995, despite staff reductions and office closures. These farmers lived in all parts of the nation and had participated in the Conservation Reserve Program, the farm loan programs, and/or the commodity programs. In some cases, these farmers lived in counties in which their local county office had been closed. They stated that the quality of service was high because FSA staff were efficient and knowledgeable. One farmer said that service in the county office was good because the county office employees took the time to become familiar with each farmer’s operation.
Farmers we spoke with were particularly pleased with FSA staff’s performance in the following areas: Completing paperwork. FSA staff have historically completed most farmers’ paperwork for the commodity programs. FSA staff told us that by completing the paperwork, they reduce the possibility of errors that would occur if farmers completed the paperwork on their own. Many farmers we talked to said that they like having FSA staff fill out their paperwork because it is very complex and they would have difficulty doing it by themselves. Storing and maintaining records. FSA staff maintain farmers’ commodity program records because, according to one FSA county executive director, many farmers like FSA to keep their historical farming records, such as acreage reports, on file in case farm programs change and the information is needed to establish eligibility for the new programs. Reminding farmers about key sign-up dates. FSA uses mail and telephone calls to remind farmers of key dates for enrolling in a program because officials are concerned that some farmers may otherwise forget to sign up. One farmer said that he appreciated receiving postcards from his county office when it was time for him to visit the office. Under the commodity programs, for example, FSA staff reminded farmers 15 days prior to the ending date of a sign-up period that they had not enrolled in the current year’s programs. Providing prompt walk-in service. At most county offices, farmers can visit without an appointment and receive prompt service for commodity programs. This service could range from answering simple questions to filling out a farmer’s paperwork. Farmers like the flexibility of coming into the office when it is convenient for them—when the weather is bad, for instance—without having to make an appointment. 
In commenting on a draft of this report, FSA officials noted that while the results of USDA’s survey and our discussions with farmers indicate that most farmers are satisfied with the service that they receive, some are not. For example, some small and minority farmers involved in the farm loan programs have criticized USDA recently for not providing adequate service. FSA officials stated that they would like to provide a better level of service for participants in the farm loan programs, but they lack adequately trained staff. As of December 1997, FSA had 2,396 offices and 11,399 county office employees. These office and staffing levels reflect the closing of more than 300 offices and staff reductions of about 15 percent since December 1994. If the 1998 budget proposal to further reduce staffing by an additional 50 percent and to close an additional 500 offices were carried out, FSA would average about two to three employees per office, in comparison with the current average of about five. As we have previously reported, county offices need a minimum of two staff just to conduct the administrative functions for maintaining basic office operations, such as obtaining and managing office space and processing the paperwork for the payroll. As a result, FSA staff in these smaller offices will have less time to provide service to farmers than they did when county offices were staffed more fully. The proposed staffing reductions will result in more county office closures than the 500 proposed, according to FSA officials we interviewed. As FSA closes offices, farmers will have to travel farther and visit offices that serve more farmers. Although they stated that they are still receiving quality service, some farmers we spoke with whose county office had recently closed have already experienced the service impacts associated with these changes. 
For example, according to one farmer—whose current county office is 45 miles away compared with his former office, which was 10 miles away—the staff at the new office did not have personal knowledge of his specific operations, such as the crops he grows, the farming techniques he uses, and the programs in which he normally participates. FSA officials recognize that additional staff reductions and office closings will reduce the level of personalized service to farmers and require them to accept greater responsibility for program requirements, including completing paperwork. At the same time, officials recognize that the 1996 act places more responsibility on farmers for planting and marketing decisions. In this regard, FSA officials told us that they are beginning to talk with farmers and the various groups involved in farming about the types of services FSA should provide in the future. We met with USDA officials, including the Associate Administrator for the Farm Service Agency, the Deputy Administrator for Farm Programs, and the Deputy Administrator for Farm Loan Programs. USDA generally agreed with the information presented in the report. In their comments, however, the officials noted that the services provided to farmers vary among the USDA programs. For example, Farm Service Agency officials stated that because the staff for the farm loan programs are not located in each county, these staff are not able to provide the same level of service that farmers participating in the traditional commodity programs received, such as having their paperwork filled out for them. Furthermore, these officials stated that some small and minority farmers have recently criticized USDA for not providing adequate service. We made changes to the report to reflect these concerns. In addition, USDA provided technical and clarifying comments that we incorporated as appropriate. 
To determine farmers’ opinions of the quality of service FSA provides in county offices, we reviewed selected aspects of the results of USDA’s National Customer Service Survey of farmers in 1997. Specifically, we analyzed and summarized responses on (1) the services that matter the most to farmers and (2) farmers’ general satisfaction with services provided by USDA’s service centers. This survey included over 4,000 farmers nationwide who participated in various farm programs. To verify and update these results, we obtained a database from USDA of the names, locations, and phone numbers of farmers who had previously completed a USDA customer service survey. We judgmentally selected 90 farmers who had participated in the Conservation Reserve Program, the farm loan programs, and/or the Acreage Reduction Program in 1995. We were able to contact 60 of these farmers across the nation by telephone to obtain information on the quality of service in FSA county offices in 1997 compared with the quality of service in 1995. Some of these farmers lived in counties in which the local county office had been closed. We also visited FSA officials at headquarters and FSA state and county office officials in eight states to discuss the quality of service farmers currently receive. The offices we visited were located in California, Connecticut, Illinois, Massachusetts, Missouri, Nebraska, North Carolina, and Washington State. In most of these county offices, we met with the county executive director, agricultural credit manager, and farmers from the FSA county committee. We also met with the state executive director in six states and members of the state committee in two states. We conducted our work from October 1997 through April 1998 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. 
At that time, we will provide copies to the House and Senate Committees on Agriculture; other interested congressional committees; the Secretary of Agriculture; and the Director of the Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report were Ronald E. Maxon, Jr.; Fred Light; Renee D. McGhee-Lenart; Paul Pansini; Carol Herrnstadt Shulman; and Janice M. Turner. Robert A. Robinson, Director, Food and Agriculture Issues

Pursuant to a congressional request, GAO reviewed the impact of actual and proposed staff reductions and office closings by the Farm Service Agency (FSA) on the quality of service to farmers. 
GAO noted that: (1) FSA's staff reductions and office closures to date do not appear to have affected the quality of service provided to farmers; (2) according to the Department of Agriculture's 1997 customer survey and GAO's recent discussions with farmers and FSA officials, most farmers are highly satisfied with the service they receive from their local office of FSA; (3) farmers are still generally able to receive prompt service when they walk into their county office and have FSA staff complete most of their required paperwork; (4) if FSA's staffing continues to be reduced and county offices are closed, however, the traditional level of service provided to farmers is likely to decrease; and (5) among other things, farmers will be required to accept greater responsibility for program requirements, including completing paperwork, with less assistance from agency staff; this change, however, is consistent with changes in the 1996 Federal Agriculture Improvement and Reform Act, which reduces federal controls over production and places more responsibility on farmers for planting and marketing decisions.
Most individuals diagnosed with ESRD are eligible to receive Medicare benefits under both Medicare Parts A and B. Medicare covers over 80 percent of all individuals with the disease. ESRD treatment options include kidney transplantation and maintenance dialysis. Maintenance dialysis removes from the individual’s blood the substances that healthy kidneys would otherwise filter out. Kidney transplants are not a practical option on a wide scale, as not all patients are candidates for transplant and suitable donated organs are scarce. In contrast, dialysis is the treatment used by most ESRD patients. Dialysis can be administered through two methods: hemodialysis and peritoneal dialysis. During hemodialysis, a machine pumps blood through an artificial kidney, called a hemodialyzer, and returns the cleansed blood to the body. Hemodialysis, the most prevalent treatment method, is generally administered at freestanding facilities that provide dialysis services. The conventional regimen includes hemodialysis three times a week. Peritoneal dialysis—which is generally done in the home—utilizes the peritoneal membrane, which surrounds the patient’s abdomen, as a natural blood filter. Patients remove wastes and excess fluids from their abdomen manually throughout the day, or a machine automates the process while they sleep at night. This procedure eliminates the need for the blood to leave the patient’s body and filter through a machine. The use of peritoneal dialysis as a treatment modality has declined over the last decade. One of the complications of ESRD is anemia, a condition in which an insufficient number of red blood cells is available to carry oxygen throughout the body. In ESRD patients, this condition is treated by maintaining at an optimal level the percentage, by volume, of red blood cells in whole blood. This measure is known as the hematocrit (Hct) level. 
The Kidney Disease Outcomes Quality Initiative (KDOQI), established by the National Kidney Foundation, has set the minimum target for ESRD patients’ Hct levels at 33 percent and has found insufficient evidence to recommend routinely maintaining Hct levels at 39 percent or greater. ESRD patients receive Epogen to keep their Hct above a minimum level. The Food and Drug Administration (FDA) labeled Epogen for use encompassing a somewhat lower Hct target level ranging from 30 to 36 percent. Recent clinical studies cited by KDOQI indicate that there may be increased patient mortality and morbidity if Hct levels are much higher than 39 percent. Epogen is typically administered to Medicare ESRD patients intravenously. Epogen can also be administered subcutaneously, that is, through an injection under the skin. The subcutaneous method requires less epoetin, but experts note that, because some pain is associated with this method, patients generally prefer intravenous delivery. Medicare’s composite rate is designed to cover the cost of services associated with a single dialysis treatment, including nursing and other clinical services, social services, supplies, equipment, and certain laboratory tests and drugs. Under the composite rate, facilities receive a fixed payment, regardless of their actual costs to deliver these services. In 2006, the composite base rate is about $130 for freestanding dialysis facilities. Medicare pays separately for certain drugs and laboratory tests that have become routine treatments since 1983. These drugs include, but are not limited to, epoetin (brand name, Epogen), injectable vitamin D, and injectable iron. Epogen is generally administered to most patients at every dialysis treatment, whereas the other drugs, although routinely provided, are not administered as frequently. Table 1 highlights three separately billable prescription drugs provided routinely to dialysis patients. 
As table 1 shows, three drugs—iron sucrose, paricalcitol, and epoetin alfa—account for about 87 percent of Medicare spending on separately billable ESRD drugs. Although each of these three drugs is a “sole-source” product—that is, produced by a single manufacturer—two of the three have pharmaceutical alternatives available, whereas the third, epoetin, has no available alternatives in the ESRD market. In recent years, Medicare’s method of paying for separately billable ESRD drugs has changed several times. Beginning in 1998, Medicare law required that payment for drugs covered under Part B equal 95 percent of the drug’s average wholesale price (AWP). Despite its name, however, AWP was neither an average price nor the price wholesalers charged. It was a price that manufacturers derived using their own criteria; there were no requirements or conventions that AWP reflect the price of an actual sale of drugs by a manufacturer. An analysis we conducted in 2001 on Part B drug prices found that Medicare’s AWP-based payments often far exceeded market prices that were widely available to health care providers. The MMA mandated that in 2005 Medicare pay for separately billable ESRD drugs based on their acquisition costs, as determined by the HHS Office of the Inspector General (OIG). Since acquisition costs were not defined in the MMA, the OIG determined a drug’s average acquisition cost based on a survey of prices providers paid for the top 10 ESRD drugs, ranked by Medicare expenditures. For 2005, Medicare paid the OIG-determined average acquisition cost for the top 10 ESRD drugs. For 2006, the MMA gave the HHS Secretary discretion to alter the basis of payment for separately billable ESRD drugs. Under this authority, CMS determined that Medicare would pay for the separately billable ESRD drugs using the method required by the MMA to pay physicians for these drugs—that is, 106 percent of the drug’s ASP. 
CMS instructs pharmaceutical manufacturers to report data to CMS on the ASP for each Part B drug sold by the manufacturer, within 30 days after the end of the quarter. For drugs sold at different strengths and package sizes, manufacturers are required to report price and volume data for each product, after accounting for price concessions. CMS then aggregates the manufacturer-reported ASPs to calculate a national ASP for each drug category. ASP rates are calculated and posted every quarter. The rates reflect the sales price on average from 6 months earlier. Since 2003, several legislative and regulatory changes have been implemented affecting Medicare’s composite rate for routine ESRD services and payment rates for separately billable ESRD drugs. The changes have increased the composite rate and reduced the subsidy facilities obtained from generous Medicare payments for the separately billable drugs under pre-MMA payment rates. Nevertheless, as long as facilities receive a separate payment for each administration of each drug and the payment exceeds the cost of acquiring the drug, an incentive remains to use more of these drugs than necessary. For Epogen, the most frequently used drug, several months of data indicate that the per-patient use of this drug continues to rise, although at a slower rate than under pre-MMA payment rates. The MMA initiated new Medicare payment provisions addressing the composite rate and payment for separately billable drugs. Prior to the MMA’s payment changes, facilities relied on payments for separately billable drugs to subsidize the cost of providing dialysis services covered under the composite rate. 
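The aggregation step described above, in which manufacturer-reported prices and volumes across strengths and package sizes are combined into a national ASP and payment is then set at 106 percent of it, can be sketched roughly as follows. This is a simplified illustration with invented product figures; CMS's actual methodology includes additional rules not modeled here.

```python
# Illustrative sketch of a volume-weighted national ASP and the resulting
# ASP + 6 percent payment rate. All product figures are hypothetical.

def national_asp(products):
    """Volume-weighted average sales price across reported products.

    `products` is a list of (price_per_unit, units_sold) pairs, one per
    strength/package size, already net of price concessions.
    """
    total_dollars = sum(price * units for price, units in products)
    total_units = sum(units for _, units in products)
    return total_dollars / total_units

def medicare_payment_rate(asp: float) -> float:
    """Medicare pays 106 percent of the national ASP."""
    return asp * 1.06

# Hypothetical quarterly reports for one drug category.
reports = [(9.50, 1_000_000), (9.80, 500_000)]
asp = national_asp(reports)         # 9.60 per unit
rate = medicare_payment_rate(asp)   # 10.176 per unit
print(round(asp, 2), round(rate, 3))
```

Note that the weighting by units sold means a high-volume package size pulls the national ASP toward its price, which is why manufacturers must report volume as well as price.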
In a 2004 report, we found that, in 2001, Medicare’s payment for the composite rate was 11 percent lower on average than facilities’ average costs to provide the items and services included in the composite rate, whereas Medicare’s payment for separately billable drugs was 16 percent higher than facilities’ average costs of acquiring these drugs. We concluded that this payment disparity created an incentive for facilities to overuse separately billable drugs, as payments for them compensated for losses on items and services included in the composite rate. Together with the MMA provisions, more recent legislative and regulatory changes have reduced the disparity between Medicare’s payments and facilities’ average costs for both composite rate services and separately billable drugs. Essentially, these changes lowered payments for separately billable drugs from their pre-MMA amounts, and raised payments for the composite rate. The base composite rate was increased by 1.6 percent in 2005 and 2006 and the composite rate total was further increased through a “drug add-on” payment, which shifted some of the payments for separately billable drugs to the composite rate. In 2005, the add-on equaled 8.7 percent of the updated composite rate. In 2006, the 8.7 percent was replaced with a drug add-on payment of 14.5 percent of the 2006 updated composite rate. (See table 2.) The most significant changes to the ESRD payment system are the changes in payment rates for separately billable drugs. In 2005, Medicare’s payment rates based on average acquisition costs were lower than its previous payment rates based on 95 percent of AWP. For example, from 2004 to 2005, the per-unit rate for iron dextran decreased from $17.91 to $10.94 and the per-unit rate for paricalcitol decreased from $5.33 to $4.00. (See table 3.) 
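The composite rate arithmetic described above can be made concrete. The sketch below uses the report's approximate $130 base rate for 2006; the $128 pre-update figure is hypothetical, and actual payments involve further facility-specific adjustments not modeled here.

```python
# Worked example of the composite rate drug add-on described in the report.
# The $130 base rate is the report's approximate 2006 figure for freestanding
# facilities; other figures are illustrative only.

def rate_with_addon(updated_composite_rate: float, addon_pct: float) -> float:
    """Total payment per treatment: the updated composite rate plus the
    drug add-on, expressed as a percentage of that updated rate."""
    return updated_composite_rate * (1 + addon_pct / 100)

base_2006 = 130.00                            # approximate 2006 base rate
total_2006 = rate_with_addon(base_2006, 14.5) # about $148.85 per treatment

# The 1.6 percent annual base-rate update compounds in the usual way; a
# hypothetical $128 rate updated by 1.6 percent becomes about $130.05.
updated = 128.00 * 1.016

print(round(total_2006, 2), round(updated, 2))
```

The add-on is applied to the updated rate, so the 2005-to-2006 change from 8.7 to 14.5 percent shifted a larger share of the former drug payments into the per-treatment bundle.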
Since 2006, when the payment method for separately billable drugs changed to ASP + 6 percent, Medicare’s payment rates have varied from quarter to quarter but have remained relatively consistent with the lower 2005 payments based on average acquisition costs. Since the implementation of these changes, Medicare spending for individual separately billable ESRD drugs has decreased to varying degrees. Beginning in 2005, when Medicare’s payment method for these drugs changed from AWP to average acquisition cost, Medicare expenditures for several separately billable drugs decreased 11.8 percent from 2004. (See table 4.) Specifically, the average payments for iron sucrose and paricalcitol decreased by almost 35 percent and 25 percent, respectively. Similarly, payment for Epogen was lower than it had been for the previous decade, when it was set statutorily at $10 per unit, but the reduction—3.2 percent—was significantly smaller than for the other drugs. Because payments to facilities for separately billable drugs are closer to the cost of acquiring these drugs and because composite rate payments have increased, the degree of cross-subsidization to support services provided under the composite rate has diminished, but the incentive to overuse these drugs has not been eliminated. To the extent that facilities can obtain the drugs for less than Medicare’s payment rates and that the volume of drugs billed for separately increases facilities’ revenue, an incentive remains for facilities to overuse these drugs to maximize revenues. (Note on the utilization data: figures are per ESRD patient with at least one Epogen claim in the first 6 months of the year; we restricted the data to the first half of each year to make our comparisons consistent with 2006, for which only partial-year data were available.)
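The payment reductions cited above can be recomputed from the per-unit figures in the report's table 3. The short check below is our own arithmetic; the $9.68 Epogen figure is derived from the stated 3.2 percent reduction, not stated directly in the report.

```python
# Recompute the 2004-to-2005 payment changes cited in the report.

def pct_change(old: float, new: float) -> float:
    """Percent change from `old` to `new` (negative for a decrease)."""
    return (new - old) / old * 100

# Per-unit rates from table 3: iron dextran and paricalcitol.
iron_dextran = pct_change(17.91, 10.94)   # about -39 percent
paricalcitol = pct_change(5.33, 4.00)     # about -25 percent

# Epogen: a 3.2 percent reduction from the long-standing $10-per-unit
# statutory rate implies a 2005 rate of roughly $9.68 per unit.
epogen_2005 = 10.00 * (1 - 0.032)

print(round(iron_dextran, 1), round(paricalcitol, 1), round(epogen_2005, 2))
```

The recomputed paricalcitol change matches the report's "almost 25 percent" figure.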
In addition to payment changes, CMS has sought over time to limit expenditures for Epogen by issuing policies that link payment to utilization. That is, Medicare reduces payments when a patient’s Hct level reaches a certain percentage. Since 1997, CMS has created three different monitoring policies to encourage the efficient use of Epogen for ESRD patients. Each of these policies has been closely aligned with the clinical guidelines for Hct levels endorsed by the National Kidney Foundation. In 1997, the first policy denied payment when a patient’s 3-month rolling average Hct level exceeded 36.5 percent. In 1998, CMS revised the policy so that the maximum level for the 3-month rolling average Hct was 37.5 percent; if a patient exceeded that level, payments were not denied as long as the Epogen dose was reduced 20 percent. In July 2004, CMS issued a proposal for a new monitoring policy. After consultation with the dialysis community, the final policy took effect on April 1, 2006. Under this policy, when a patient’s Hct level is above 39.0 percent, the facility must reduce the Epogen dosage by 25 percent of the preceding month’s administered amount. Whether or not the facility reduces the dosage, Medicare pays the facility as though the reduction has occurred—in effect, not rewarding the facility for overutilization. In broad terms, Medicare’s policy is to set payment rates that are adequate to ensure beneficiary access to services but do not exceed the costs efficient providers incur to furnish needed care. In prior work on Medicare payment for Part B drugs, which include separately billable ESRD drugs, we noted that the ASP method was practical for setting payment rates compared with Medicare’s previous methods to pay for these drugs, but we remained concerned about the appropriateness of the rates set under ASP. 
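The April 2006 monitoring policy described above amounts to a simple payment rule. The sketch below is our simplification; the actual policy includes reporting, exception, and appeal provisions not modeled here.

```python
# Simplified sketch of the April 2006 Epogen monitoring policy described in
# the report: when a patient's Hct exceeds 39.0 percent, Medicare pays as
# though the prior month's dose had been reduced by 25 percent, whether or
# not the facility actually reduced it.

HCT_CEILING = 39.0       # percent
REQUIRED_REDUCTION = 0.25

def payable_dose(hct: float, prior_month_dose: float,
                 billed_dose: float) -> float:
    """Dose (in units of Epogen) on which Medicare bases payment."""
    if hct > HCT_CEILING:
        capped = prior_month_dose * (1 - REQUIRED_REDUCTION)
        return min(billed_dose, capped)
    return billed_dose

# Patient above the ceiling: payment is capped at 75% of last month's dose.
print(payable_dose(hct=40.2, prior_month_dose=30_000, billed_dose=30_000))
# Patient within range: the billed dose is payable in full.
print(payable_dose(hct=36.0, prior_month_dose=30_000, billed_dose=30_000))
```

Because payment is reduced regardless of the facility's actual behavior, the rule removes the financial reward for maintaining Hct above the ceiling rather than prohibiting the higher dose outright.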
The practical aspects of ASP are several: it is based on actual transactions and is a better proxy for providers’ acquisition costs than Medicare’s previous methods to pay for these drugs; ASP is the most recent publicly available price information, as it is updated quarterly, and is therefore timely for rate-setting purposes; and price data from manufacturers are administratively easier for CMS to collect than obtaining such data from health care providers. (Medicare has a process under which facilities can appeal the denial of a claim by showing that it is medically necessary. 42 U.S.C. § 1395ff (2000). Effective October 2006, CMS revised the monitoring policy to, among other things, clarify its policy for reporting dosage reductions.) At the same time, the rationale for setting the payment rate for Medicare Part B drugs at 6 percent above ASP is unclear, further complicating efforts to determine the appropriateness of the rate. The ASP payment method is of particular concern with respect to Epogen because it is the only product available in the ESRD market for anemia management. The ASP method relies on market forces to achieve a favorable payment rate for Medicare—that is, one that is sufficient to maintain beneficiary access but not overly generous for providers and therefore wasteful for taxpayers. In principle, under ASP, when two or more clinically similar products exist in a market, market forces could serve to bring prices down, as each manufacturer competes for its own product’s market share. In contrast, when a product is available through only one manufacturer, Medicare’s rate lacks the moderating influence of competition. For this reason, Medicare’s ASP method may not be appropriate for Epogen, which is the product of a single manufacturer and has no competitor products in the ESRD market. 
The lack of price competition may be financially insignificant for noncompetitive products that are rarely used, but for Epogen, which is pervasively and frequently used, the lack of price competition could be having a considerable effect on Medicare spending. Since the introduction of Epogen in the ESRD anemia management market, it has been difficult for competitor products to enter this market. Amgen, Epogen’s manufacturer, has held seven patents on Epogen, the first of which was granted in 1987 and the last of which expires in 2015; Amgen has obtained injunctions against pharmaceutical firms seeking to market their anemia management drugs in the United States. However, competitor products may enter the U.S. market in the near future. There are three potential sources of future competition: a drug that currently exists, drugs that are likely to enter the market soon, and products that are under development. Aranesp is a drug that Amgen manufactures and markets to hospitals and physicians to treat anemia in patients with cancer and chronic kidney disease but generally does not market to ESRD facilities. CERA is a drug that the manufacturer—F. Hoffmann LaRoche—hopes to introduce in the United States sometime in 2007. Certain products currently in development, which are several years away from entering the market, could have a distinct advantage over injectable products, as they are expected to be long-lasting oral therapies. The composite rate for routine dialysis-related services was the first of Medicare’s several payment systems that, in broad terms, set a fixed, prospective rate for a set of clinically related services. Consistent with payment policy, the Congress has required CMS to develop a system that would no longer pay for each injectable ESRD drug under a separate rate but would bundle payment for these drugs together with other ESRD services under a single rate. 
A bundled rate would have advantages for achieving efficiency and greater clinical flexibility. CMS’s design of a bundled rate is under way but behind schedule, making the implementation of a fully bundled payment system, based on this design, at least several years away. Any payment system changes based on the report or demonstration would require legislation. Medicare’s approach to paying for most services provided by facilities is to pay for a group—or bundle—of services using a prospectively set rate. For example, under prospective payment systems, Medicare makes bundled payments for services provided by acute care hospitals, skilled nursing facilities, home health agencies, and inpatient rehabilitation facilities. In creating one payment bundle for a group of associated items and services provided during an episode of care, Medicare encourages providers to operate efficiently, as providers retain the difference if Medicare’s payment exceeds the costs they incur to provide the services. Medicare’s composite rate for routine dialysis-related services was introduced in 1983 and was the program’s first bundled rate. In recent years, we, the Medicare Payment Advisory Commission (MedPAC), and CMS have recommended expanding the bundled payment for ESRD services to include not only the services paid under the composite rate but also the drugs that facilities currently bill for separately. Experts contend that a bundled payment for dialysis-related services would have two principal advantages. First, it would encourage facilities to provide services efficiently; in particular, under a fixed, bundled rate for a defined episode of care, facilities would no longer have an incentive to provide more ESRD drugs than clinically necessary. Second, bundled payments would afford clinicians more flexibility in decision making because incentives to prescribe a particular drug or treatment are reduced. 
For example, certain clinical alternatives are, according to some ESRD experts, advantageous to patients and could result in the use of less Epogen, but these alternatives are not encouraged under the current payment system. Studies have shown that daily hemodialysis—which some experts contend is clinically preferable—reduced the need for Epogen in some ESRD patients with anemia. However, Medicare coverage is limited to three dialysis treatments a week. Under a bundled payment, facilities would have the flexibility to increase the number of weekly dialysis treatments and reduce their use of Epogen. Studies have also shown that patients who receive subcutaneous instead of intravenous injections of epoetin and patients undergoing peritoneal dialysis instead of hemodialysis need less epoetin to manage their anemia. Under the current payment system, which pays facilities for epoetin on a per administration basis, facilities have an incentive to select the epoetin delivery method and the dialysis modality that maximize their Medicare revenue. Under a bundled payment, facilities would have less incentive to choose the costlier intravenous over subcutaneous injections of epoetin or the costlier hemodialysis over peritoneal dialysis. Facility representatives, ESRD experts, and other interested parties we spoke with generally supported a bundled payment for dialysis-related items and services while underscoring the importance of certain elements as part of the bundled payment system. First, facility representatives noted that bundled payments called for a case-mix adjuster—that is, a mechanism to account for the differences in the mix of more expensive and less expensive patients across facilities. Without accounting for these differences, facilities that treated a disproportionate share of costly patients would be financially disadvantaged. 
Second, some facility representatives noted that an automatic payment update would be needed to adjust the bundled rate for inflation, consistent with Medicare’s other bundled payment systems, which are updated automatically on an annual basis. They pointed out that the current ESRD composite rate is Medicare’s only payment bundle that does not receive an automatic update. Third, ESRD experts we spoke with noted that, under bundling, the incentive to overuse services is blunted, but the incentive to underuse services is present. For example, facilities could choose to provide too little Epogen to patients with anemia because they would save money providing less of this costly drug. These individuals commented that CMS’s monitoring policy, which currently focuses on overutilization of Epogen, would need to refocus its attention on underutilization to ensure that, under a bundled payment system, ESRD patients received appropriate levels of Epogen and other dialysis-related drugs and services. The MMA mandated a two-pronged approach for CMS to study the creation of a bundled payment method. It required CMS to submit a report to the Congress on a bundled payment system design in October 2005 and to start a 3-year bundling demonstration in January 2006. The legislation linked the two requirements by directing CMS to base the design of the bundling demonstration on the content of the mandated report. It also required CMS to obtain input on the demonstration’s design and implementation from an advisory panel that included industry and government experts. The report had not been issued nor had the demonstration been launched as of November 2006. Any payment system changes based on CMS’s report or demonstration would require legislation. The report and demonstration efforts, led by two different organizational units in CMS, face similar design considerations. 
Both must define the ESRD services to be included in a payment bundle, design a case-mix adjustment model to account for differences in patients’ use of resources, and develop a payment policy for exceptional cases, known as an outlier policy. However, despite similar goals, each unit has a different focus. Essentially, the unit responsible for the report is designing a bundled payment system that is intended to be implemented programwide and expeditiously, following congressional approval. In contrast, the unit responsible for the demonstration is designing a bundled payment system that is intended to be implemented on a limited and self-selective basis—that is, through facilities’ voluntary participation in the demonstration. The time frame for implementing a bundled payment system based on CMS’s report is uncertain. Officials could not tell us when the report would be available. Furthermore, additional time is needed for the Congress to review the report and possibly pass legislation based on the report. CMS officials predict that it would take a minimum of 18 months to fully implement the system, once legislation had been enacted. The start of the bundled payment demonstration is similarly subject to an uncertain chain of events. Specifically, under MMA, CMS cannot launch its demonstration before considering the information in its mandated report. The rationale for Medicare to continue paying for Epogen and other ESRD drugs outside of a payment bundle has diminished over time. Composite rate updates and add-ons, coupled with the overhaul of payment for separately billable drugs, have moved Medicare toward paying more appropriately for ESRD services. Nevertheless, under the ASP payment method—which pays for separately billable ESRD drugs on a per administration basis—facilities continue to have an incentive to use these drugs more than may be necessary.
Paying for Epogen under ASP presents an additional dilemma: as a single-source drug in a market with no competitor products, Epogen is not subject to the moderating effects that competition can have on price. In our view, Medicare could realize greater system efficiency if all services, including drugs, were bundled under a single payment. A bundled payment—suitably adjusted for differences across facilities in their mix of patients—would encourage facilities to use drugs more prudently, as they would have no financial incentive to use more than necessary and could retain the difference between Medicare’s payment and their costs. At the same time, because treatment choices would be payment neutral, clinicians would have more flexibility to try different treatment combinations of items and services paid for in the bundle. To account for facilities’ increased or decreased costs over time, a periodic reexamination of the bundled rate may be necessary. In the case of Epogen, for example, if other competitor products entered the market in the future, the costs facilities would incur to treat anemia could decline. By adjusting the payment bundle accordingly, Medicare could realize the benefits of such cost reductions. CMS’s time line is considerably protracted for issuing the mandated report on a bundled ESRD payment system and conducting a demonstration that remains under development. The time needed to complete these steps makes the prospect of implementing such a system several years away. In light of the uncertain time frame for CMS’s test of bundling and the potential for bundling to eliminate financial incentives to overuse separately billable drugs, the Congress should consider establishing a bundled payment system for all ESRD services as soon as possible. We invited representatives of drug manufacturers, large and small dialysis facility organizations, and a nephrologist specialty association to review and comment on the draft report. The groups represented were Amgen Inc.
(Amgen), F. Hoffmann-La Roche Ltd. (Roche), the Kidney Care Council (KCC), the National Renal Administrators Association (NRAA), and the Renal Physicians Association (RPA). Several of the industry groups noted that the report was well written, thorough, and covered many of the issues affecting dialysis providers. The bulk of the groups’ comments focused on three general issues central to the message of our report: the increase in utilization of Epogen over time, the current ASP-based payment system for ESRD drugs, and the implementation of a fully bundled ESRD payment system. First, Amgen, KCC, Roche, and NRAA noted that the report did not fully explain why utilization of Epogen has grown over time or why the growth rate has slowed in recent years. Amgen stated that the draft report did not sufficiently cover the goal of Epogen therapy—which is to increase patient Hct levels—and its link to improved quality of life for dialysis patients. KCC noted that while the average Epogen dose has increased over time, patient outcomes—as measured by average Hct levels—have also improved. KCC further contended that because Epogen use has remained relatively flat in recent years, providers are not responding to the incentive to overuse ESRD drugs. Roche maintained that the slow growth in Epogen use over the past few years is attributable to more patients’ having achieved Hct levels within the target range. NRAA added that the slower growth of Epogen use is positive because it demonstrates that providers use less Epogen as more patients reach the target Hct range. In our report, we discuss the utilization of Epogen rather than the clinical outcomes associated with that utilization. In response to the groups’ comments, we have added information that describes the benefits of Epogen therapy as well as data on patient Hct levels prior to the MMA payment changes.
Although we do not take a position on whether the drug is overutilized at the levels we report, we stand by our contention that an inherent incentive to maximize revenues exists when items are paid for on a cost-plus (e.g., ASP+6 percent), fee-for-service basis. It is because of the inherent nature of this incentive that we recommend combining payment for ESRD drugs with all dialysis services under a single bundled rate. Second, all of the groups commented on our discussion in the report of the current ASP-based payment method for separately billable ESRD drugs, with some groups expressing concerns about an abrupt movement to a fully bundled rate. Amgen noted that the ASP method is relatively new and that it is too early to decide whether to move to a fully bundled rate. In addition, Amgen was concerned with our characterization of ASP payment issues associated with Epogen and stated that the entry of a new anemia management product may not necessarily result in reduced prices. Two of the organizations noted that, prior to moving to a bundled rate, a transitional system—one that encourages price competition for anemia management drugs—may be desirable. Roche stated that continuing to use the ASP-based payment system for Epogen could have negative downstream effects on a fully bundled ESRD payment system, as any price increases prior to bundling would be captured in the dollar amounts allocated for anemia management drugs included in the bundle. Similarly, KCC stated that an alternative payment system should be explored prior to bundling. KCC also stated that, as long as there is no viable clinical alternative to Epogen, bundling by itself would not provide for clinical flexibility, nor would bundling alone ensure drug price stability. KCC suggested that a transitional system could involve paying for drugs at ASP and transferring the rate’s current 6 percent add-on to the composite rate.
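The cost-plus incentive described in this discussion can be illustrated with simple arithmetic. The sketch below is a hypothetical illustration: the dollar figures and dose counts are invented for the example, and only the "ASP plus 6 percent" per administration payment structure comes from this report.

```python
# Hypothetical illustration of the cost-plus incentive under ASP+6 percent.
# All dollar figures and dose counts below are invented for the example;
# only the "ASP plus 6 percent" payment structure comes from the report.

def asp_payment(asp_per_dose, doses):
    """Medicare pays the facility ASP plus a 6 percent add-on for each dose."""
    return asp_per_dose * 1.06 * doses

def facility_margin(asp_per_dose, acquisition_cost_per_dose, doses):
    """Facility keeps the difference between payment and acquisition cost."""
    return asp_payment(asp_per_dose, doses) - acquisition_cost_per_dose * doses

# Suppose a facility acquires the drug at exactly ASP ($10 per dose,
# hypothetical). Each additional dose adds to the margin, so revenue
# grows with utilization:
margin_100 = facility_margin(10.00, 10.00, doses=100)  # $60 on 100 doses
margin_150 = facility_margin(10.00, 10.00, doses=150)  # $90 on 150 doses
print(round(margin_100, 2), round(margin_150, 2))
```

Under a bundled payment, by contrast, the payment would be fixed regardless of the number of doses, so each additional dose would reduce rather than increase the facility's margin.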
In general, both RPA and NRAA viewed the ASP-based payment method for ESRD drugs favorably. RPA specifically referred to the recent legislative and regulatory actions, including the move to an ASP-based rate, as “responsible,” because payments for separately billable drugs were lowered while the composite rate was increased. Our discussion of the ASP-based payment method focuses on payment for separately billable drugs in general and on Epogen in particular because of its market domination and the high Medicare expenditures associated with it. We agree that the introduction of a competitor product may not result in immediate price reductions, but note that, in principle, competition tends to lower prices over time. Although we acknowledge that there may be a better way to pay for separately billable drugs than ASP+6 percent, our focus is on the need to mitigate the incentives that can undermine the efficient use of resources in ESRD care. Any transitional system that allows separate billing for individual drugs perpetuates the incentive to maximize revenues through utilization of these drugs. We agree that bundling by itself cannot solve problems resulting from the lack of price competition. However, as noted in our draft report, if price competition were introduced under a bundled payment system, it could result in lower treatment costs for providers and—after adjustments to the bundle for these lower costs—could result in savings for Medicare. Finally, representatives from four of the groups expressed concerns about implementation challenges associated with a payment bundle. Consistent with CMS’s position and the position of experts cited in our draft report, Amgen and KCC emphasized the importance of appropriate case-mix adjustment in a bundled payment system.
KCC underscored the considerable variation in patients’ need for Epogen and the role of the case-mix adjuster to ensure adequate compensation for providers treating patients needing unusually high levels of the drug. RPA, NRAA, and KCC were concerned that bundling could limit innovation in the ESRD field or that physicians would be reluctant to use any new ESRD drugs that facilities would find too costly to cover within the payment bundle. Consistent with this concern, NRAA noted that the payment bundle methodology should have a mechanism to ensure the appropriate incorporation of new technologies and treatment protocols. We agree that an appropriate case-mix adjuster is important to a bundled payment system and noted in the draft report that adjusting for differences in patients’ needs was a key point made by interested parties we contacted. We acknowledge that if the payment bundle does not account for patient differences, facilities that treat a disproportionate share of costly patients would be financially disadvantaged. We note that CMS has done extensive research on case-mix adjustment in a fully bundled ESRD payment system and believe that any new system will benefit from these efforts. We also agree that a new payment bundle should be periodically updated to reflect the costs of current technologies and treatment protocols. Specific details on the contents of a bundle, its implementation, and evaluation over time were beyond the scope of this report. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees and other interested parties. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have questions about this report, please contact me at (202) 512-7101 or steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. A. Bruce Steinwald (202) 512-7101 or steinwalda@gao.gov. Phyllis Thorburn, Assistant Director; Jessica Farb; Hannah Fein; Zachary Gaumer; and Shivani Sharma made key contributions to this report.
Medicare covers dialysis--a process that removes excess fluids and toxins from the bloodstream--for most individuals with end-stage renal disease (ESRD), a condition of permanent kidney failure. The Centers for Medicare & Medicaid Services (CMS) pays for certain dialysis services under a type of bundled rate, called a composite rate, and, for certain dialysis-related drugs, pays a separate rate per dose each time the drug is administered. These drugs are referred to as "separately billable" and are paid at 6 percent above manufacturers' average sales price (ASP). Recently, the Congress required CMS to explore the creation of a bundled payment for all ESRD services, including separately billable drugs. GAO was asked to examine (1) recent changes in payments for ESRD services, (2) the ASP payment method of setting rates for separately billable ESRD drugs, and (3) CMS efforts to develop a bundled payment method that includes all ESRD drugs. GAO obtained information for this study from CMS, the U.S. Renal Data System, ESRD experts, and previously issued GAO reports. The effect of several legislative and regulatory changes since 2003 has been to raise the composite rate while reducing Medicare's pre-2005 generous payments for separately billable ESRD drugs. In 2005, when the first legislative change was implemented, Medicare expenditures for certain separately billable drugs dropped 11.8 percent.
In 2006, Medicare regulation changed the payment for these drugs to a method based on ASP. Since then, Medicare's payment rates have varied from quarter to quarter but have remained relatively consistent with the lower 2005 payment rates. Medicare's cost containment efforts have targeted the most expensive of the separately billable drugs--Epogen--for which program spending totaled $2 billion in 2005. Epogen is used to treat anemia in ESRD patients; most patients receive this drug at nearly every dialysis session. Recent data indicate that Epogen use per patient continues to rise, although more slowly than in previous years. Several unknowns about the composition of ASP and the lack of empirical evidence for the percentage level added to ASP make it difficult for CMS to determine whether the ASP-based payment rates are no greater than necessary to achieve appropriate beneficiary access. Paying for Epogen under the ASP method is of particular concern. The ASP method relies on market forces to moderate manufacturers' prices; but Epogen is the product of a single manufacturer and has no competitor products in the ESRD market. Without competition, the power of market forces to moderate price is absent. For rarely used products, the lack of price competition may be financially insignificant, but for Epogen, which is pervasively and frequently used, the lack of price competition could be having a considerable effect on Medicare spending. In 2003, the Congress required CMS to issue a report and conduct a demonstration of a system that would bundle payment for ESRD services, including drugs that are currently billed separately, under a single rate. The bundled payment approach, used to pay for most Medicare services, encourages providers to operate efficiently, as they retain the difference if Medicare's payment exceeds the costs they incur to provide the services. 
GAO and others have found that a bundled rate for all ESRD services would have advantages for achieving efficiency and clinical flexibility in treating ESRD patients. CMS's demonstration testing the feasibility of a bundled rate, mandated to start in January 2006, is delayed, as is the completion of the agency's mandated report to the Congress on bundling. The report was due in October 2005; as of November 2006, CMS officials could not tell us when the report would be available.
Construction of the Y-12 plant in Oak Ridge, Tennessee, began in 1943 as part of the World War II Manhattan Project. The plant’s early mission included the processing of enriched uranium necessary for building nuclear weapons. Today, the Y-12 plant continues its mission as NNSA’s primary facility in the nuclear weapons complex for producing enriched uranium components necessary for maintaining the U.S. nuclear weapons stockpile. In addition, the Y-12 plant is used for dismantling weapons components, storing and managing nuclear material suitable for nuclear weapons, and processing fuel for Naval and research reactors, among other things. Currently, the Y-12 plant consists of a patchwork of facilities and equipment that are not always efficiently connected, requiring the transport of materials during processing and component production operations. According to NNSA documents, the workflow is inefficient and requires a significant number of security personnel to patrol a relatively large protected area. Moreover, because of age and facility deterioration, operations and maintenance costs are continually rising with frequent outages and interruption in work schedules. According to NNSA officials, the existing facilities also do not meet a number of significant regulatory and design standards that are either in place or projected to be in the near future. For example, these facilities do not meet current standards for protection against natural occurrences or fire. Furthermore, existing Y-12 plant facilities do not provide optimal worker safety and protection from exposure to radioactive materials, including uranium, and other hazardous materials. Although these facilities have had periodic upgrades, the equipment, buildings, and support utilities need to be modernized for the Y-12 plant to continue to meet its mission, according to NNSA officials. 
NNSA plans to transfer much of the ongoing uranium processing work and uranium component production that is performed at existing facilities at the Y-12 plant to the UPF in order to continue to support the nation’s nuclear weapons stockpile and provide uranium fuel to the U.S. Navy, among other things. The proposed UPF is to consist of a single, consolidated uranium processing and component production facility to encompass less than half the size of the existing Y-12 plant facilities. NNSA officials expect that a combination of modern processing equipment and consolidated operations at the UPF will significantly reduce both the size and cost of enriched uranium processing at the Y-12 plant. Specifically, the officials said that the more-efficient layout of the new facility and more-modern equipment will significantly reduce processing and production costs, including costs associated with facility and equipment maintenance and maintaining worker and environmental health and safety. DOE Order 413.3A establishes a process for managing the department’s major projects—including contractor-run projects that build large complexes that often house unique equipment and technologies. The order covers activities from identification of need through project completion. Specifically, the order establishes five major milestones—or critical decision points—that span the life of a project. These critical decision points are:
Critical Decision 0: Approve mission need.
Critical Decision 1: Approve alternative selection and cost range.
Critical Decision 2: Approve performance baseline.
Critical Decision 3: Approve start of construction.
Critical Decision 4: Approve start of operations or project completion.
Order 413.3A specifies the requirements that must be met, along with the documentation necessary, to move a project past each milestone. In addition, the order requires that DOE senior management review the supporting documentation and approve the project at each milestone.
DOE also provides suggested approaches for meeting the requirements contained in Order 413.3A through additional guidance. For years, DOE and NNSA have had difficulty managing their contractor-run projects. Despite repeated recommendations from us and others to improve project management, DOE and NNSA continue to struggle to keep their projects within their cost, scope, and schedule estimates. Because of DOE’s history of inadequate management and oversight of its contractors, we have included contract and project management in NNSA and DOE’s Office of Environmental Management on our list of government programs at high risk for fraud, waste, abuse, and mismanagement since 1990. In response to its continued presence on our high-risk list, DOE analyzed the root causes of its contract and project management problems in 2007 and identified several major findings. Specifically, DOE found that the department:
often does not complete front-end planning to an appropriate level before establishing project performance baselines;
does not objectively identify, assess, communicate, and manage risks through all phases of project planning and execution;
fails to request and obtain full project funding;
does not ensure that its project management requirements are consistently followed; and
often awards contracts for projects prior to the development of an adequate independent government cost estimate.
To address these issues and improve its project and contract management, DOE has prepared a corrective action plan with various corrective measures to track its progress. The measures DOE is implementing include making greater use of third-party reviews prior to project approval, establishing objective and uniform methods of managing project risks, better aligning cost estimates with anticipated budgets, and establishing a federal independent government cost-estimating capability. NNSA’s current cost estimates for constructing the UPF are already more than double its initial estimate.
Moreover, the $200 million estimated annual savings in operations, maintenance, and security costs may not begin to be realized until the transition between existing uranium processing facilities at the Y-12 plant and the new UPF is complete. Although NNSA’s current estimate prepared in 2007 indicates that the UPF construction will be completed between 2018 and 2022, NNSA officials expect the UPF will not be completed before 2020 due to funding shortfalls. NNSA’s current estimate, which was prepared in 2007 at critical decision 1, indicates that the UPF will cost between $1.4 billion and $3.5 billion to construct. This is more than double NNSA’s initial 2004 estimate, prepared at critical decision 0, of between $600 million and $1.1 billion. Cost estimates for project engineering and design, which are less than halfway completed, have already increased by about 42 percent—from $297 million to $421 million. According to UPF project officials, these increases are the result of, among other things, changes in engineering and design pricing rates. In January 2010, we reported that NNSA’s current cost estimate for the UPF that was prepared in 2007 at critical decision 1 did not meet all cost-estimating best practices because it did not exemplify the characteristics of a high-quality cost estimate. As identified by the professional cost-estimating community in our Cost Estimating and Assessment Guide, a high-quality cost estimate is credible, well documented, accurate, and comprehensive. However, our January 2010 report found that the UPF’s current cost estimate prepared in 2007 only partially or somewhat met these four characteristics. For example, we found the UPF cost estimate only somewhat credible because an independent cost estimate had not been conducted. Instead, the project received an independent cost review as part of an independent technical review.
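The growth figures cited here can be checked with simple arithmetic; the sketch below uses only the dollar figures stated in this report (in millions and billions, respectively):

```python
# Check of the cost growth cited in the report.
# Engineering and design: $297 million grew to $421 million ("about 42 percent").
design_initial = 297.0   # $ millions
design_current = 421.0
pct_increase = (design_current - design_initial) / design_initial * 100

# Construction: the midpoint of the 2007 range ($1.4B-$3.5B) versus the
# midpoint of the 2004 range ($0.6B-$1.1B) ("more than double").
midpoint_2007 = (1.4 + 3.5) / 2   # $ billions
midpoint_2004 = (0.6 + 1.1) / 2
growth_ratio = midpoint_2007 / midpoint_2004

print(round(pct_increase), round(growth_ratio, 1))
```

The design-cost increase works out to roughly 42 percent, and the midpoint of the current construction range is well over twice the midpoint of the initial range, consistent with the report's characterizations.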
An independent cost review is less rigorous than an independent cost estimate because it only addresses the cost estimate’s high-value, high-risk, and high-interest aspects without evaluating the remainder of the estimate. Moreover, we found the UPF cost estimate was only somewhat accurate because it was not based on a reliable assessment of the costs most likely to be incurred. The UPF cost estimate used an estimating methodology that was not appropriate for a project whose design was not stable and was still anticipated to change. NNSA’s independent technical review of the UPF stated that the project’s cost-estimate range was unsupported in part because it was prepared with significant detail—for example, the estimate provided a count of piping and fittings for the facility—despite the fact that there had been no design of technical systems or of the building on which to base these details. Our January 2010 report recommended, among other things, that DOE follow best practices and conduct an independent cost estimate for all major projects. In response to our recommendation and recent congressional committee direction, DOE’s Office of Cost Analysis is conducting an independent cost estimate on the UPF project before critical decision 2—approval of a formal cost and schedule performance baseline. This independent cost estimate is expected to be completed by the end of 2010. While this independent cost estimate may be used by NNSA headquarters officials as part of its process for approving the project’s performance baseline, the extent to which Y-12 officials will accept the independent cost estimate’s results as reliable is uncertain. Specifically, NNSA Y-12 project officials told us that the independent cost estimate will be based, in large part, on a subjective assessment of the independent cost estimating team’s past experiences on similar construction projects.
This is in contrast to the cost estimate prepared by the UPF project that is based on a detailed breakdown of the estimated prices of labor and materials specific to the UPF. Project officials noted that DOE’s Office of Cost Analysis currently has no formal process for reconciling the two estimates given their different approaches. However, officials from DOE’s Office of Cost Analysis told us that the independent cost estimate will be compared to the scheduled work and construction requirements specific to the UPF to understand what assumptions and cost elements are causing differences, if any, between the two estimates. According to these officials, this comparison will enable them and NNSA Y-12 project officials to understand cost risks for the project and determine how to address these issues. In addition, DOE is in the process of developing draft policy that is expected to help establish requirements and responsibilities for developing cost estimates for programs and performing independent estimates for program and project cost estimates. However, the current version of the draft policy does not specifically address how differing cost estimates should be reconciled. According to NNSA officials, efficiency gains resulting from consolidating facilities at the Y-12 plant are likely to result in a savings of about $200 million annually in operations, maintenance, security, and other costs. For example, NNSA estimates it will save $54 million annually from the large reduction in the UPF’s security perimeter when compared to the security perimeter around existing uranium processing facilities at the Y-12 plant. NNSA estimates cost savings will also result from the smaller amount of hazardous and radioactive waste the UPF will generate as compared to the existing facilities. 
However, these savings may not begin to be realized until the transition between existing uranium processing facilities at the Y-12 plant and the new UPF is complete because both may need to operate simultaneously for an indeterminate period until the old facilities are decontaminated and decommissioned. For example, the Y-12 plant may need to continue to maintain some security in and around the old uranium processing facilities for some time after the UPF is built and operating because significant quantities of enriched uranium could still be present in the old facilities’ piping and processing equipment during decontamination and decommissioning. According to NNSA officials, security measures in the old facilities can be significantly reduced once enriched uranium inventories are transferred to the UPF. In addition, unknown quantities of hazardous and radioactive waste will continue to be generated during the cleanup of the old facilities—prior to demolishing them—that will need to be treated and disposed, and potentially secured. NNSA’s current estimate prepared in 2007 at critical decision 1 indicates that the UPF construction will be completed as early as 2018 and as late as 2022. However, NNSA officials currently expect the UPF will not be completed before 2020 due to funding shortfalls. We have previously reported on DOE’s use of unrealistic funding estimates while establishing cost and schedule baselines––a risk that also applies to NNSA major construction projects. In addition, as discussed earlier, DOE’s own root cause analysis of its contract and project management problems found that the department, among other things, fails to request and obtain full project funding. Consistent with our prior work and DOE’s analysis, a 2007 technical independent review on the UPF project found a large disconnect between the funding available in NNSA’s annual spending plan and the assumed annual funding levels in the UPF cost estimate. 
Specifically, the review found that planned funding levels for fiscal years 2006 through 2008 did not meet the funding needs for the amount of work planned for those years. Despite this early warning of funding risks, NNSA officials approved the initial project cost range a few months after this technical review. Moreover, with the submission of the President’s budget for fiscal year 2010, NNSA officials anticipate a funding shortfall of nearly $200 million in fiscal year 2011 between what NNSA estimated the UPF project needed and what NNSA included in its budget request to Congress. NNSA officials said that this shortfall will likely delay project milestones and ultimately delay the UPF’s estimated project completion from as early as 2018 to at least 2020 or later. This delay could, in turn, increase project costs. Potential funding shortfalls in subsequent years have also been identified as an ongoing high risk by project officials, which could result in additional unknown project delays and cost increases. To address this concern about funding shortfalls, NNSA requested an internal review in February 2010 to ensure that UPF project funding expectations from fiscal years 2012 through 2016 are reasonable. According to NNSA’s briefing on the results of the review, NNSA’s funding analyses appears to have addressed only whether the project would likely be able to spend the funds it requests in fiscal years 2012 and 2013. Importantly, the analysis appears to be incomplete because it (1) covers only 2 years and (2) does not address whether NNSA can realistically provide needed UPF funding given other NNSA priorities, such as other construction projects that will compete for funds in the same years. 
For example, according to NNSA’s Future Years Nuclear Security Program accompanying DOE’s fiscal year 2011 congressional budget request, NNSA expects to request about $305 million in fiscal year 2012 to fund the Chemistry and Metallurgy Research Facility Replacement project at the Los Alamos National Laboratory, while requesting about one-third that amount—about $105 million—for the UPF. Without assurance that NNSA mission priorities and its funding plans have been closely aligned with the UPF project’s assumed annual funding levels, the UPF’s cost and schedule estimates may not be credible. NNSA is developing 10 new technologies to install in the UPF and is using a systematic approach to gauge their maturity; however, NNSA may lack assurance that all technologies will work as intended before making key project decisions in accordance with best practices and our prior recommendations. If critical technologies do not work as intended, project officials may have to revert to existing or alternate technologies, which may result in higher costs and schedule delays. NNSA is developing 10 advanced uranium processing and nuclear weapons component production technologies for the UPF that, according to NNSA officials, will be more effective and efficient than existing technologies and that will reduce the hazards workers face at the Y-12 plant. (See table 1.) NNSA uses both chemical and metalworking processes and technologies to perform its work in the existing aging facilities at the Y-12 plant. For example, NNSA uses chemicals and other means to recover enriched uranium from disassembled components and other scrap or salvaged materials in NNSA’s inventory. Once the uranium is recovered, it can be transformed into other forms, including powder-like enriched uranium oxide or uranium metal suitable for storage. In addition, NNSA uses enriched uranium metalworking processes to, among other things, prepare new or refurbished nuclear weapons components.
For example, metalworking processes can include heating the uranium into liquid form so it can be poured into casts to create a variety of needed components. Metalworking processes also include machining operations where the uranium metal is cut on special tools at high speeds to create needed enriched uranium shapes. However, existing technologies at the Y-12 plant have become outdated, making operations less efficient than they would be with newer technologies. Existing technologies also expose workers to greater hazards because, for example, current machining operations are largely exposed and not automated, placing operators in greater contact with hazardous and radioactive materials. Among the new technologies NNSA is developing are new chemical processing technologies for the UPF to address problems associated with current chemical processing technologies. For example: Bulk metal oxidation. This new technology for converting bulk uranium metal into a powder-like oxide will eliminate some intermediate processing steps in use at the Y-12 plant. The technology is expected to reduce the size of facilities needed for chemical processing and lessen workers’ exposure to radiation and other hazards, among other things. Saltless direct oxide reduction. This new technology is expected to convert uranium dioxide into uranium metal, which would eliminate the use of some materials and processes that NNSA considers potentially hazardous to workers. NNSA also plans to develop new metalworking technologies to produce uranium-related components at the UPF, including: Microwave casting. This technology uses microwave energy to heat uranium metal so that it can be poured into molds to produce various forms. It will replace an existing heating and casting process and is expected to be more effective, cost less to operate, and reduce the operator’s exposure to uranium, according to NNSA officials. Agile machining.
This technology consists of a system that combines multiple machining operations into a single, automated process. This new process is expected to improve worker safety by minimizing exposure to radioactive metal particles because all of the work will be performed within a sealed enclosure called a glovebox. Chip management. One of four subsystems of agile machining, this technology is being developed as another means to improve worker safety. For example, the new technology will replace manual operator tasks with a process that automatically collects uranium shavings, or chips. NNSA hopes this technology will help to minimize operator exposure to uranium. Over the past several years, we have stressed the importance of assessing technology readiness to complete projects successfully, while avoiding cost increases and schedule delays. Specifically, in 1999 and 2001, we reported that organizations using best practices recognize that delaying the resolution of technology problems until construction can result in at least a 10-fold cost increase. We also reported that an assessment of technology readiness is even more crucial at critical decision points in the project, such as approving a formal cost and schedule performance baseline, so that resources can be committed toward technology procurement and facility construction. Proceeding through these critical decision points without a credible and complete technology readiness assessment can lead to problems later in the project because project managers would not have the early warning of potential technology difficulties that such an assessment provides. To ensure that the UPF’s new technologies are sufficiently mature in time to be used successfully, NNSA is using a systematic approach—Technology Readiness Levels (TRLs)—for measuring the technologies’ technical maturity.
TRLs were pioneered by the National Aeronautics and Space Administration (NASA) and have been used by the Department of Defense (DOD) and other agencies in their research and development efforts for several years. DOE and NNSA adopted the use of TRLs agencywide in response to our March 2007 report that recommended that DOE develop a consistent approach to assessing technology readiness. As shown in table 2, TRLs are assigned to each critical technology on a scale from a TRL 1, which is the least mature, through TRL 9—the highest maturity level where the technology as a total system is fully developed, integrated, and functioning successfully in project operations. Appendix II provides additional detailed information on TRLs. According to best practices we identified in our 2007 report, TRLs are useful because they: provide project managers with a method for measuring and communicating technology maturity levels from a project’s design to its construction; provide a common language for project stakeholders, revealing any gaps between a technology’s current and needed readiness; assist in decision-making and ongoing project management; increase the transparency of risk acceptance to identify technologies that most need resources and time; and reduce the risk of investing in technologies that are too immature. NNSA has made progress using TRLs to gauge the maturity of critical new UPF technologies; however, based on discussions with NNSA and contractor officials and our analysis of NNSA documents, NNSA does not expect to have optimal assurance as defined by best practices that 6 of the 10 new technologies being developed for UPF will work as intended before key project decisions are made. 
According to best practices we identified in our 2007 report, achieving an optimal level of assurance—reaching specific TRLs that demonstrate the technologies will work as intended—prior to making critical decisions can mitigate the risk that new or experimental technologies will not perform as intended, which can result in costly design changes and construction delays. DOE’s guidance on the use of TRLs recommends that new technologies achieve TRL 6—the level where a prototype is demonstrated in a relevant or simulated environment and partially integrated into the system—by the time of critical decision 2—approval of a formal cost and schedule baseline for the project. This is consistent with the practices of other federal agencies such as DOD. Most of the technologies NNSA is developing are expected to reach TRL 6 or higher by the time NNSA approves a formal cost and schedule performance baseline for installing this equipment in the UPF in July 2012. For example, the new microwave casting technology is already at TRL 7. According to NNSA officials, NNSA has recently installed microwave casting technology in existing facilities at the Y-12 plant to demonstrate that it will heat enriched uranium as designed in an actual operational environment. As a result, NNSA will have high assurance that this technology will work as intended prior to approving the UPF’s formal cost and schedule performance baseline. However, NNSA does not expect to achieve the required level of readiness for another key technology. Specifically, based on discussions with NNSA and contractor officials and our analysis of NNSA documents, NNSA does not expect one critical technology it is developing—agile machining—to reach TRL 6 until 18 months after approval of the project’s cost and schedule performance baseline. Nevertheless, NNSA plans to approve its performance baseline with less than optimal assurance that this technology will work as intended.
NNSA officials told us they have developed plans to address risks resulting from this technology readiness gap. Specifically, NNSA developed a technology maturation plan in early 2010 to track the technology development and engineering activities needed to bring the agile machining technology to TRL 6. DOE’s guidance on the use of TRLs is inconsistent with best practices used by DOD and with our previous recommendations with regard to technology readiness at another critical decision—start of construction. Specifically, DOD recommends that technologies reach TRL 7—the level where a prototype is demonstrated in an operational environment—prior to beginning the production and deployment phase, the equivalent of beginning construction on a DOE project. Similarly, in 2007, we recommended that DOE construction projects demonstrate TRL 7 or higher before construction. Reaching this level indicates that the technology prototype has been demonstrated in an operating environment, has been integrated with other key supporting subsystems, and is expected to have only minor design changes. Nevertheless, DOE’s guidance does not require technologies to advance from TRL 6 to TRL 7 between the approval of a formal cost and schedule baseline and the beginning of construction. Six of the 10 technologies NNSA is developing are not expected to reach TRL 7 before UPF construction begins. In the case of agile machining, NNSA expects the technology to have achieved only TRL 6 by the time of its expected procurement in December 2014—a full year after construction of the UPF is expected to begin in December 2013. Table 3 provides details on the current TRL for the 10 technologies, the TRL expected by the approval of a formal cost and schedule baseline in July 2012, the TRL expected by the start of construction in December 2013, and whether the expected TRLs meet best practices.
Because not all of the technologies being developed for the UPF will achieve optimal levels of readiness prior to project critical decisions, NNSA may lack assurance that all technologies will work as intended. This could force the project to revert to existing or alternate technologies, which could result in design changes, higher costs, and schedule delays. Other technology development problems have also occurred. For example, NNSA recently downgraded special casting technology from TRL 4 to TRL 3 because, according to UPF officials, unexpected technical issues occurred that required additional research and testing to resolve. Although officials expect this technology to be at TRL 6 by the time a formal cost and schedule baseline is approved in July 2012, it is not expected to reach TRL 7 before construction begins in December 2013. A June 2010 NNSA management review of the UPF also noted that continued demonstration and testing of UPF technologies is still necessary. The review stated that, because current operations in the Y-12 plant are expected to continue for over a decade longer, there appears to be a significant opportunity to demonstrate and test new technologies in an integrated fashion in the existing facility prior to installing them in the new facility. The review also noted that, if some technologies do not work as intended, it is not clear whether the current UPF design can accommodate the only identified alternative—reverting to existing technologies. Furthermore, it noted that even with significant additional UPF investment, modifying the UPF’s design could further delay the project. In such an event, the review concluded that continued operation of existing facilities at the Y-12 plant is NNSA’s only strategy for addressing such delays.
According to NNSA officials and an independent study commissioned by NNSA, emerging changes in the composition and size of the nuclear weapons stockpile as a result of changes in the nation’s nuclear strategy or a proposed arms treaty with Russia should have relatively minor effects on the UPF project. The UPF’s design is based on ensuring the facility has (1) sufficient capability—the space and equipment necessary to process enriched uranium and to produce the specific components for each type of weapon in the stockpile; and (2) sufficient capacity—the space and equipment necessary to produce the required quantities of components for the stockpile. As such, the elimination of a particular weapon type from the stockpile could eliminate some capability requirements in the UPF’s design. Similarly, a reduction in the total number of weapons in the stockpile could reduce some capacity requirements in the UPF’s design. Changes in the composition and size of the stockpile could occur as a result of changes in the nation’s nuclear strategy. Specifically, the April 2010 Nuclear Posture Review—the third comprehensive assessment of U.S. nuclear policy and strategy since the end of the Cold War, conducted by the Secretary of Defense in consultation with the Secretaries of State and Energy—provides a roadmap for implementing the President’s agenda for reducing nuclear risks and describes how the United States will reduce the role and numbers of nuclear weapons in the nation’s nuclear security strategy, among other things. For example, the review recommended studying the feasibility of using W-78 warheads, which are currently used on intercontinental ballistic missiles, on submarine-launched ballistic missiles. If this occurs, existing warheads used on submarine-launched ballistic missiles could be eliminated from the stockpile.
According to the review, implementing the steps outlined in the report to reduce the role and numbers of nuclear weapons will take years and, in some cases, decades to complete. In addition, the New Strategic Arms Reduction Treaty (New START) signed in April 2010 by the leaders of the United States and Russia would, if ratified, reduce the number of deployed strategic warheads from about 2,200 to 1,550. This treaty would replace the now-expired 1991 START I treaty and supersede the 2002 Strategic Offensive Reductions Treaty—also known as the Moscow Treaty—which expires in 2012. Further decreases in the size of the stockpile beyond those resulting from the New START treaty may also be possible. For example, the Nuclear Posture Review recommended a follow-on analysis to set goals for further warhead reductions. NNSA officials told us that changes in the composition and size of the nuclear weapons stockpile should have relatively minor effects on the UPF project. Specifically, NNSA officials told us that they cooperated closely with DOD during the development of the Nuclear Posture Review and that several changes resulting from the review have already been incorporated into the UPF design. In particular, NNSA recently revised its primary project requirements document to accommodate expected changes in the composition and size of the nuclear weapons stockpile resulting from the Nuclear Posture Review and has already begun work to modify the UPF design to incorporate these changes. NNSA officials told us that changes made as a result of the close collaboration with DOD have helped to mitigate negative impacts on the UPF project. In addition, while NNSA has not formally studied the potential impact on the UPF if specific nuclear weapon types were eliminated, NNSA officials told us that such changes would likely not eliminate the need for capabilities currently designed into the UPF.
Specifically, they said that if a warhead type were eliminated from the stockpile, the UPF’s capabilities to produce a particular component for that specific warhead could potentially be eliminated from the project design. According to NNSA officials, however, because many of the UPF’s capabilities will be used for common uranium chemical processing and component production operations, they are not limited to producing components for only one type of warhead. As a result, eliminating one type of warhead from the nuclear stockpile would not necessarily result in the elimination of a specific capability from the UPF’s design because that capability could be needed for producing a wide range of other warhead types. For example, NNSA officials stated that replacing existing submarine-launched ballistic missile warheads with the W-78 intercontinental ballistic missile warhead would not significantly affect the UPF’s design because this action would be unlikely to eliminate the need for equipment that is already planned to be installed in the UPF. Moreover, an independent study commissioned by NNSA examining the UPF’s space and major equipment needs concluded that changes in the size of the stockpile would result in relatively little change to the UPF’s space and equipment design plans. The study stated that establishing sufficient capability to meet minimum stockpile composition requirements—the ability to process enriched uranium and produce components for at least one of each weapon type in the stockpile—accounts for about 90 percent of the project’s planned space and major equipment. Specifically, establishing minimum capabilities to, among other things, recover and process enriched uranium; produce, assemble, and dismantle nuclear weapons components; and produce fuel for naval nuclear reactors accounts for 91 percent of the facility’s space and 89 percent of the UPF’s major equipment.
Only 9 percent of the UPF’s space and 11 percent of the facility’s major equipment are needed to ensure sufficient capacity to produce the necessary quantities of components to meet the requirements of the nuclear weapons stockpile. In other words, once the minimum capability is established, the overall impact on the project of modifying capacity to respond to changes in the size of the stockpile should be relatively minor. NNSA officials told us that adding or subtracting capacity can be addressed to a large degree by simply adding or subtracting work shifts on existing equipment. When completed, the UPF will play an important role in ensuring the continued safety and reliability of the U.S. nuclear weapons stockpile. By replacing old, deteriorating, and high-maintenance facilities at the Y-12 plant, the UPF offers NNSA an opportunity to improve efficiency, save costs, and reduce hazards faced by workers at the plant. Because of its importance and given the size, scope, and expense of the project, it is critical that NNSA and Congress have accurate estimates of the project’s costs and schedules. However, cost increases and potential schedule delays raise concerns about NNSA’s ability to construct the facility within its cost and schedule goals. In particular, NNSA’s lack of a high-quality cost estimate for the project and its inability to consistently request and obtain sufficient project funding are consistent with the problems we discussed in our prior reports on DOE’s difficulties in contract and project management, as well as the findings of DOE’s own root cause analysis of this issue. NNSA is taking steps to provide independent assurance of the accuracy of its cost estimates for the UPF project. However, although DOE is developing a draft cost estimating policy, NNSA lacks guidance for reconciling differences between the results of independent cost estimates and other project cost estimates.
Moreover, NNSA approved an initial project cost range immediately after a 2007 technical review warned of a disconnect between the UPF project’s funding requirements and NNSA’s future years’ spending plan, and it then requested $200 million less in fiscal year 2011 than the UPF project estimated it needed. These actions raise concerns that NNSA is not placing sufficient high-level management focus on ensuring that UPF’s cost and schedule estimates, and the associated funding plans these estimates are based upon, are consistent with NNSA’s broader plans for funding the nation’s nuclear weapons complex. Managing a construction project of this type—particularly one that relies on several new or experimental technologies—is inherently challenging, and it is encouraging that NNSA is taking steps to manage the development of these technologies. For example, NNSA’s early use of TRLs has already proven helpful in its efforts to mature these technologies. However, we are concerned because NNSA does not expect to achieve optimal assurance, as defined by best practices, that all 10 of these technologies will work as intended before key project decisions are made. Furthermore, because DOE’s guidance for using TRLs is inconsistent with our prior recommendations as well as best practices followed by other federal agencies, DOE may be making critical decisions with less confidence that new technologies will work as intended than other agencies in similar circumstances. As a result, NNSA may be forced to modify or replace some technologies, which could result in costly and time-consuming redesign work. Moreover, Congress may not be aware that NNSA may be making critical decisions to proceed with construction projects without first ensuring that new technologies reach the level of maturity called for by best practices. GAO is making five recommendations to improve NNSA’s management of project funding and technology associated with the UPF project.
To improve DOE’s guidance for estimating project costs and developing new technologies, we recommend that the Secretary of Energy take the following two actions: Include in the cost estimating policy currently being developed by DOE specific guidance for reconciling differences, if any, between the results of independent cost estimates and other project cost estimates. Evaluate where DOE’s guidance for gauging the maturity of new technologies is inconsistent with best practices and, as appropriate, revise the guidance to ensure consistency or ensure the guidance contains justification why such differences are necessary or appropriate. To improve NNSA’s management of the UPF project, we recommend that the Secretary of Energy take the following three actions: Direct the Administrator of NNSA to ensure that UPF’s cost and schedule estimates, and the associated funding plans these estimates are based upon, are consistent with NNSA’s future years’ budget and spending plan prior to approval of the UPF’s performance baseline at critical decision 2. Direct the Administrator of NNSA to ensure new technologies being developed for the UPF project reach the level of maturity called for by best practices prior to critical decisions being made on the project. In the event technologies being developed for the UPF project do not reach levels of maturity called for by best practices, inform the appropriate committees and Members of Congress of any NNSA decision to approve a cost and schedule performance baseline or to begin construction of UPF without first having ensured that project technologies are sufficiently mature. We provided a draft of this report to NNSA for its review and comment. In its written comments, NNSA generally agreed with the report and our recommendations. 
NNSA stated that the UPF project is vitally important to the continued viability of NNSA’s nuclear missions and is a top priority in its strategic planning efforts to transform outdated nuclear weapons infrastructure into a smaller, more modern nuclear security enterprise. NNSA stated in its comments that its contractor has prepared an updated cost estimate that will be reflected in the President’s fiscal year 2012 budget request and that independent cost estimates are being prepared in support of upcoming critical decisions for the UPF project. In addition, NNSA stated that it will work with DOE’s Office of Engineering and Construction Management to ensure guidance on the reconciliation of cost estimates is incorporated in a new DOE cost estimating guide. Consistent with our recommendation, NNSA recognized in its comments the importance of having specific guidance on reconciling differences between the results of independent cost estimates and other project cost estimates. Regarding its development of new technologies for the UPF, NNSA stated in its comments that our report does not discuss the risk management process used for the UPF project to manage technology risks and the many other risks for a project of this complexity and duration. NNSA is incorrect on this point. Our draft report discussed a number of steps NNSA is taking to mitigate technology risks. For example, our draft report noted that NNSA developed a technology maturation plan in early 2010 to track technology development and engineering activities needed to bring the agile machining technology to TRL 6. NNSA also noted that TRL 6, as used by the UPF project in accordance with DOE guidance, has been judged to be an appropriate level of assurance that the technologies will work as intended when the final design of the project is complete and construction is ready to begin. 
Nevertheless, as our draft report noted, DOE’s guidance on the use of TRLs is inconsistent with best practices used by DOD and with our previous recommendations with regard to technology readiness at the start of facility construction. Specifically, DOD recommends that technologies reach TRL 7—the level where a prototype is demonstrated in an operational environment—prior to beginning its production and deployment phase, or the equivalent of beginning construction on a DOE project. Similarly, we have previously recommended that DOE construction projects demonstrate TRL 7 or higher before construction. Reaching this level indicates that the technology prototype has been demonstrated in an operating environment, has been integrated with other key supporting subsystems, and is expected to have only minor design changes. However, DOE’s guidance does not require technologies to advance from TRL 6 to TRL 7 between the approval of a formal cost and schedule baseline and the beginning of construction. Our recommendation that DOE evaluate its guidance to ensure conformance with best practices is intended to address these inconsistencies. NNSA also provided technical comments that we incorporated in the report as appropriate. NNSA’s written comments are presented in appendix III. We are sending copies of this report to the appropriate congressional committees; Secretary of Energy; Administrator of NNSA; Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Our objectives were to (1) assess the National Nuclear Security Administration’s (NNSA) estimated cost and schedule for constructing the Uranium Processing Facility (UPF) at the Y-12 National Security Complex in Oak Ridge, Tennessee; (2) determine the extent to which the UPF will use new, experimental technologies and any risks to the project’s cost and schedule of replacing the existing, proven technologies; and (3) determine the extent to which emerging changes in the stockpile could affect the UPF project. To assess NNSA’s estimated cost and schedule for constructing the UPF, we visited the Y-12 plant and toured existing facilities as well as the proposed location of UPF. We also reviewed NNSA and contractor documents describing the project’s cost and schedule estimates, budget documents, recent design-related cost and schedule performance, and documents potentially showing cost and schedule implications for the future. We also interviewed officials at NNSA’s Y-12 Site Office and NNSA’s contractor for the Y-12 plant—Babcock & Wilcox Technical Services Y-12, LLC. To determine whether cost increases have occurred to date, we compared initial estimates for key activities, such as project engineering and design, with current estimates. We also obtained and reviewed NNSA documents describing the events that contributed to the cost increases, a Department of Energy (DOE) order on project management, and a draft DOE order on cost estimating. We also used our January 2010 report that evaluated the UPF’s cost estimates for compliance with industry cost estimating best practices. We also obtained information on the independent cost estimate DOE’s Office of Cost Analysis is conducting on the UPF project. Because NNSA’s design of the UPF is less than halfway completed and because it has not yet established a formal cost and schedule performance baseline, current cost estimates are still considered to be preliminary and subject to change. 
Given this limitation, our analysis is meant to provide context on the condition of the current, pre-baseline cost and schedule estimate and to describe actions underway and planned to ensure the credibility of the formal cost and schedule performance baseline currently being developed. To determine the extent to which UPF will use new, experimental technologies and any risks to the project’s cost and schedule of replacing the existing, proven technologies, we determined which critical technologies NNSA plans to use in UPF that are new or experimental. We visited the Y-12 plant to observe research and development activities associated with the technologies and reviewed agency and contractor documents, including NNSA technology readiness reports and an independent study examining technology-related project risks. In addition, we interviewed key NNSA and Y-12 plant officials responsible for developing UPF technologies. To determine the extent to which NNSA was using industry best practices to ensure that new technologies will work as intended, we used best practices previously identified in our prior work and used by other federal agencies. Specifically, best practices call for using a systematic method—Technology Readiness Levels (TRLs), developed by the National Aeronautics and Space Administration (NASA) and used by other federal agencies such as the Department of Defense (DOD)—to determine the extent to which new technologies are sufficiently mature at key project decisions. TRLs rate relative technology maturity on a scale from 1—the least mature—to 9—the most mature, where the technology has been demonstrated to work as intended in an operational environment. For each critical UPF technology, we obtained information from NNSA and UPF project officials on the current TRLs associated with each technology and compared them to the optimal TRLs identified by best practices and DOE guidance on the use of TRLs.
For technologies that are not expected to reach the optimal TRLs identified by best practices or DOE guidance, we obtained information on NNSA’s risk mitigation plans and its time frames for continuing research and development of the technologies. We also discussed with NNSA and UPF project officials the challenges that have been experienced or that they expect to encounter in the future. Finally, we compared NNSA’s technology risk assessments with independent studies evaluating the maturity of planned UPF technologies. To determine the extent to which emerging changes in the stockpile could affect the UPF project, we visited the Y-12 plant and reviewed agency and contractor documents describing the key factors NNSA considered in developing the UPF’s design in order to meet nuclear weapons stockpile requirements. In addition, we toured enriched uranium processing and nuclear weapons component facilities. We obtained the April 2010 Nuclear Posture Review issued by DOD and reviewed the proposed New Strategic Arms Reduction Treaty (New START) that was signed by the United States and Russia in April 2010. We also interviewed key NNSA and contractor officials to understand how changes in the composition and size of the nuclear weapons stockpile might affect the UPF’s design. To ensure the reliability of the information we obtained from the UPF project officials, we obtained an independent perspective on the UPF’s design through discussions with officials at the Los Alamos National Laboratory and Lawrence Livermore National Laboratory. These two nuclear weapons laboratories design the enriched uranium components that are currently produced at Y-12 and will be produced at the UPF. We also reviewed an independent study commissioned by NNSA examining the UPF’s space and major equipment needs.
We met with the study’s principal author and discussed the study’s findings to determine how UPF’s design is integrated with nuclear weapons stockpile requirements and how emerging changes in the stockpile could affect the UPF project. We conducted this performance audit from November 2009 through October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The nine Technology Readiness Levels, from TRL 1 (least mature) through TRL 9 (most mature), are characterized as follows:

TRL 1: Research to prove feasibility. No components and no integration; desktop, “back of envelope” environment.

TRL 2: Research to prove feasibility. No components; paper studies indicate components ought to work together. Academic environment. The emphasis here is still on understanding the science but beginning to think about possible applications of the scientific principles.

TRL 3: Research to prove feasibility. No system components, just basic laboratory research equipment to verify physical principles. No attempt at integration; still trying to see whether individual parts of the technology work. Lab experiments with available components show they will work. Uses of the observed properties are postulated and experimentation with potential elements of the subsystem begins. Lab work to validate pieces of technology without trying to integrate. Emphasis is on validating the predictions made during earlier analytical studies so that we’re certain that the technology has a firm scientific underpinning.

TRL 4: Demonstrate technical feasibility and functionality. Ad hoc and available laboratory components are surrogates for system components that may require special handling, calibration, or alignment to get them to function. Not fully functional but representative of a technically feasible approach. Available components assembled into a subsystem breadboard. Interfaces between components are realistic. Tests in controlled laboratory environment. Lab work at less than full subsystem integration, although starting to see if components will work together.

TRL 5: Demonstrate technical feasibility and functionality. Fidelity of components and interfaces is improved from TRL 4. Some special purpose components combined with available laboratory components. Functionally equivalent but not of same material or size. May include integration of several components with reasonably realistic support elements to demonstrate functionality. Fidelity of subsystem mock-up improves (e.g., from breadboard to brassboard). Integration issues become defined. Laboratory environment modified to approximate operational environment. Increases in accuracy of the controlled environment in which it is tested.

TRL 6: Demonstrate applicability to intended project and subsystem integration (specific to intended application in project). Subsystem is a high-fidelity functional prototype, very near the same material and size as the operational system. Probably includes the integration of many new components and realistic supporting elements/subsystems if needed to demonstrate full functionality. Partially integrated with existing systems. Components are functionally compatible and very near the same material and size as the operational system. Component integration into the system is demonstrated. Relevant environment inside or outside the laboratory, but not the eventual operating environment. The testing environment does not reach the level of an operational environment, although moving out of the controlled laboratory environment into something more closely approximating the realities of the technology’s intended use.

TRL 7: Demonstrate applicability to intended project and subsystem integration (specific to intended application in project). Prototype improves to preproduction quality. Components are representative of project components (material, size, and function) and integrated with other key supporting elements/subsystems to demonstrate full functionality. Accurate enough representation to expect only minor design changes. Prototype not integrated into the intended system but onto a surrogate system. Operational environment, but not the eventual environment. Operational testing of system in representational environment. Prototype will be exposed to the true operational environment on a surrogate platform, demonstrator, or test bed.

TRL 8: Applied/integrated into intended project application. Components are the right material, size, and function, compatible with the operational system. Subsystem performance meets intended application and is fully integrated into the total system. Demonstration, test, and evaluation completed. Demonstrates system meets procurement specifications. Demonstrated in eventual environment.

TRL 9: Applied/integrated into intended project application. Components are successfully performing in the actual environment, with proper size, material, and function. Subsystem has been installed and successfully deployed in project systems. Operational testing and evaluation completed. Demonstrates that the system is capable of meeting all mission requirements.

In addition to the contact named above, Ryan T. Coles, Assistant Director; John Bauckman; Virginia Chanley; Don Cowan; James D. Espinoza; Jonathan Kucskar; Alison O’Neill; Christopher Pacheco; and Tim Persons made key contributions to this report. | Built in the 1940s and 1950s, the Y-12 National Security Complex, located in Oak Ridge, Tennessee, is the National Nuclear Security Administration's (NNSA) primary site for enriched uranium activities. Because Y-12 facilities are outdated and deteriorating, NNSA is building a more modern facility--known as the Uranium Processing Facility (UPF).
NNSA estimates that the UPF will cost up to $3.5 billion and save over $200 million annually in operations, security, and maintenance costs. NNSA also plans to include more advanced technologies in the UPF to make uranium processing and component production safer. GAO was asked to (1) assess NNSA's estimated cost and schedule for constructing the UPF; (2) determine the extent to which UPF will use new, experimental technologies, and identify resultant risks, if any; and (3) determine the extent to which emerging changes in the nuclear weapons stockpile could affect the UPF project. To conduct this work, GAO reviewed NNSA technology development and planning documents and met with officials from NNSA and the Y-12 plant. The UPF project costs have increased since NNSA's initial estimates in 2004 and construction may be delayed due to funding shortfalls. NNSA's current estimate prepared in 2007 indicates that the UPF will cost between $1.4 and $3.5 billion to construct--more than double NNSA's 2004 estimate of between $600 million and $1.1 billion. In addition, costs for project engineering and design, which are less than halfway completed, have increased by about 42 percent--from $297 to $421 million--due in part to changes in engineering and design pricing rates. With regard to the project's schedule, NNSA currently estimates that UPF construction will be completed as early as 2018 and as late as 2022. However, because of a funding shortfall of nearly $200 million in fiscal year 2011, NNSA officials expect that the UPF will not be completed before 2020, which could also result in additional costs. NNSA is developing 10 new technologies for use in the UPF and is using a systematic approach--Technology Readiness Levels (TRL)--to gauge the extent to which technologies have been demonstrated to work as intended. 
Industry best practices and Department of Energy (DOE) guidance recommend achieving specific TRLs at critical project decision points--such as establishing a cost and schedule performance baseline or beginning construction--to give optimal assurance that technologies are sufficiently ready. However, NNSA does not expect all 10 new technologies to achieve the level of maturity called for by best practices before making critical decisions. For example, NNSA is developing a technology that combines multiple machining operations into a single, automated process--known as agile machining--but does not expect it to reach an optimal TRL until 18 months after one of UPF's critical decisions--approval of a formal cost and schedule performance baseline--is made. In addition, DOE's guidance for establishing optimal TRLs prior to beginning construction is not consistent with best practices or with GAO's previous recommendations. As a result, 6 of 10 technologies NNSA is developing are not expected to reach optimal TRLs consistent with best practices by the time UPF construction begins. If critical technologies fail to work as intended, NNSA may need to revert to existing or alternate technologies, possibly resulting in changes to design plans and space requirements that could delay the project and increase costs. Changes in the composition and size of the nuclear weapons stockpile could occur as a result of changes in the nation's nuclear strategy, but NNSA officials and a key study said that the impact of these changes on the project should be minor. For example, the New Strategic Arms Reduction Treaty signed in April 2010 by the leaders of the United States and Russia would, if ratified, reduce the number of deployed strategic warheads from about 2,200 to 1,550. According to NNSA officials, NNSA and DOD have cooperated closely and incorporated key nuclear weapons stockpile changes into UPF's design.
Also, an independent study found that most of the UPF's planned space and equipment is dedicated to establishing basic uranium processing capabilities that are not likely to change, while only a minimal amount--about 10 percent--is for meeting current stockpile size requirements. GAO is making five recommendations to, among other things, improve the UPF's cost and funding plans, ensure that new UPF technologies reach optimal levels of maturity prior to critical project decisions, and improve DOE guidance. NNSA generally agreed with the recommendations. |
On November 7, 2000, more than 100 million Americans cast their votes for various candidates and ballot issues across the country. This hallmark of American democracy unfolded in more than 10,000 local election jurisdictions, which used several different types of voting equipment, ranging from hand-counted paper ballots to electronic touchscreen voting machines. Staffing the precincts were some 1.4 million dedicated poll workers who opened the polls, logged in and assisted voters, closed polls, and in many cases, tabulated the votes cast at the precinct. These poll workers were assisted by an army of unseen election workers who answered phone calls from both voters and poll workers, delivered extra ballots to precincts, replaced or repaired broken voting machines, tabulated absentee ballots, and compiled the election results from individual precincts within their jurisdiction. This highly decentralized, complex, and massive logistical effort made it possible for American citizens to participate in one of the most fundamental democratic traditions—that of eligible citizens casting their votes for candidates of their choice. The controversy surrounding the presidential vote in the November 2000 election cast America’s election system in a new and revealing light, spotlighting issues such as the accuracy of voter registration lists; the procedures used to accept or disqualify absentee ballots for counting; the variety of ways elections are administered across and within states; the widely varying types of voting equipment used to cast and count ballots; and the many different methods of determining voter intent when voters improperly or incompletely marked their ballots. Several congressional leaders, congressional committees, and Members of Congress asked us to review our nation’s election systems. Accordingly, we focused on issues that may affect the ability of eligible U.S. citizens to cast their ballots in private and have those ballots counted accurately.
This capping report draws on a considerable body of work recently done by GAO on election systems. We address three main issues that Congress may wish to keep in mind as it approaches election reform. First, we examine the division of federal and state authority to conduct elections and the resulting variation among election jurisdictions. Second, we describe the challenges that election officials face as they work with the people, processes, and technology involved in administering our nation’s election systems. And third, we suggest four criteria that Congress could use as it weighs the merits of various reform proposals. Our work on election systems, which is contained in this capping report and six separate reports, took us across the United States and around the world as we employed a variety of methods to answer Congress’ questions. We conducted a detailed analysis of relevant constitutional provisions, federal statutes, and federal court decisions as well as state statutes and regulations on selected election issues. We met with and reviewed documents provided by local election officials in 41 election jurisdictions in 22 states and met with officials at the Department of State, the Department of Defense, the Federal Election Commission (FEC), and the National Conference of State Legislatures. We surveyed District of Columbia and state election directors. Moreover, we used a mail survey, a telephone survey, and interviews with local election officials to obtain information about the election process that would generally be representative of the more than 10,000 local election jurisdictions in the United States. We also visited 585 polling places and met with embassy and military personnel abroad and overseas citizens as well as with manufacturers and testers of voting equipment.
And finally, we reviewed documents provided by state and local election officials and voting equipment manufacturers and testers, and obtained data on voting methods and election results for the November 2000 election from Election Data Services, Inc., and other sources. Election administration in the U.S. is guided by federal and state laws, regulations, and policies. Within the broad framework established by the Constitution and federal statutes, each state sets its own requirements for conducting local, state, and federal elections within the state. Consequently, state requirements and processes vary considerably, and the U.S. election system comprises 51 separate election systems. In turn, states typically have decentralized this process so that the responsibility for administering and funding elections resides in thousands of local government election jurisdictions, creating even more variability among our nation’s election systems. Thus, in adopting federal election reforms, the degree of flexibility and the timeframes for implementing new initiatives need to be given careful consideration during deliberation and execution. The constitutional framework for elections contemplates both state and federal roles. With regard to the administration of federal elections, Congress has certain constitutional authorities over both congressional and presidential elections. Under its various constitutional authorities, Congress has passed legislation relating to the administration of federal elections in certain areas, including the timing of federal elections, voter registration, accessibility provisions for the elderly and persons with disabilities, and absentee voting. Congress has, however, been most active with respect to enacting prohibitions against discriminatory voting practices, which apply in the context of both federal and state elections.
The Voting Rights Act of 1965, for example, enforced the constitutional guarantee that no person be denied the right to vote on account of race or color. In addition, subsequent amendments to the Act expanded it to include protections for members of language minority groups, as well as other matters regarding voter registration and procedures. Within the broad framework established by the Constitution and federal statutes, each state sets the requirements for conducting local, state, and federal elections within the state. For example, states regulate such aspects of elections as ballot access, registration procedures, absentee voting requirements, establishment of voting places, provision of election day workers, and counting and certifying the vote. The states, in turn, have typically delegated responsibility for administering and funding state election systems to the thousands of local election jurisdictions—more than 10,000 nationwide—creating even more variability among our nation’s election systems. State election codes and regulations may be very specific or very general. In particular, some states have mandated statewide election administration guidelines and procedures that foster uniformity in the way local jurisdictions conduct elections. It is common for state provisions to furnish some guidance regarding voter registration requirements and procedures, absentee voting requirements and procedures, performance requirements for voting methods used within the state, establishment of polling places, provision of election day workers, and the count and certification of the vote. Other states have guidelines that generally permit local election jurisdictions considerable autonomy and discretion in the way they run elections (see figure 1).
The variability from state to state becomes even more pronounced at the local level, as local jurisdictions have used the flexibility afforded in state provisions to create local election systems that vary from county to county, and even, in some cases, within counties. This variation stems from several factors. One factor is the size of local election jurisdictions, which varies considerably. For example, one rural county has 208 registered voters, in contrast with a large, urban county, such as Los Angeles County, whose total number of registered voters exceeds that of 41 states. The complexity of preparing for and conducting an election in large jurisdictions is generally greater than in smaller jurisdictions. For example, a rural county with a few thousand voters who share the same language prints its ballot in one language. In contrast, a large, urban jurisdiction with a diverse population of 4 million registered voters prints its ballots in 7 different languages. This can also have an effect on the processes and type of voting equipment used. As illustrated in figure 2, the magnitude of other key administrative tasks in this large, urban jurisdiction is a thousand times larger than for the small jurisdiction. Variability can also be a consequence of local needs. For example, a jurisdiction with a large population segment that moves out of the location each year might opt for certain voter registration and voter education processes that reflect the need to address a large voter turnover. As a second example, a jurisdiction might use a certain type of voting equipment based on financial resource availability. Wealthier jurisdictions have had the resources necessary to modernize equipment, while others need to make do with what they have. Finally, variability can be a consequence of a jurisdiction’s perceived need to maintain voting traditions that have been in place for a long time.
In two jurisdictions we visited, election officials opted to replace lever machines with full-screen electronic voting machines because these machines use a ballot that most closely resembles the type of ballot voters were used to seeing on lever machines. This choice was possible because ballots did not have to be printed in more than two languages in either jurisdiction, so the ballots could fit on a single page. Variability among states and local jurisdictions was evident in each major stage of an election--voter registration, absentee and early voting, preparing for and conducting election day activities, and vote counting and certification. Some examples follow. For the November 2000 election, the FEC reported that nearly 168 million people, or about 82 percent of the voting age population, were registered to vote. Registering to vote is not a federal requirement, but in November 2000, all states except North Dakota required citizens to register before voting. At a minimum, every state and the District of Columbia required that a voter be a U.S. citizen, at least 18 years of age, and a resident. Additional requirements to vote, such as time in residence, varied. Due to variations in voter eligibility requirements, citizens with the same qualifications would be eligible to vote in some states but not in others, including (1) those who had completed their sentences after a felony conviction; (2) those who had been adjudged mentally incompetent; and (3) those who met all of the qualifications to vote but had not registered in accordance with prescribed timeframes. In November 2000, citizens had different opportunities for obtaining and successfully casting absentee ballots due to the differences in absentee and early voting requirements, administration, and procedures. All states allow some provision for absentee balloting; some, however, require a reason to vote absentee, as indicated in the figure below.
For the November 2000 election, about 1.4 million poll workers staffed polling places across the country on election day. Although poll workers are usually employed for only one day, the success of election administration largely hinges upon their ability to perform their jobs well. Depending on state law and the organization and traditions of the local jurisdiction, poll workers have different titles, levels of pay, training requirements, and responsibilities. Some poll workers are elected, some are appointed, and some volunteer. The levels of authority and responsibility that jurisdictions grant to poll workers also ranged from significant autonomy over the operation of the polling place, including final authority to interpret improper ballot markings, to limited discretion, with poll workers functioning primarily as clerks and facilitators who refer issues and problems back to elections headquarters. Jurisdictions followed various procedures on election day that created differences in the way elections were conducted. For example, to determine whether a citizen who appeared at the polls was eligible to vote, some jurisdictions required voters to identify themselves by stating their names and addresses to the poll workers, who also matched the signature on the voter application with the voter registration records. Other jurisdictions also required voters to present a valid photo identification card. In other jurisdictions, a hunting or fishing license was sufficient to verify one’s identity. Still other jurisdictions required no identification other than the voter stating his or her name. If a voter’s name did not appear on the list of registered voters, some jurisdictions accommodated these individuals by automatically giving them a provisional ballot, which could be counted if the voter’s eligibility was verified at a later time. Others did not.
Registered voters cast their ballots using one of five voting methods in the November 2000 election: hand-counted paper ballots and lever, punch card, optical scan, and Direct Recording Electronic (DRE) voting equipment. Punch card and optical scan equipment was the most widely used by registered voters, as figure 4 shows. Although the states have traditionally had broad authority to regulate and conduct elections, Congress also has broad authority to regulate federal elections, and, in particular, congressional elections. State and local election officials generally use the same people, processes, and technology to conduct local, state, and federal elections. Consequently, as a practical matter, congressionally directed changes in the conduct of congressional elections are likely to affect the administration of state and local elections. In addition, through the use of its spending power, Congress may encourage state action by attaching conditions to the receipt of federal funds. Our work found wide variations across the country, which have developed over time in response to local economic, demographic, political, and cultural traditions in each state and local jurisdiction. As might be expected given this variety, local election officials do not share a common perspective on election reform. For example, when asked their preferences for the use of federal funds, should they become available, local election officials identified a range of spending priorities, with no clear consensus on the top priority. Among these priorities were voter education, voting equipment, poll worker pay, and postage for mailing voting materials and printing ballots. Nor was there a consensus among local election officials on how involved the federal government should be in state and local election administration and reform.
About 27 percent of the local jurisdictions we surveyed supported uniform standards for election administration, and 30 percent supported updated federal standards for voting equipment. However, many of the jurisdictions we visited did not want federal funding for election administration if it meant sacrificing local autonomy. Some supported the concept of a federal clearinghouse for sharing information about election administration practices. We estimate that 54 percent of local election officials support federal funding to subsidize postage for materials mailed to voters, 42 percent support federal funding to help with the operational costs of elections, and over 38 percent support federal funding for voter education. Overall, our work suggests that a "one size fits all" approach may not be suitable for every aspect of election administration. In jurisdictions with a small number of voters, for example, hand-counted paper ballots may produce accurate and complete vote counts. Conversely, full-faced DRE equipment (this type of machine uses a large, single-page ballot) could not readily accommodate Los Angeles County's long ballot printed in 7 languages. Thus, the degree of flexibility afforded local jurisdictions in implementing any reform should be given careful consideration when deliberating specific reform initiatives. Historically, changes in election administration have been evolutionary rather than revolutionary. Both election officials and voters become accustomed to and comfortable with how elections are planned and conducted in their individual jurisdictions. Many of the underlying conditions associated with the variations are not likely to change in the short term. Large-scale, immediate changes in the use of people, processes, or technology will not necessarily change those conditions. Thus, appropriate time may be needed for state and local jurisdictions to determine how to implement changes effectively in their specific jurisdictions. 
The second principal finding that emerged from our work is that election officials face a range of challenges connected with all parts of the election system: people, process, and technology. The people involved in an election include election officials, legions of temporary poll workers, and voters. Processes guide people as they carry out their duties, such as registering voters and conducting the vote. Technology, such as voting equipment, provides tools for officials to administer elections and for voters to participate in them. We estimate that 57 percent of jurisdictions nationwide had major problems in one or more areas on election day. Our work identifies several challenges that can be categorized as primarily people, process, or technology-related issues. Although we classify problems as falling into one of these three broad categories, what becomes apparent is that a problem in any one category is related in some way to another part of the system. For example, people-related challenges impinge upon process issues, process challenges affect technology, and technology challenges affect people issues. Specifically, several challenges emerged from our work on elections. Issues with recruiting and training poll workers, educating voters, and addressing needs of voters with disabilities posed major people-related challenges in the November 2000 elections. Election officials confronted process-related challenges that included maintaining accurate voter registration lists, completing and processing absentee ballots, and canvassing the vote. Technology-related challenges facing our nation’s election systems included assessing why voting equipment may fail to meet jurisdictions’ needs, collecting useful performance data to make informed investment decisions, and addressing the need for federal leadership in updating and implementing standards for voting equipment.
Below we highlight the main challenges identified through discussions with election officials and our analysis. Issues with recruiting and training poll workers, educating voters, and addressing needs of voters with disabilities posed major challenges in the November 2000 elections. We estimate that over half of the jurisdictions in the United States found it somewhat or very difficult to recruit and train a sufficient number of poll workers. The major challenge many jurisdictions identified regarding voter education was finding sufficient funding. On the basis of our mail survey, we estimate that over a third of the jurisdictions nationwide believed that the federal government should provide monetary assistance for voter education programs. Limited availability of accessible buildings and other constraints create obstacles to election officials’ efforts to make polling places accessible to voters with disabilities. We estimate that, from the parking area to the voting booth, 16 percent of all polling places have no potential impediments, 56 percent have one or more potential impediments but offer curbside voting, and 28 percent have one or more potential impediments and do not offer curbside voting. Many election officials told us recruiting and training a sufficient number of poll workers with appropriate skills to open, operate, and close polling places was a major challenge on election day. Factors that can work in concert to complicate an already difficult task for election officials include an aging work force, low pay, and little or no poll worker training.

Recruiting Enough Poll Workers Is Difficult for Many

On the basis of our mail survey, we estimate that 51 percent of jurisdictions nationwide had a somewhat or very difficult time getting enough poll workers. For these jurisdictions, obtaining enough poll workers (27 percent) was the most frequently identified major problem they faced.
Poll Workers Drawn From Aging Labor Pool

Many people who are available for occasional full-day employment as poll workers are older, perhaps retired, and likely attracted to the work by something other than the pay, because poll workers are generally paid low wages. For example, an election official in a small jurisdiction said that over 70 percent of their poll workers are over 65 years old. One official remarked that volunteering is characteristic of an older generation. Several officials echoed the statement of an official in a small jurisdiction that “[o]ur election workforce is aging and we are having difficulty recruiting younger workers.”

Low Pay, Long Hours May Discourage Younger Workers

The pool of potential poll workers may be shrinking because poll worker pay is inadequate to attract employed or more skilled workers, and poll workers often are required to complete a 15- to 18-hour day. One election official reported that “[s]ince compensation for this job is only $80 to $135 per day, depending upon the election district, it is not sufficient to attract a younger workforce.” The length of the day is a complaint of many poll workers and may even pose an obstacle for younger workers. Another official said that “[w]hat they (the election judges) used to consider as a fun and interesting day and an American duty has become ‘heavy duty’.” In one large jurisdiction, election officials asked poll workers to provide feedback on their experience in the November 2000 election. One poll worker responded that it was “[a]bsolutely, positively too long a day. I am 26 years old and very athletic and still went home at night and fell asleep with my clothes on.
With the majority of helpers either older or disabled, I have no idea how they survived the day.”

Poll Workers With Specialized Skills Are Often Difficult to Recruit

Another problem is addressing the specialized labor needs unique to particular polling sites, according to several local election officials. Some polling places required poll workers to have specific language skills. Finding qualified bilingual workers, specifically workers fluent in Asian languages, is one very large jurisdiction’s biggest recruiting problem. Some places had trouble finding poll workers who are able to learn the technical skills necessary to operate voting equipment. Officials in one very large jurisdiction said they have no scarcity of people willing to serve, but finding people to meet specialized needs is the issue. Obstacles to recruiting poll workers may overlap. One election official wrote that “[i]t is increasingly difficult to find folks to work for $6 an hour. We are relying on older retired persons—many who can’t/won’t keep up with changes in the technology or laws. Many of our workers are 70+.”

Minimal Training May Not Have Adequately Prepared Poll Workers for Election Day

We estimate that 87 percent of jurisdictions nationwide provided some training for poll workers. Poll worker training courses generally span a few hours’ time and focus on the key processes that poll workers should follow, including how to operate voting equipment. Although most of the jurisdictions we visited required some poll worker training, election officials cited instances where poll workers who had attended training still either did not understand what they were to do or chose not to follow specific instructions on how to run the polls. For example, to handle unregistered voters in one very large jurisdiction, poll workers were instructed to provide a provisional ballot to those voters with questionable credentials.
However, some poll workers failed to follow these rules and turned away some voters from the polling place. Poll worker training in the sites we visited rarely included discussion of the interpersonal skills that poll workers should employ when dealing with frustrated citizens or with each other. Another people-related challenge concerns educating voters about particular processes, such as voter registration and how to operate voting equipment. Jurisdictions place varying degrees of emphasis on educating voters about election processes and procedures. A lack of funds is the primary challenge that election officials said they face in expanding their efforts to educate voters about elections. Further, spending for voter education is considered discretionary. Some local officials must first take care of mandatory items such as equipment, supplies, poll worker salaries, and polling places. Many officials said that they see voter education as an area where federal funds could be particularly helpful. On the basis of our mail survey, we estimate that about 38 percent of jurisdictions nationwide believed that the federal government should provide monetary assistance for voter education programs.

Voter Education Needed Regarding Processes, Use of Voting Equipment

How well a jurisdiction educates voters about election processes and how to use voting equipment can affect how well an election system functions. For example, a number of problems associated with processes, such as requesting and completing absentee ballots and registering to vote, are precipitated by voters failing to provide complete information or to meet deadlines. Voter education can be used to help remedy some of these difficulties. How well jurisdictions educate voters on the use of voting equipment can affect how easy voters find the equipment to use and the integrity of the vote. Jurisdictions provide various types of voter education materials to help voters correctly use voting equipment.
How frequently the voting equipment counts votes as intended by voters is a function not only of equipment design, but of how well poll workers instructed and guided voters, how well voters followed applicable instructions, and what type of assistance was available to help voters who had questions or made mistakes in voting. To illustrate this point, officials from a very large jurisdiction stated that in the November 2000 election 1,500 voters had inserted their punch cards in the recording device upside down, thus causing their votes to be inaccurately recorded. Similarly, at a small jurisdiction that we visited where optical scan equipment was used, officials reported that some voters incorrectly marked the ovals or used a nonreadable pen to mark the ballot, resulting in partially read ballots. In a medium-sized jurisdiction that we visited, voters selected a candidate on the optical scan ballot and then wrote the candidate’s name in the write-in section of the ballot, thus overvoting (making more choices than are permitted per contest) and spoiling the ballot. The election officials stated that they believed that this misunderstanding contributed to the jurisdiction’s almost 5-percent overvote rate. In each of these cases, the way that the voter completed the ballot caused the vote to be recorded inaccurately, even though the voting equipment correctly counted the votes as recorded. A third people-related challenge surfaced during the November 2000 election—making polling places accessible to voters with disabilities or providing alternative voting methods such as curbside voting.
The extent to which any given feature may prevent or facilitate access to a polling place is unknown; however, based on our onsite work during the November 2000 election, we estimate that, from the parking area to the voting room, 16 percent of all polling places have no potential impediments, 56 percent have one or more potential impediments but offer curbside voting, and 28 percent have one or more potential impediments and do not offer curbside voting (see fig. 6). These potential impediments would primarily affect individuals with mobility impairments and occur most often on the route from the parking area to the building or at the entrance to the polling place. Inside the voting room, the types and arrangement of voting equipment used may also pose challenges for people with mobility, vision, or dexterity impairments. A number of efforts have been made by states and localities to improve voting accessibility for people with disabilities, such as modifying polling places, acquiring new voting equipment, and providing curbside voting. State and county election officials we surveyed cited a variety of challenges to improving access, including limited availability of accessible facilities and funding constraints at the local level. Some disability advocates believe that alternative voting methods and accommodations should not be viewed as permanent solutions for inaccessible polling places because these remedies do not provide the same opportunity for voting afforded the general public, that is, in a polling place and in private. Election officials confronted process-related challenges that included maintaining accurate voter registration lists, completing and processing absentee ballots, and interpreting voter intent. A number of jurisdictions reported they had trouble maintaining accurate voter registration lists because of the NVRA. This difficulty, in turn, may have exacerbated problems related to qualifying voters at the polls on election day. 
As the number of voters at home and abroad who cast absentee ballots grows, challenges related to absentee voting are increasing. Election officials reported difficulties with processing millions of absentee ballots cast in the weeks and days before election day and noted the added financial burden of processing these ballots. Absentee ballots from military personnel and overseas citizens were disqualified at a higher rate than those of voters at home, which presents another challenge. And finally, interpreting improperly marked ballots to determine the voter’s intent was a challenging process because local jurisdictions often lack specific, written guidance and the task itself can be inherently difficult. We estimate that about 46 percent of jurisdictions nationwide had problems with the National Voter Registration Act (NVRA or motor voter) during the November 2000 election. As one election official told us: “You can ask any county clerk in the state and they will tell you that the biggest problem is motor voter. Residents can register at the welfare office, the health department, the motor vehicle authorities, and they do, time and again. This results in tons of registrations which are costly and time-consuming to sort through and check against records.” Inaccurate registration lists affect other parts of the election system, especially qualifying voters on election day. Officials reported that voters appeared at the polls on election day claiming to have registered to vote through the motor vehicle authority, but their applications never arrived in the elections office. These individuals were sometimes turned away from the polls. Dealing with voter eligibility issues can be a major problem for some jurisdictions. We estimate that 30 percent of jurisdictions considered dealing with unregistered voters at the polls to be a major problem.
All 50 states and the District of Columbia allowed some form of absentee or early voting to increase voter access, convenience, and participation, and the number of Americans voting absentee is growing. Using Census data, we estimate that for the November 2000 election about 14 percent of voters nationwide cast their ballots before election day. Of these voters, about 73 percent used mail ballots and 27 percent voted in person, as seen in figure 7. This represents an increase from the 1996 presidential election, in which a total of about 11 percent of voters cast ballots before election day. We estimate that nationwide local election officials received about 14.5 million applications for mail-in absentee ballots (plus or minus 3 million) for the November 2000 election. As more voters, at home and abroad, cast absentee ballots, officials from several local election jurisdictions reported that the costs and workload involved in reviewing the volume of ballots have grown. Each of the millions of mail-in absentee ballots received by local election officials had to be qualified before being counted. Officials from one very large jurisdiction stated that the sheer volume of mail-in ballots received creates a greater potential for errors.

Military and Overseas Citizens’ Absentee Ballots Disqualified at Higher Rates

Because military and overseas citizens’ absentee ballots are disqualified at higher rates than those of citizens voting absentee at home, processes for assisting military personnel and overseas citizens need to be improved. Although precise numbers are not available, we estimate that counties having a voting age population of less than 60,000 nationwide disqualified about 8 percent of ballots cast by military and overseas voters. In contrast, the ballot disqualification rate for civilian voters not living overseas was less than 2 percent.
While counties having a voting age population of more than 60,000 that responded to GAO's survey showed a similar pattern, the data were insufficient to make a national estimate. The survey showed that for all absentee ballots cast, almost two-thirds of the disqualified absentee ballots were rejected because the ballots arrived too late to be counted or the envelopes or forms accompanying the ballots were not completed properly. The figure below describes the forms that must be completed in order for a mail-in absentee ballot to be qualified.

Processes for Assisting Military and Overseas Citizens Need Improvement

The Uniformed and Overseas Citizens Absentee Voting Act of 1986 protects the right to vote by absentee ballot in federal elections for more than 6 million military and overseas citizens and recommends that states adopt a number of provisions that facilitate absentee voting by these populations. The Federal Voting Assistance Program, established within the Department of Defense (DOD), is responsible for implementing the act by informing U.S. citizens worldwide about their right to vote, fostering voting participation, and working with states to simplify the registration and absentee voting process. Also, the State Department works with DOD to provide voter assistance to overseas citizens. The extent and quality of federal voter assistance for military personnel and overseas citizens varied considerably in the November 2000 election. While the Federal Voting Assistance Program developed a number of useful tools for voters and some installations GAO visited had well-run programs providing assistance and information to potential voters, other installations did not meet DOD and service requirements. The variability in executing the program is due to incomplete service-level guidance that does not reflect DOD's directive, a lack of command support at some installations, and a lack of program oversight by some DOD components.
Finally, the State Department provided citizens abroad with a variety of useful assistance, according to overseas citizens and federal employees GAO spoke to, although both groups believed more outreach could be beneficial. Also, State Department Headquarters has not played an active role in sharing best practices and lessons learned or in overseeing the program. We recommend that the Secretaries of Defense and State improve (1) the clarity and completeness of service guidance, (2) voter education and outreach programs, (3) oversight and evaluation of voting assistance efforts, and (4) sharing of best practices. Processes for handling improperly marked ballots present a challenge for many election officials, especially when an election is close. Many states specifically require election officials to count ballots if the “intent of the voter” can be determined. Thirty-one states and the District of Columbia reported to us that they make some determination of voter intent. Voter intent issues arise with paper, optical scan, and punch card ballots, not when the ballots are marked properly for the type of ballot used, but when there are variations from proper marking. During the canvassing stage (when votes are counted and totals calculated), election officials are tasked with reviewing ballots that are not properly marked and sometimes required to determine how those voters intended to cast their votes. After the polls close and ballots are returned to election headquarters, workers canvass the votes, a process that entails reviewing all votes by precinct, resolving problem votes, and counting all valid votes. At this point, workers deal with ballots that are either unclearly or improperly marked. Ballots can be improperly marked in a variety of ways that differ according to the type of voting equipment being used in a jurisdiction.
Because with DRE and lever machines voters record the vote directly on the equipment rather than on a separate ballot, there is no opportunity for a mismarked ballot. Paper, punch card, and optical scan ballots, however, can be improperly marked. For example, on an optical scan ballot voters may have circled a candidate’s name, instead of completing the oval, box, or arrow next to the candidate’s name as illustrated in figure 9. Interpreting a mismarked ballot to determine the voter’s intent can be a challenging process. While states may instruct officials to determine the voter’s intent on mismarked ballots, states do not always provide guidance on how to do so. Our work indicates that nationwide about 30 percent of local jurisdictions had no instructions, either from the state or local jurisdiction, on how to interpret voter intent, for example, how to read stray marks on paper ballots or dimples or partially punched chads on punch card ballots. We estimate that about 15 percent of jurisdictions had instructions developed by the jurisdiction and 23 percent had both state and local written guidance. Developing processes to interpret a voter’s intent can be challenging, and local jurisdictions vary in how they approach this task. Processes for handling punch card ballots illustrate this point. Jurisdictions we visited reported various ways to handle problem punch card ballots. For example, in one jurisdiction, election officials told us that if the punch card ballot contained a dimple with a pinhole, employees put the original ballot over a pink (or duplicate) ballot and held it up to the light. Where they saw light, they punched.
The employee also turned over the ballot and looked for bumps, which indicated the voter inserted the ballot backwards. If a ballot contained bumps on the backside, the ballot could be duplicated. In another jurisdiction, a vote on a punch card consisted of any removed chad plus any chad that freely swung by one side. The person scanning the ballot inspected it for improperly punched chads by running the ballot through their fingers. In another jurisdiction, the ballot inspection teams were given a pair of tweezers and told to remove any chads remaining on the punch card. One jurisdiction used persons called “scanners” to go over the ballots before they were counted. Each ballot was inspected for improperly punched chad by running the ballot cards between the scanners’ fingers. Very loose chad was removed through this process. If a chad did not come off but swung freely by one side, it could be removed. Problem ballots, such as those that are unreadable because of incompletely removed punches or incorrect punches, can alter the counting results or create problems with the computer processing. Such ballots were given to “makeover scanners” to be remade. While problems related to voting equipment performance during the November 2000 election received a great deal of media attention, the performance of voting equipment is not only a function of the technology design itself. The people who interact with the technology and the processes governing this interaction can also affect whether voting technology meets the needs of a jurisdiction. As a result, assessing why voting equipment may not meet the needs of some jurisdictions can be difficult. Another challenge facing election officials involved obtaining reliable measures and objective data to make informed decisions about whether to invest in new voting equipment or to invest in measures to improve the performance of existing equipment, such as additional maintenance personnel.
Local jurisdictions do not always have the information they need to select the most appropriate investment option given their needs and resource constraints. Although 96 percent of local jurisdictions reported that they were satisfied with their voting equipment, fewer than half of them collected data on how well their equipment performed. This information is vital for jurisdictions considering modernizing their equipment. Another challenge relates to developing and maintaining updated standards for voting equipment. Although the FEC is in the process of updating voting equipment standards issued in 1990, responsibility for establishing, maintaining, and implementing up-to-date standards for voting equipment has not been explicitly assigned. As a result, the 1990 standards have become dated. Understanding a jurisdiction’s voting equipment needs and why voting equipment may not meet those needs can pose a challenge. In assessing whether voting equipment meets the needs of a jurisdiction’s user communities (both the voters and the officials who administer the elections), election officials must have reliable measures and objective performance data. When voting equipment does not meet the needs of a jurisdiction, officials must also understand the cause or causes of the problem before they can choose an appropriate solution, such as more voter education, increased training for election workers, or acquiring new equipment. These causes can be difficult to identify because the performance of voting equipment is not only a function of the technology design itself, but also of the people who interact with the technology and the processes governing this interaction. To illustrate this point, our survey of vendors showed little difference among the basic performance characteristics of DRE, optical scan, and punch card equipment.
However, when local election jurisdictions’ experiences with the equipment are considered, performance differences among voting equipment become more evident. These differences arise because a real-world setting—such as an election in which equipment is operated by actual voters, poll workers, and technicians—tends to result in performance that differs from that in a controlled setting (such as in the manufacturer’s laboratory). This difference demonstrates the importance of the effect of people and process on equipment performance.

While Some Voting Equipment Is Easier to Use, No Clear “Best Performer”

Figure 10 shows a relative comparison of certain characteristics—accuracy, ease of use, efficiency, and security—of the various types of voting equipment used in the November 2000 elections. The comparison reflects the results of our analysis of data provided by voting equipment vendors that responded to our survey and survey responses of 513 local election jurisdictions. With appropriate maintenance and proper operation, most types of equipment perform on a par with one another. Some voting technology is easier to use, thus eliminating some opportunities for voter error. Overall, our analysis of both the vendor and jurisdiction data showed that DREs are slightly easier to use and slightly more efficient than the other types of equipment. In the area of security, DRE and optical scan equipment are relatively equal, and in the area of accuracy, all equipment is relatively the same. The differences among voting equipment can be attributed, in part, to the differences in the equipment itself. However, they also can be attributed to the people who use the equipment and the rules or processes that govern its use. Further, all voting equipment is influenced by security, testing, maintenance, and cost issues, each of which also involves people and processes.
In addition, the accuracy of voting equipment (as measured by how reliably the equipment captures the voter’s intent) can be affected by the processes and procedures that govern how voters interact with the technologies. Differences in these procedures can have noticeable effects on the prevalence of undervotes (votes for fewer choices than permitted, such as not voting for president) and overvotes, for example. In particular, we found that some precinct-count optical scan voting equipment can be programmed to return a voter’s ballot if the ballot is overvoted or undervoted. Such programming allows the voter to make any changes necessary to ensure that the vote is recorded correctly. However, not all states allow this. For example, election officials in one Virginia jurisdiction stated that Virginia jurisdictions must accept ballots as cast.

Interaction Between People and Technology Affects Uncounted Votes

Our analysis showed that the type of voting equipment used (including equipment that allowed for error correction) explained a relatively small percentage of the total variation among jurisdictions in uncounted presidential votes. The state in which counties were located had more of an effect on the number of uncounted presidential votes than either a county’s voting equipment or demographic characteristics. Figure 11 shows the results of our analysis. Counties’ demographic characteristics also affected their percentages of uncounted presidential votes. Specifically, counties with higher percentages of minority residents tended to have higher percentages of uncounted presidential votes, while counties with higher percentages of younger and more educated residents tended to have lower percentages of uncounted presidential votes. Counties that used punch card equipment did not generally have higher percentages of minority, less educated, or lower income residents.
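The undervote and overvote measures used in this analysis follow mechanically from their definitions: fewer, or more, selections than a contest permits. Below is a minimal sketch of how a jurisdiction might tally these rates from its own ballot records (the data and function names are hypothetical illustrations, not drawn from any actual election system):

```python
# Sketch: tally undervotes and overvotes for one contest from ballot records.
# The ballot data and names below are hypothetical, for illustration only.

def tally_contest(selections_per_ballot, max_choices):
    """Count under- and overvote rates for one contest.

    selections_per_ballot: number of selections each ballot made in the contest.
    max_choices: maximum selections the contest permits (1 for most offices).
    """
    undervotes = sum(1 for n in selections_per_ballot if n < max_choices)
    overvotes = sum(1 for n in selections_per_ballot if n > max_choices)
    total = len(selections_per_ballot)
    return {
        "undervote_rate": undervotes / total,
        "overvote_rate": overvotes / total,
    }

# Example: a vote-for-one presidential contest on five ballots, where one
# ballot made no selection (undervote) and one made two (overvote).
rates = tally_contest([1, 0, 1, 2, 1], max_choices=1)
print(rates)  # {'undervote_rate': 0.2, 'overvote_rate': 0.2}
```

Collecting such per-contest counts is what makes possible the comparisons of undervote and overvote prevalence across equipment types and procedures described above.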
We found that the state in which counties are located had a greater effect on counties’ percentages of uncounted presidential votes than did counties’ voting equipment or demographic characteristics combined. State differences, which may have included such factors as statewide voter education efforts and state standards for determining what is a valid vote, accounted for 26 percent of the total variation in uncounted presidential votes across counties. County demographic characteristics accounted for 16 percent of the variation. Voting equipment, including the use of optical scan error correction technology, accounted for a total of about 6 percent of the variation in counties’ uncounted presidential votes. The largest percentages of uncounted presidential votes tended to occur in counties that used punch card equipment. Counties that used optical scan equipment with error correction had about 1.1 percentage points fewer uncounted presidential votes than did counties with punch card equipment. The remaining 52 percent of variation was due to unknown factors, such as whether a county switched to a new type of voting equipment or the number of inexperienced voters in a county. Looking back to the technology used in the November 2000 elections, our survey of jurisdictions showed that the vast majority of jurisdictions were satisfied with the performance of their respective technologies. However, this satisfaction was in most cases not based on hard data, but on the subjective impressions of election officials. While these impressions should not be discounted, informed decision-making regarding where to make the most appropriate investments, for example, in new equipment, training for election workers, or voter education, requires more objective data.
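Variance shares of the kind reported above (26 percent state, 16 percent demographics, about 6 percent equipment, 52 percent unexplained) can be produced by a nested-regression decomposition, in which groups of explanatory variables are added sequentially and each group is credited with the increase in explained variation. Below is a minimal sketch on synthetic data; the variable names, ordering, and model are illustrative assumptions, not GAO's actual statistical analysis:

```python
# Sketch: apportioning variation in an outcome (e.g., uncounted-vote rates)
# across groups of factors via nested regressions and incremental R-squared.
# All data here are synthetic; the model is illustrative, not GAO's.
import numpy as np

rng = np.random.default_rng(0)
n = 400
state = rng.integers(0, 5, n)        # which of 5 states a county is in
demog = rng.normal(size=n)           # a county demographic index
equip = rng.integers(0, 2, n)        # 1 = punch card, 0 = other equipment
y = 0.5 * state + 0.3 * demog + 0.2 * equip + rng.normal(size=n)

def r2(X, y):
    """R-squared of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Add state indicators first, then demographics, then equipment.
state_dummies = np.eye(5)[state][:, 1:]  # drop one level to avoid collinearity
r_state = r2(state_dummies, y)
r_state_demog = r2(np.column_stack([state_dummies, demog]), y)
r_full = r2(np.column_stack([state_dummies, demog, equip]), y)

shares = {
    "state": r_state,
    "demographics": r_state_demog - r_state,
    "equipment": r_full - r_state_demog,
    "unexplained": 1 - r_full,
}
print(shares)  # four nonnegative shares that sum to 1
```

Note that with this approach the share credited to each group depends on the order in which the groups are entered; a real analysis would need to justify that ordering.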
Acquiring new voting equipment is not the only investment option jurisdictions may consider, and, in some cases, may not be the most appropriate solution for jurisdictions that find their voting equipment does not meet their needs. Making wise technology investment decisions presents a challenge to our election systems. It is extremely important that election officials be able to define, measure, and evaluate voting equipment performance so that they may properly assess whether their current technology is meeting their needs. This information is also important as election officials consider the suitability of available technology options to get the best return on their investment if they choose to modernize their voting equipment. However, we found that about half of the jurisdictions did not collect actual performance data for the voting equipment that they used in the November 2000 election. Table 1 shows the percentage of jurisdictions that collected data on accuracy (which is one measure of performance) by type of voting equipment. Further, it is unclear to what extent jurisdictions have meaningful performance data. Of the local election jurisdictions we visited that stated that their voting equipment was 100-percent accurate, none was able to provide actual data to substantiate these statements. Similarly, the results of our mail survey indicate that only about 51 percent of jurisdictions nationwide collected data on undervotes, and about 47 percent of jurisdictions nationwide collected data on overvotes, for the November 2000 election. Jurisdiction election officials were nevertheless able to provide their perceptions about how the equipment performed. For example, our mail survey results indicated that 96 percent of jurisdictions nationwide were satisfied with the performance of their voting equipment during the November 2000 election.
These perceptions aside, a lack of performance data may limit jurisdictions’ abilities to select the most appropriate voting equipment that gives them the best return on their investment. Thus, without reliable performance data, were federal funds to be made available for the purchase of new voting equipment, over half of the jurisdictions in the U.S. would not be in the best position to make wise investment choices. While no federal agency has been assigned explicit statutory responsibility for developing voting equipment standards, the FEC assumed this role, developing voluntary standards in 1990 for computer-based systems, and Congress has supported this role with appropriations. These standards describe specific performance benchmarks and address many—but not all—types of systems requirements. However, these standards have not been maintained and are now out of date (the FEC initiated plans to issue revised standards in 2002). According to FEC officials, the Commission has not proactively maintained the standards because it has not been assigned explicit responsibility to do so. Without current, relevant, and complete voting equipment standards, states may choose not to follow them, resulting in the adoption of disparate standards that could drive up the cost of voting equipment and produce unevenness among states in the capabilities of their respective voting equipment. No federal agency has been assigned responsibility for or assumed the role of testing voting equipment against the federal standards. Instead, NASED, through its Voting Systems Committee, has assumed responsibility for implementing the federal voting equipment standards by accrediting independent test authorities, which, in turn, test voting equipment against the standards.
To this end, the committee has developed procedures to accredit the independent test authorities. When testing is successfully completed, the independent test authorities notify NASED that the voting equipment has satisfied testing requirements. As of July 3, 2001, the Association had qualified 21 voting equipment systems, representing 10 vendors. Because development, maintenance, and implementation of voting equipment standards are very important responsibilities, we are raising matters for congressional consideration regarding the explicit assignment of responsibility in these areas. Additionally, we are making recommendations to the FEC aimed at improving its efforts to update its 1990 voting equipment standards. Our nation’s election systems are complex and intricate. Successful election administration requires the appropriate integration of people, processes, and technology. In the challenges that we have categorized as related primarily to this part of the system, it is important to note that each of them is related in some way to another part of the system. Therefore, in considering election reform proposals, it is important to remember that people, processes, and technology issues should not be addressed in isolation. Any reform proposal that influences one part of the system, for example, a process, may have an unforeseen and perhaps undesirable effect on another part of the system, such as the people. Additionally, any problem attributed to people, processes, or technology might actually have its root cause in a different part of the system or be precipitated by a lack of integration among the components of the system. For example, many problems that surfaced in the November 2000 election were attributed to faulty technology, more specifically to punch card machines. While voting technology may need to be modernized, our work showed that any of the types of voting equipment, if used properly, can reliably record a voter’s selections.
In most cases, technology was not the dominant factor in voter error such as mismarked ballots or uncounted votes. Rather, problems more closely related to voter error included processes that did not allow poll workers and voters to recognize when errors occurred. In fact, one of the jurisdictions we visited that used punch card machines had a voter error rate of 1.2 percent, which election officials attributed in part to voter education efforts. Greater voter education on voting processes and equipment, rather than the purchase of new voting equipment, can be a more immediate way to resolve issues related to voter error. Our work and the work of others have disclosed a number of challenges to our national election system. As Congress considers if and how it may wish to address these challenges, it may turn to a number of reform proposals put forth by commissions or by proposed legislation. The proposals to date may be grouped into broad categories such as those listed below:
- providing federal funds for replacing voting equipment,
- providing federal funds for state or locally determined election improvements,
- creating special postal rates, or requiring no postage, for election mail,
- creating federal election administration standards, mandatory or voluntary,
- updating FEC voluntary voting equipment standards and developing operation standards for voting equipment,
- developing or improving electronic voter registration systems and statewide information-sharing capabilities,
- reforming absentee or early voting requirements,
- creating uniform statewide standards for what constitutes a vote and how votes are counted and recounted, and
- mandating the availability of provisional ballots for all jurisdictions.
Variation in specific proposals may occur because of many factors, including the source of the proposal. Proposals may be crafted with various goals in mind—among them enhancing the accessibility, integrity, fairness, consistency, affordability, and sustainability of election systems.
While all of these goals are consonant with our democratic traditions, some reform proposals may advance one goal at the expense of another. For example, some officials promote reforms such as early voting to enhance the accessibility of the electoral process to the general public, while others claim such a move could open the door to voter fraud and thus may come at the price of the integrity of the election system. When reform proposals forward competing goals, the debate over election reform becomes more complex and assessing different reform proposals more difficult. We do not presume to endorse any particular election reform proposal or package, because this is best left to the Congress and other elected officials. However, our review of state and local practices, as well as our analysis of input from state and local officials, suggests criteria that Congress could use as it weighs the merits of reform proposals.
Criterion I: The Appropriate Role of the Federal Government in Election Reform. Does the proposed change call for an appropriate federal role in effecting election reform, given the historic balance struck between Congress’ constitutional authority to legislate election administration and some states’ laws and traditions that grant autonomy to local jurisdictions as they administer elections?
Criterion II: The Balance Between Accessibility and Integrity. How are the goals of providing citizens broad access to the voting process balanced against the public’s interest in ensuring the integrity of our election systems?
Criterion III: Coordination and Integration of People, Processes, and Technology. How does the proposed change affect both the discrete problem it is intended to resolve and the election system as a whole?
Criterion IV: The Affordability and Sustainability of Proposed Election Reforms. Have the necessary resources been identified to institute the change and to continually monitor and re-evaluate it over time?
We believe using these criteria will help clarify the debate and provide a framework to evaluate the potential effects of various election reform proposals. The following sections further elaborate how Congress might use each of the four criteria.
The Appropriate Federal Role in Election Reform
A threshold consideration in assessing various proposals and legislation is the appropriate role of the federal government in effecting election reform. Pursuant to its constitutional authority, Congress has periodically enacted legislation that mandates elections be conducted in particular ways. For example, Congress has prohibited discrimination based on certain voter characteristics, such as race or age, in both state and federal elections. In addition, Congress has broad authority to establish requirements for congressional elections that are binding on the states. As a practical matter, such requirements may also affect state and local elections held in conjunction with elections to federal office. Congress has enacted legislation affecting the timing of federal elections, voter registration, absentee voting for military and overseas civilian citizens of the United States, and voting accessibility for the elderly and the disabled in federal elections. These statutes have basically focused on facilitating the opportunity for voters to participate in the voting process and ensuring fair and equitable treatment of voters. Aside from direct regulation of election administration, Congress may also, in exercising its spending power, encourage state action by attaching conditions to the receipt of federal funds. The scope of congressional authority in election administration is discussed in our March 2001 report. This constitutionally derived authority notwithstanding, election administration has principally been the responsibility of state and local jurisdictions that conduct elections for local, state, and federal offices.
As discussed elsewhere in this report, states and local jurisdictions have determined voter qualifications, types of voting equipment to be used, ballot design, selection of poll workers, and what constitutes a vote. This historical balance between Congress’ constitutional authority to prescribe change and the states’ and localities’ traditional roles in defining the terms of election administration raises fundamental policy issues that must be confronted in the debate about the federal government’s efforts to pursue election reform. Various reform proposals offered to date differ in the role envisioned for the federal government. These can be categorized into essentially four distinct options for federal action that fall along a continuum from low to high federal involvement. The first option, falling on the low end of the continuum, calls for the federal government to provide information, guidance, and encouragement to states and local jurisdictions to take action in specific areas. At the second point on the continuum, a reform proposal may envision a more involved federal role that calls for the federal government to provide funds to states and localities to improve election administration, allowing each jurisdiction to use the funds where it believes they are most needed. At the third point on the continuum, proposals may go a step further, suggesting that the federal government provide funds, contingent on states and local jurisdictions taking specific actions or achieving specific results. In this role, the federal government uses a “carrot” to encourage a desired behavior by states and local jurisdictions. However, the states and local jurisdictions still have the ability to opt for the status quo by refusing the federal funds.
At the fourth point on the continuum, where election reform would involve the greatest use of federal authority, the federal government would mandate that state and local jurisdictions take specific actions or achieve specific results, with or without accompanying funding. Historically, this option has been used when the federal government wishes to guarantee a voter right or protection. By way of illustration, consider how Congress might reach different conclusions on the issue of replacing existing voting equipment depending on how it uses its authority. Many recommendations from recently completed studies and congressional legislative proposals call for providing federal assistance to state and local election jurisdictions for replacing voting equipment. Under the first option, Congress could require the FEC to act as a clearinghouse to gather and disseminate information and to sponsor research on the various types of voting equipment. The federal government would provide information, assistance, and advice that would make it easier for state and local election jurisdictions to examine their choice of voting equipment in light of national data on specific practices or issues. This approach provides a minimal federal role in effecting change, leaving the greatest discretion and control to states and local election jurisdictions. This federal role would not entail disbursing federal funds to support purchase and installation of new technologies, leaving acquisition decisions up to local jurisdictions. Under the second option, the federal government could create a grant program that would make federal funding available to states to support purchase and installation of new voting equipment. Funds would be provided with no “strings” attached regarding which type of equipment the state could buy. Under the third option, the federal government could create a grant program similar to that in the second option, except that strings would be attached.
For example, funds would only be provided for states to buy equipment that meets federal standards, or only for certain types of equipment (e.g., precinct-based optical scanners). This further limits local jurisdictions’ discretion in choosing appropriate equipment, but still allows states to opt out of the program. Under the fourth option, the federal government could mandate that only certain types of voting equipment could be used in federal elections. Congress might or might not provide funding to enable states without this type of equipment to purchase and install it. Either way, jurisdictions would have no choice but to comply with the law and acquire that voting equipment, regardless of whether this choice best meets local needs. This option provides the least discretion. Because federal elections are usually conducted in conjunction with state and local elections, congressional mandates regarding the conduct of federal elections would likely involve changes in many, if not all, state and local elections.
The Balance Between Accessibility and Integrity
In addition to the appropriate federal role in election reform, our work suggests that the degree to which reform proposals may affect the accessibility and integrity of an election system is an important criterion for Congress to use in assessing the effect of reform proposals on our election systems. Some proposals may seek to increase the general public’s access to the election system. Accessibility describes the degree to which an election system promotes inclusiveness, thus making it as easy as possible for the general population to register to vote and to cast their votes. For example, reform proposals that attempt to (1) make voter registration less cumbersome, (2) give voters more opportunity to cast absentee or early ballots, or (3) provide voting equipment that all voters can use with ease can be considered as affecting an election system’s accessibility.
Other proposals may attempt to increase the system’s integrity, that is, the degree to which the system is impervious to voter fraud. For instance, proposals that implement controls to ensure that (1) voters present identification or proof of eligibility at the polls on election day or (2) all eligible votes are counted can be said to affect an election system’s integrity. The goal of making the election system more accessible to voters can run at cross purposes with the goal of ensuring the election system’s integrity. This tension suggests that decision makers should ask how reform proposals balance the goals of providing all citizens broad access to the voting process against the public interest in ensuring the integrity of the vote. Most election reform proposals address one or both of these concepts to some degree, with some placing more emphasis on one or the other. The weight individual policymakers may place on different concerns could vary, depending on how they value different attributes. For example, increasing the opportunity to use absentee ballots may improve access to the vote, but it also might negate the possibility of using some of the controls that may be used at a polling place to assure voter identification and eligibility. If increasing access to the vote is deemed more important than ensuring the presence of rigorous controls, then reform proposals emphasizing accessibility considerations might be preferred. In the past when Congress has taken action to change the election system, it has considered both accessibility and integrity issues. Constitutional amendments and federal legislation affecting the election process opened access to those whose access was either denied or circumscribed—e.g., African-Americans, women, language minorities, people with disabilities. All of these reforms assumed the existence of controls to ensure that only those who were otherwise eligible among these groups would be able to register to vote and cast their ballots.
The most recent federal statute affecting the election process—NVRA, or the motor voter act—specifically recognized the dual goals of access and integrity. The act established registration procedures designed to “increase the number of eligible citizens who register to vote in elections for Federal office,” without compromising “the integrity of the electoral process” or the maintenance of “accurate and current voter registration rolls.” Creating a proper balance between accessibility and integrity is sometimes difficult, as seen in the following examples, which illustrate the inherent tension between these competing goals:
- Requiring that citizens who register to vote present a form of picture identification with their residence’s address provides some proof of identity and some assurance that the person resides in the voting jurisdiction and is therefore eligible to vote in the jurisdiction in which he or she is registering. However, this procedure makes it more difficult for persons to register to vote if they meet registration qualifications but do not have a driver’s license or other picture identification indicating their place of residence.
- Providing a provisional ballot to every person who wishes to vote but is not listed in the poll books at the polling place maximizes the opportunity of every person to cast a ballot. Checking whether these persons are in fact registered to vote before counting their ballots assures that the vote count will include only ballots cast by eligible voters.
- Election officials may go to nursing homes to review absentee ballot requests and accept ballots cast by nursing home residents while the officials are present to supervise the voting process. In effect, the nursing home becomes an unofficial polling place. Residents are provided a greater opportunity to vote, but with better controls in place to address the potential that the absentee ballot might be voted by someone other than the registered voter.
- Allowing remote voting via the Internet may improve some voters’ opportunities to cast a ballot. Although this method is in an experimental stage, unresolved questions about its impact on the integrity of election systems remain. As with other forms of remote voting, such as absentee voting by mail, there is a need to ensure that only eligible voters cast ballots, voter privacy is protected, and voters are not subject to coercion.
Integration of People, Processes, and Technology
As Congress assesses various reform proposals, it may consider both reforms that address a discrete problem and reforms that address the election system more broadly. Congress may also be asked to choose among proposals that address specific parts of a perceived problem or address perceived problems in a more systematic fashion. Effective election administration requires the appropriate coordination and integration of people, processes, and technology. For example, successfully registering a new voter, whether the person registers by mail, at the Department of Motor Vehicles, or at the registrar’s office, involves the coordination and integration of (1) voters and registration workers who know and follow the registration process, including obtaining the information required to register successfully; (2) the process for registering new voters that guides election workers as they supply the correct forms to voters, compile and update voter information, and notify voters of their registration status; and (3) a computer system or other means of creating and updating a voter registration list to assure an accurate, current list of registered voters. Shortcomings in any of these three areas can affect the ability of persons to register successfully and the accuracy of the registration rolls.
To illustrate the difference between approaching reforms from a discrete versus a system-wide perspective, consider election reform proposals that recommend Congress provide assistance to states and localities to purchase new voting equipment in order to reduce voter error. Some of these proposals approach voter error as if it were precipitated by a single cause, such as the type of voting equipment used. However, introducing new technology alone may not necessarily reduce voter error. In fact, switching equipment may actually introduce new opportunities for voter error unless the jurisdiction deals with the people aspects of successfully fielding new voting technology and offers voter education on how to use the new equipment effectively. Moreover, successful implementation must include processes for dealing with machine failure, ensuring that the equipment is programmed properly to accurately count the votes for each office on the ballot, and ensuring that the ballots and machines are secure. Failure to consider the interaction of people, processes, and technology in fielding new voting equipment may result in an increase in voter or counting error, rather than a decrease. As a second example, some proposals suggest a change in process that establishes standard voting hours so that all polling places across the nation open and close simultaneously, regardless of time zone. With this proposal, voters in every part of the country would cast their votes while the outcome of the election is still unknown, thus negating any influence that media reports of election results may have. Although voter participation might increase as a result, this proposal might also have unintended consequences for other parts of the election system. For example, keeping polling places open at earlier or later hours may increase the burden placed on poll workers.
Some election officials currently report difficulties in securing poll workers who are willing to work 15- to 18-hour days, a situation that might be exacerbated if poll workers were asked to work even earlier or later.
Affordability and Sustainability of Proposed Election Reforms
The implementation of election reforms will likely increase the overall cost of our nation’s election systems. Choosing among election reform proposals, therefore, should include a careful assessment of the affordability and sustainability of the reform, as well as who is expected to shoulder the costs. Simply making funding available to state and local governments to implement a reform, without considering all associated life-cycle costs or how the reform is to be sustained, could result in having to revisit reform issues. Historically, the costs of election administration and equipment have largely fallen to local jurisdictions, with some support provided by the state governments. Elections have been conducted over the years with relatively small budgets, and election officials consistently find themselves competing for funds with other local government priorities. Our work confirmed that election administration is not usually at the top of state and local funding priorities. As an official in a large election jurisdiction told us, election administration is often number 11 in the top 10 priorities of local government budgets. As a result, realistic reform proposals are those that not only identify solutions to the issues at hand, but are also affordable and sustainable, with achievable financial commitments for the federal, state, and local government stakeholders. Along these lines, as Congress assesses reform proposals, it should consider three factors related to affordability and sustainability. The first factor is the initial outlay required to fully implement the proposal.
In this regard, the assessment should consider whether the initial outlay for the proposed reform would be affordable to the states and localities, including all associated transition costs (e.g., training workers and voters to use new equipment and any changes to voting processes necessitated by new equipment). For example, were Congress to implement a proposal that requires all states to develop a statewide voter registration system, some states might find themselves unable to comply unless federal funding sources were forthcoming. States under budgetary pressure might be slow to implement the federal requirement and even less able to supply the funding needed to comply on their own. In addition, the costs to the other components of election administration (e.g., the cost of training election workers to use the new voting technology) should also be considered. The second factor is whether the federal government and/or state and local jurisdictions could afford the long-term costs of sustaining the proposed reform over time. Reform proposals that provide funding for purchasing new technology could enable some jurisdictions to upgrade their voting equipment, but the improved performance would last only as long as the jurisdiction could sustain the funding needed to continue the equipment’s use. For example, if the federal government were to make funds available to purchase different voting technology (e.g., replacing punch cards with electronic equipment), it should have some assurance that the additional resources necessary to sustain the reform (e.g., software, programming capability, vendor support, and updates) would be available. However, not all jurisdictions are in a position to make that commitment. The third factor is assigning responsibility for costs and determining whether all levels of government could commit to implement and sustain the reform.
As mentioned above, it is doubtful that every local jurisdiction could alone commit the resources necessary to fund many of the reforms envisioned in several proposals. However, they might be in a position to fund some of them, thereby making a commitment to the success of the reform. On the other hand, the question arises as to how much of a federal or state presence in local election administration is perceived as desirable or financially possible. Events surrounding the November 2000 election brought into question the integrity of our nation’s election systems. Although not all states and jurisdictions reported experiencing major problems during the November 2000 election, important concerns were raised in most jurisdictions related to each stage of the election process—registration, absentee and early voting, preparing for and conducting election day activities, and vote tabulation. Congress has the opportunity to address these challenges now, to avoid similar problems in the future. However, addressing these challenges involves complex considerations and even more difficult choices when considering the range of proposals for election reform. Accordingly, we have offered four criteria against which any election reform proposal may be measured. These may not be the criteria that every analyst would suggest, nor would every policymaker give the same weight to each criterion. However, if election system reform proposals were to be evaluated as to the (1) appropriate federal role in election reform; (2) balance between accessibility and integrity; (3) integration of people, processes, and technology; and (4) affordability and sustainability of election reforms, Congress would have a good foundation for devising sustainable solutions that will meet the needs of future generations of U.S. citizens. This section summarizes the major issues contained in the other six reports that we prepared on our nation’s election systems.
Table 2 lists the issues that we addressed in our elections work and the reports that discuss them in further detail. Collectively, our extensive research shows that election systems vary widely across states and jurisdictions. It also shows that federal, state, and local governments face daunting, often long-standing challenges. In the following sections we summarize our findings and insights from each of our reports. This report describes Congress’ constitutional authority to regulate congressional, presidential, and state and local elections and identifies major federal statutes enacted in the area of election administration. Under the Constitution, states are responsible for the administration of both their own and federal elections. Accordingly, states and localities incur the costs associated with these activities. Notwithstanding the state role in administering elections, Congress has authority to affect the administration of elections in certain ways. Congressional authority to legislate in this area derives from various constitutional sources, depending upon the type of election. With regard to the administration of federal elections, Congress has constitutional authority over both congressional and presidential elections. Congress’ authority to regulate congressional elections derives primarily from Article I, Section 4, Clause 1 of the Constitution (known as the Elections Clause). The Elections Clause provides that the states will prescribe the “Times, Places and Manner” of congressional elections, and that Congress may “make or alter” the states’ regulations at any time, except as to the places of choosing Senators. The courts have held that the Elections Clause grants Congress broad authority to override state regulations in this area. Therefore, while the Elections Clause contemplates both state and federal authority to regulate congressional elections, Congress’ authority is paramount to that of the states.
With respect to presidential elections, the text of the Constitution is more limited. Specifically, Article II, Section 1, Clause 4, provides that “Congress may determine the Time of chusing the Electors, and the Day on which they shall give their Votes; which Day shall be the same throughout the United States.” Despite this limited language, the Supreme Court and federal appellate courts have upheld certain federal statutory provisions regulating presidential elections that go beyond regulating the “time” of choosing the electors. However, because federal legislation that relates solely to the administration of presidential elections has been fairly limited, case law on this subject has been sparse. Consequently, the precise parameters of Congress’ authority to pass legislation relating to presidential elections have not been clearly established. With regard to state and local elections, although Congress does not have general constitutional authority to legislate regarding these elections, a number of constitutional amendments authorize Congress to enforce prohibitions against specific discriminatory practices, such as discrimination on the basis of race or color, in all elections—federal, state, and local. Historically, Congress has passed legislation related to the administration of both federal and state elections in several major functional areas of the voting process, including: (1) timing of federal elections; (2) voter registration; (3) absentee voting requirements; (4) accessibility provisions for elderly and disabled voters; and (5) prohibitions against discriminatory voting practices. In general, the purpose of these federal statutes has been either to prohibit discrimination on the basis of specific voter characteristics or to make it easier for citizens to register to vote. Our report on the administration of the 2000 elections presents the results of our review of aspects of elections in the United States.
Specifically, we (1) describe elections in the United States and the activities and challenges associated with each of the four major stages of election administration— voter registration, absentee and early voting, preparing for and conducting election day activities, and vote tabulation; (2) identify the types of voting methods used, their distribution in the United States, and any associated challenges; assess such characteristics of voting equipment as accuracy, ease of use, efficiency, security, and cost; and estimate the cost of replacing existing voting equipment in the United States with either optical scan or electronic voting equipment; and (3) identify issues and challenges associated with the use of the Internet for voting. Although registration is a prerequisite to voting in nearly all states, we found that different citizens with the same qualifications would be eligible to vote in some states but not in others because of variations in voter eligibility requirements. A citizen’s access to voting is primarily based on the appearance of his or her name on a voter registration list, which is developed from registration applications and compiled and maintained by election officials using various technologies and information sources. Election officials nationwide expressed varying degrees of confidence in the accuracy of their voter registration lists; however, information about list accuracy and currency, as well as the extent of error, was difficult to obtain. Among the challenges identified were processing applications submitted through sources other than elections offices, such as state motor vehicle authorities; obtaining accurate and timely information from numerous sources to update voter registration lists; and using technology to help process applications and compile registration lists. All states allowed some form of absentee or early voting in the November 2000 election; using U.S. 
Census data, we estimate that for the November 2000 election, about 14 percent of voters nationwide cast their ballots before election day, three-fourths of them using mail-in ballots and one-fourth voting in person. However, we found that no national data are currently maintained on the number of mail-in absentee ballots disqualified. Differences in requirements, administration, and procedures resulted in citizens having different opportunities for obtaining and successfully casting absentee ballots. For example, the likelihood of a ballot being disqualified due to voters’ errors in completing and returning mail-in absentee ballots varied, in some instances, even among jurisdictions in the same state. Among the challenges local election officials face are deciding whether and how to process incomplete and late mail-in absentee applications and ballots; processing large numbers of mail-in absentee applications and ballots in a timely manner; and obtaining adequate staffing, ballots, and locations for conducting early voting. Election officials across the country, with some variation, performed similar duties to prepare for and conduct the November 2000 election. Our survey indicated that 57 percent of voting jurisdictions nationwide encountered major problems in conducting the 2000 election. While jurisdictions did not experience the same problems, more than half cited problems with recruiting enough qualified poll workers. However, because few jurisdictions systematically collected information on how they administer elections, what they consider to be major problems may be based on anecdotal information and limited analysis. From the perspective of election officials, a major election day challenge is resolving questions about voter eligibility. Large numbers of ineligible voters can create long lines, voter frustration, and problems for poll workers. Many eligibility issues stem from the reliability of voter registration lists.
Counting votes is not a simple task. It involves counting votes cast before and on election day and may be carried out at the precinct, at a central location, by hand, or by some type of counting equipment. Vote counting problems are highlighted when the election results are close. A ballot may not be counted when a voter overvotes—marks for two candidates—or when the ballot cannot be read by the counting equipment. What constitutes a proper mark on a ballot differs based on the type of voting method used. According to our survey of state election directors, 31 states and the District of Columbia have a state law or other provision specifying what is a proper ballot marking for each voting method, but state guidance also varied from general to specific. Forty-seven states and the District of Columbia have laws with provisions for recount, but they vary. For example, 17 states provide for a mandatory recount, but two of these require a tie vote and another requires a 1 percent or 200-vote difference. According to officials in 42 of the jurisdictions that had recounts in the 2000 election, none of the recounts altered the original election outcome. Counting ballots posed several challenges for election officials, including counting only votes cast by eligible voters; interpreting variations when ballots are not properly marked; and completing the results of a recount in a close or contested election in a fair, accurate, and timely manner. Four types of voting equipment—punch card, optical scan, lever, and DRE—were used in 98 percent of all election jurisdictions in 2000. While a survey of vendors showed little performance difference among DRE, optical scan, and punch card equipment, local election officials we contacted rated DRE as easier to use than other voting methods.
Only about 50 percent of jurisdictions collected data on accuracy, and few of the jurisdictions we visited had collected actual performance data on the voting equipment used in the 2000 election. Nevertheless, the vast majority of jurisdictions across the nation were satisfied with their respective voting equipment, based largely on officials’ perceptions of how their equipment performed. The cost to replace existing voting equipment depends on the type of equipment purchased and the number of jurisdictions for which it is purchased. We estimated the cost of purchasing optical scan units, not including certain software costs, could range from $191 million for optical scan machines that use a central-count unit to about $1.3 billion for optical scan equipment that counts ballots at the precinct and thus allows for voter error correction. We estimated the cost of purchasing touchscreen DRE units to be about $3 billion (including one DRE touchscreen unit per precinct equipped for voters with disabilities and one central-count optical scan unit per county for absentee ballots). Among the voting equipment challenges identified by election officials were having reliable measures and objective data to know whether the technology used is meeting the jurisdictions’ needs; ensuring that necessary security, testing, and maintenance activities are performed; and ensuring that the technology will provide benefits over its useful life commensurate with life cycle costs and that these collective costs are affordable and sustainable. Our review identified three kinds of Internet voting: at a polling place; in a voting “kiosk” at public places, such as malls or libraries; or at any location, including the voter’s workplace or home through a personal computer. Although opinion is not unanimous, security is seen as the primary challenge for Internet voting. The cost-effectiveness of Internet voting remains unclear because reliable cost data are not available.
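The equipment-replacement estimates quoted above (from $191 million for central-count optical scan to about $3 billion for precinct DRE) are essentially a roll-up of unit counts times unit prices plus per-jurisdiction costs. The sketch below illustrates that arithmetic only; every unit price and count in it is a hypothetical placeholder, not a figure taken from the report.

```python
# Illustrative roll-up of a nationwide voting-equipment replacement cost.
# All unit prices and counts below are assumed for illustration; only the
# structure of the calculation (hardware + per-jurisdiction costs) reflects
# the kind of estimate described in the report.

def replacement_cost(units_needed: int, unit_price: float,
                     fixed_per_jurisdiction: float, jurisdictions: int) -> float:
    """Total cost = hardware purchases + one-time per-jurisdiction costs."""
    return units_needed * unit_price + jurisdictions * fixed_per_jurisdiction

precincts = 190_000        # assumed order of magnitude for U.S. precincts
scanner_price = 6_000      # assumed per-unit price, precinct-count scanner
per_county_setup = 25_000  # assumed training/software cost per county
counties = 3_100           # approximate number of U.S. counties

total = replacement_cost(precincts, scanner_price, per_county_setup, counties)
print(f"${total / 1e9:.2f} billion")
```

Varying the assumed unit price or substituting one unit per county (central count) instead of one per precinct reproduces the wide spread among the report's scenarios.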
The broad application of Internet voting presents several social and technological challenges, including providing adequate voter privacy safeguards, ensuring that voting equipment is adequately secured, and providing equal access for all voters. This report examines state and local provisions and practices for ensuring voting accessibility, both at polling places and with respect to alternative voting methods and accommodations; estimates the proportion of polling places with features that might facilitate or impede access, including features of polling booths and voting accommodations; and identifies efforts and challenges to improving accessibility. All states have provisions (in the form of statutes, regulations, or policies) that specifically address voting by people with disabilities. However, consistent with the broad discretion afforded states, these provisions vary greatly. State laws and policies also vary on how counties are to assure accessibility of polling places. Our survey of counties confirms that most counties inspect all polling places for accessibility, although county practices for ensuring accessibility vary. All states provide for one or more alternative voting methods or accommodations that may facilitate voting by people with disabilities whose assigned polling places are inaccessible. For example, all states have provisions allowing voters with disabilities to vote absentee without notary or medical certification requirements, although the deadlines and methods (for example, by mail or in person) for absentee voting vary among states. In addition, many states, but not all, have laws or policies that provide for other accommodations and alternatives for voting on or before election day—such as reassignment to a polling place that is accessible, curbside voting, or early voting.
Our onsite work on election day 2000 found that polling places are generally located in schools, libraries, churches, and town halls, as well as other facilities. Although the extent to which any given feature may prevent or facilitate access is unknown, we estimate that, from the parking area to the voting room, 16 percent of all polling places in the contiguous United States have no potential impediments, 56 percent have one or more potential impediments but offer curbside voting, and 28 percent have one or more potential impediments and do not offer curbside voting. These potential impediments would primarily affect individuals with mobility impairments and occur most often on the route from the parking area to the building or at the entrance to the polling place. Inside the voting room, the types and arrangement of voting equipment used may also pose challenges for people with mobility, vision, or dexterity impairments. To facilitate voting inside the voting room, polling places generally provide accommodations, such as voter assistance, magnifying devices, and voting instructions or sample ballots in large print. However, none of the polling places that we visited had special ballots or voting equipment adapted for blind voters. A number of efforts have been made by states and localities to improve voting accessibility for people with disabilities, such as modifying polling places, acquiring new voting equipment, and expanding voting options. Nevertheless, state and county election officials we surveyed cited a variety of challenges to improving access, including limited availability of accessible facilities and funding constraints at the local level. 
Some disability advocates believe that although alternative voting methods and accommodations, such as curbside voting, expand options for voters with disabilities, they do not provide the same voting opportunities afforded the general public (that is, the opportunity to vote independently and privately at a polling place) and should not be viewed as permanent solutions for inaccessible polling places. This report describes the extent and quality of voter assistance provided for uniformed and overseas citizens; the challenges that state and local requirements may pose to these voters; and the extent of and reasons for disqualification of ballots cast by these voters. The Uniformed and Overseas Citizens Absentee Voting Act of 1986 protects the right to vote by absentee ballot in federal elections for more than 6 million military and overseas citizens. The act also recommends that states adopt a number of provisions that facilitate absentee voting by these populations. The Federal Voting Assistance Program, established within the Department of Defense (DOD), is responsible for implementing the act by informing U.S. citizens worldwide about their right to vote, fostering voting participation, and working with states to simplify the registration and absentee voting process. Voter education and assistance efforts for military personnel are largely implemented by the military services through Voting Assistance Officers. Also, the State Department works with DOD to provide voter assistance to overseas citizens. The extent and quality of federal voter assistance for military personnel and overseas citizens varied considerably for the 2000 general election. The Federal Voting Assistance Program developed a number of useful tools for voters and Voting Assistance Officers, but many potential voters we spoke to were unaware of them. 
While some installations we visited had well-run programs providing assistance and information to potential voters, other installations did not meet DOD and service requirements because they did not provide sufficient numbers of trained Voting Assistance Officers, voter training, and voting materials. The variability in executing the program is due to incomplete service-level guidance that does not reflect DOD’s directive, a lack of command support at some installations, and a lack of program oversight by some DOD components. Finally, the State Department provided citizens abroad with a variety of useful assistance, according to overseas citizens and federal employees we spoke to, although both groups believed more outreach could be beneficial. Also, State Department Headquarters has not played an active role in sharing best practices and lessons learned or in overseeing the program. Despite progress made by states to facilitate absentee voting, many military and overseas voters we spoke to believe that challenges remain, including helping voters understand and comply with state requirements and local procedures for absentee voting, such as deadlines for registering and returning ballots. Continued efforts by DOD officials to work with the states to simplify procedures, modify election schedules, or allow more use of technology, such as faxing and the Internet, to speed some portions of the voting process may help alleviate the challenges, but state legislative actions may be required. Although precise numbers are not available, we estimate that small counties (having a voting-age population of less than 60,000) nationwide disqualified 8.1 percent (plus or minus 3.2 percent) of ballots cast by military and overseas voters. In contrast, the ballot disqualification rate for civilian voters not living overseas was 1.8 percent (plus or minus 0.6 percent).
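The two disqualification-rate estimates just cited each carry a sampling margin, and a quick way to see that they point to genuinely different rates is to check whether the implied intervals overlap. The sketch below does that; note the report does not state a confidence level for the margins, so treating them as interval half-widths is an assumption.

```python
# Check whether the two estimated disqualification-rate intervals overlap:
# 8.1% +/- 3.2% (military/overseas ballots, small counties) versus
# 1.8% +/- 0.6% (other civilian voters). The confidence level behind the
# "plus or minus" margins is not stated in the report and is assumed here
# to define simple interval bounds.

def interval(point: float, margin: float) -> tuple[float, float]:
    return point - margin, point + margin

mil_lo, mil_hi = interval(8.1, 3.2)  # roughly (4.9, 11.3)
civ_lo, civ_hi = interval(1.8, 0.6)  # roughly (1.2, 2.4)
print("intervals overlap:", mil_lo <= civ_hi)
```

Because the lower bound of the military/overseas interval sits well above the upper bound of the civilian interval, the gap between the two rates is larger than the stated sampling error.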
While larger counties (having a voting-age population of more than 60,000) that responded to GAO's survey showed a similar pattern, the data were insufficient to make a national estimate. The survey showed that for all absentee ballots cast, almost two-thirds of the disqualified absentee ballots were rejected because the ballots arrived too late to be counted or the envelopes or forms accompanying the ballots were not completed properly. This report includes recommendations to the Secretaries of Defense and State to improve (1) the clarity and completeness of service guidance, (2) voter education and outreach programs, (3) oversight and evaluation of voting assistance efforts, and (4) sharing of best practices. This report identifies the Federal Election Commission’s (FEC) role regarding various types of voting equipment and assesses how well the FEC is fulfilling its role. Our work also identifies the National Association of State Election Directors’ (NASED) process for testing and qualifying voting equipment. No federal agency has been assigned explicit statutory responsibility for developing voting equipment standards; however, the FEC assumed this role by developing voluntary standards in 1990 for computer-based equipment, and Congress has supported this role with appropriations. These standards describe specific performance benchmarks and address many—but not all—types of systems requirements. In 1997, the FEC initiated efforts to evaluate the 1990 standards to identify areas to be updated, and in 1999, initiated efforts to update the standards. The FEC plans to issue revised standards in 2002. This update is necessary because the FEC has not proactively maintained the standards, thus allowing them to become out of date. According to FEC officials, the FEC has not proactively maintained the standards because it has not been assigned explicit responsibility to do so.
Unless voting equipment standards are current, relevant, and complete, states may choose not to follow them, resulting in the adoption of disparate standards that could drive up the cost of voting equipment and produce unevenness among states in the capabilities of their respective voting equipment. No federal agency has been assigned responsibility for or assumed the role of testing voting equipment against the federal standards. Instead, NASED, through its Voting Systems Committee, has assumed responsibility for implementing the federal voting equipment standards by accrediting independent test authorities, which, in turn, test voting equipment against the standards. To this end, the committee has developed procedures to accredit the independent test authorities. According to the test authorities, testing is generally iterative, in which the voting equipment vendors are provided an opportunity to correct deficiencies identified during testing and resubmit the modified voting equipment for retesting. When testing is successfully completed, the independent test authorities notify NASED that the voting equipment has satisfied testing requirements. As of July 3, 2001, NASED had qualified 21 different types of voting equipment, representing 10 vendors. Because development, maintenance, and implementation of voting equipment standards are very important responsibilities, we are raising matters for congressional consideration regarding the explicit assignment of responsibility in these areas. Additionally, we are making recommendations to the FEC's Commissioners aimed at improving its efforts to update its 1990 voting equipment standards. Our analysis of data from 2,455 counties shows that the type of voting equipment that counties used in the 2000 general election had an effect on uncounted presidential votes.
Specifically, counties that used punch card equipment had roughly 0.6 percentage points higher percentages of uncounted presidential votes than counties using electronic, paper, or optical scan voting equipment. Counties using lever equipment had 0.7 percentage points lower percentages of uncounted presidential votes than counties using electronic, paper, or optical scan voting equipment. When we supplemented this analysis with information about the performance of optical scan equipment with error correction from our sample of 404 counties, we found that counties using punch card equipment had significantly higher percentages of uncounted presidential votes than counties using error-corrected optical scan equipment. If we apply the relationship we found in these 404 counties to the larger set of 2,455 counties, an estimated 300,000 additional votes may have been counted if counties that used punch card equipment had, instead, used optical scan equipment with error correction. Overall, county voting equipment accounted for 2 percent of the variation in uncounted presidential votes across counties. Additionally, the analysis of the subset of 404 counties showed that the use of error correction accounts for another 4 percent of variation in uncounted presidential votes across counties. We found that counties’ demographic characteristics accounted for about 16 percent of the total variation in uncounted presidential votes. Counties with higher percentages of minority residents were more likely to have higher percentages of uncounted presidential votes. Counties with higher percentages of 18- to 24-year-olds and higher education levels were more likely to have lower percentages of uncounted presidential votes. The state in which counties are located accounted for about 26 percent of the total variation in uncounted presidential votes.
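The variance-decomposition figures above (equipment explaining a few percent of variation, demographics about 16 percent) come from regression analysis. The sketch below illustrates the general approach on simulated data: regress the uncounted-vote percentage on blocks of county characteristics and compare the R-squared as blocks are added. None of the data here come from the report's county files; the simulated effect sizes merely echo its direction of findings.

```python
# A minimal sketch, on synthetic data, of the variance-decomposition
# approach the report describes: fit ordinary least squares with county
# characteristics and compare R-squared as variable blocks are added.
import numpy as np

rng = np.random.default_rng(0)
n = 2455
equipment = rng.integers(0, 4, n)       # 0=punch card, 1=lever, 2=optical, 3=DRE
minority_pct = rng.uniform(0, 60, n)    # simulated demographic variable
# Simulated outcome: punch card counties run higher, lever counties lower,
# mirroring the direction (not the magnitude) of the report's findings.
uncounted = (2.0 + 0.6 * (equipment == 0) - 0.7 * (equipment == 1)
             + 0.02 * minority_pct + rng.normal(0, 1.5, n))

def r_squared(X, y):
    """R-squared from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

dummies = np.column_stack([equipment == k for k in (0, 1, 2)]).astype(float)
r2_equip = r_squared(dummies, uncounted)
r2_full = r_squared(np.column_stack([dummies, minority_pct]), uncounted)
print(f"equipment alone:   R^2 = {r2_equip:.3f}")
print(f"plus demographics: R^2 = {r2_full:.3f}")
```

The increment from the first R-squared to the second is the share of variation attributed to the added block, which is how figures like "another 4 percent" arise.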
Data were not available to examine the extent to which specific factors that were common to counties within a state but varied across states affected uncounted presidential votes. However, such factors may include statewide voter education efforts, the number of candidates on the ballot, the extent to which absentee or early voting occurred, and the state’s standards for determining what is a valid vote. Non-election-specific factors, such as the percentage of the state’s population for which English is a second language, may have also contributed to the variability in uncounted presidential votes. Our statistical models left about half of the variation in uncounted presidential votes unexplained. Several factors may have contributed to this remaining variability, including differences among counties, precincts, and people. An example of this type of difference is whether a county had switched to a new type of voting equipment that voters found difficult to operate. Our findings, which are based on aggregate statistics and only those data that were available for our sample of 2,455 counties and the subset of 404 counties, have methodological limitations that are inherent to statistical studies of this type. Elections: Perspectives on Activities and Challenges Across the Nation (GAO-02-03, Oct. 2001). Elections: Status and Use of Federal Voting Equipment Standards (GAO-02-52, Oct. 2001). Voters with Disabilities: Access to Polling Places and Alternative Voting Methods (forthcoming). Elections: Voting Assistance to Military and Overseas Citizens Should be Improved (GAO-01-1026, Sept. 2001). Elections: Statistical Analysis of Factors that Affected Uncounted Votes in the 2000 Presidential Election (GAO-02-122, Oct. 2001). Elections: The Scope of Congressional Authority in Election Administration (GAO-01-470, Mar. 2001). Bilingual Voting Assistance: Assistance Provided and Costs (GAO/GGD-97-81, May 1997).
The Constitution Project, Building Consensus for Election Reform (Aug. 2001). National Association of Secretaries of State, State-by-State Election Reform Best Practices Report (Aug. 2001). The National Commission on Federal Election Reform, To Assure Pride and Confidence in the Electoral Process (July 2001). National Conference of State Legislatures, Voting in America: Final Report of the NCSL Elections Reform Task Force (Aug. 2001). National Task Force on Election Reform, Election 2000: Review and Recommendations by The Nation's Elections Administrators (Aug. 2001). Caltech-MIT Voting Technology Project, Voting: What Is, What Could Be (July 2001). (alphabetical order by state) The Governor’s Select Task Force on Election Procedures, Standards, and Technology, Revitalizing Democracy in Florida (Mar. 2001). Hon. Cathy Cox, Georgia Secretary of State, The 2000 Election: A Wake-Up Call For Reform and Change - Report to the Governor and Members of the General Assembly (Jan. 2001). Hon. Chet Culver, Iowa Secretary of State, Commissioner of Elections, and Registrar of Voters, Iowa’s Election 2000: Facts, Findings, and Our Future (Mar. 2001). Hon. Ron Thornburgh, Kansas Secretary of State, Kansas Secretary of State’s Six-Point Election Improvement Plan (Jan. 2001). Special Committee on Voting Systems and Elections Procedures in Maryland, Report and Recommendations (Feb. 2001). Hon. Matt Blunt, Missouri Secretary of State, Making Every Vote Count: Report of Secretary of State Matt Blunt to the People of Missouri (Jan. 2001). Hon. Bob Brown, Montana Secretary of State, 2001 Election Reform Plan (2001). Office of New York Attorney General, Eliot Spitzer, Voting Matters in New York: Participation, Choice, Action, Integrity (Feb. 2001). Hon. Henry Cuellar, Texas Secretary of State, Texas Overvote/Undervote Study (Jan. 2001). Secretary of State Deborah L. Markowitz, Review of Vermont's Election Administration and Proposals for Improvement (Jan. 2001).
As a result of events surrounding the November 2000 presidential election, public officials and various interest groups have proposed reforms to address perceived shortcomings of various election systems. The complexity and intricacy of the American electoral system suggest that the success of an election system depends on the appropriate integration of people, processes, and technology. This report presents an analytical framework that Congress could use as it weighs the merits of various reform proposals.
Backup aircraft account for about 35 percent of the Air Force’s and Navy/Marine Corps’ fighter/attack aircraft inventory. Operations and maintenance funds appropriated to support these aircraft are allocated based on the number of combat-designated aircraft and the number of test and evaluation and training aircraft in the backup force. There is no additional allocation for maintenance and attrition aircraft in the backup force. Those backup aircraft are operated and maintained with the same funds. This affects the budget because maintenance and attrition backup forces siphon off funds from the combat-designated force. DOD’s October 1993 Bottom-Up Review: Forces for a New Era required the services to reduce and reshape their forces. The Bottom-Up Review specified 20 Air Force wings, 11 Navy air wings, and 4 Marine Corps air wings. DOD’s goals for the services include reducing combat-designated fighter/attack aircraft forces to 2,230 aircraft by 1999, a reduction of 25 percent from 1993 levels. Since 1977, audits by us and DOD have recommended that DOD (1) develop supportable criteria to justify backup aircraft inventories and procurement, (2) reduce the number of these assets, and (3) improve the management and oversight of these aircraft. In 1993, the Chairman, Joint Chiefs of Staff, reported that each service continues to use its own methodology, terminology, and philosophy to determine backup fighter/attack aircraft requirements. The report recommended the services use standard terminology and inventory definitions and thereby help ensure that procurement and maintenance funds be spent only on necessary aircraft. The Federal Managers’ Financial Integrity Act (FMFIA) is a mechanism for reporting material management weaknesses, such as unsupported inventory criteria, to agency heads, Congress, and the President. FMFIA also requires that a corrective action plan be devised and milestones established to correct identified problems.
By fiscal year 1996, the services’ force structure plans show significant reductions in combat-designated fighter/attack aircraft. These reductions are summarized in table 1 and appendix I. If these reductions are achieved, the ratio of combat-designated aircraft to backup aircraft will not significantly change. The relative number of combat-designated aircraft will increase slightly compared with backup aircraft, from 64.5 percent of the total active force in fiscal year 1993 to 66.5 percent of the total force in fiscal year 1996. Appendix II shows reductions by type of aircraft. Over many years, there has been concern that the services’ criteria for backup fighter/attack aircraft overstate requirements and need to be validated. In most cases, DOD responded that the existing criteria were relevant or that DOD would study the matter. Subsequent studies by others and us have repeatedly found that little has been done to validate the criteria. In 1977, we examined inventories of F-15s and F-14s and found that backup requirements for training, maintenance, and attrition aircraft were overstated. We recommended that Congress require DOD to base its justification for backup aircraft on realistic and supportable data. DOD agreed and responded that a review was underway to validate the requirements. In 1983, we again questioned criteria used by the services to justify backup F-14, F-15, F-16, and F/A-18 training, maintenance, and attrition aircraft. Further, we reported DOD had not initiated a review to validate the criteria. In 1992, the Naval Audit Service reported that the Navy had overstated the need for F-14 training aircraft. In 1993, the Chairman, Joint Chiefs of Staff, reported that the services’ requirements for combat-designated and backup aircraft were inconsistent, outdated, and in need of revision. See appendix III for a list and discussion of our previous audits and DOD audits of backup aircraft inventories and criteria.
Despite recommendations to validate backup aircraft criteria, the Air Force continues to use unvalidated criteria. The Navy/Marine Corps has made progress toward justifying the number of aircraft needed to support the combat-designated force. The Air Force and the Navy/Marine Corps used standard planning factors or percentages to determine the number of backup aircraft required to support the combat force. More recently, the Navy/Marine Corps has used student volume, flying hour requirements, and aircraft utilization rates to determine the need for training backup aircraft, and a Test and Evaluation Master Plan to determine the need for test and evaluation backup aircraft. Table 2 summarizes the Air Force’s and the Navy/Marine Corps’ planning factors used to determine the need for backup aircraft. The Air Force plans to spend over $72 billion to procure 442 F-22 fighter/attack aircraft (4 fighter wing equivalents): 288 combat-designated aircraft and 154 backup aircraft. Table 3 shows the breakout of backup F-22 aircraft given (1) backup aircraft required using current Air Force backup aircraft criteria and (2) the procurement plan. If the F-22 experiences the same attrition rate as the F-15, the Air Force will be able to sustain four fighter wing equivalents for 12.5 years with a force of 36 attrition aircraft. Conversely, if the F-22 experiences one-half the attrition rate of the F-15, the Air Force will be able to sustain four fighter wing equivalents for 25 years with a force of 36 attrition aircraft. DOD plans to spend $89 billion to procure 1,000 F/A-18E/F aircraft. The Navy’s planned inventory distribution for the F/A-18E/F would continue to increase the relative number of fighter/attack aircraft used for combat versus backup categories. For example, in fiscal year 1993, 65 percent of the Navy/Marine Corps fighter/attack aircraft were categorized for combat. In fiscal year 1996, that is planned to increase to 68 percent.
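The sustainment figures quoted for the F-22 (36 attrition aircraft supporting four fighter wing equivalents for 12.5 years at the F-15 attrition rate, or 25 years at half that rate) reduce to dividing the attrition pool by expected annual peacetime losses. The sketch below back-solves the implied annual loss rate from the report's own numbers; it is an arithmetic illustration, not a model the report describes.

```python
# Sustainment years = attrition pool / expected annual peacetime losses.
# The implied F-15-historical loss rate for a four-wing F-22 force is
# back-solved from the report's figures (36 aircraft lasting 12.5 years).

def sustainment_years(attrition_pool: int, annual_losses: float) -> float:
    return attrition_pool / annual_losses

f15_rate = 36 / 12.5  # implied annual losses at the F-15 attrition rate
print(sustainment_years(36, f15_rate))      # 12.5 years, as reported
print(sustainment_years(36, f15_rate / 2))  # 25.0 years at half the rate
```

The back-solved rate works out to 2.88 aircraft lost per year, which is why halving the attrition rate exactly doubles the sustainment horizon.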
The distribution of the planned F/A-18E/F aircraft procurement would increase the fighter/attack combat aircraft proportion to 70 percent. FMFIA requires ongoing evaluations of internal agency management controls and accounting systems and annual reports to the President and Congress on the condition of those systems. FMFIA is not limited to accounting or administrative matters. Rather, it is intended to address the entire range of policies and procedures that management employs to perform its mission efficiently and effectively. In February 1994, the Secretary of Defense directed all Assistant Secretaries of Defense to improve implementation of the FMFIA. Numerous audits and reports by DOD and us, as well as congressional testimony, have shown that the Air Force and the Navy need to validate their backup aircraft criteria. In our view, the lack of valid criteria is a material weakness reportable under the FMFIA. In addition, to the extent that other program analyses rely on backup aircraft criteria, those analyses would share the same weakness. The Navy acknowledged this when it reported aircraft acquisition requirements processes (which used current backup aircraft criteria) as a material management weakness in its fiscal year 1993 and 1994 FMFIA reports. Attrition aircraft are used to replace combat-designated, training, and test and evaluation aircraft lost in peacetime mishaps. In 1994, the Air Force Materiel Command developed a concept that could be used to support the services’ aircraft needs. Although the report on which that recommendation was based offered no specific cost savings, a 1992 Air Force-sponsored study compared 8 years of storage costs plus reconstitution costs to 8 years of operating costs for selected aircraft, including the F-15 and the F-16. The study concluded that storage and reconstitution costs were only 1.9 percent of the operating and maintenance costs for an F-15 and 2.1 percent of operating and maintenance costs for an F-16.
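The 1992 study's comparison can be sketched as a simple ratio calculation: storage plus reconstitution costs about 1.9 percent (F-15) or 2.1 percent (F-16) of 8-year operating and maintenance costs. Only those ratios come from the report; the annual operating cost below is a hypothetical placeholder used to make the arithmetic concrete.

```python
# Compare 8-year storage-plus-reconstitution cost with 8-year operating
# cost, using the 1.9%/2.1% ratios the report cites for the F-15 and F-16.
# The annual O&M cost is an assumed placeholder, not a report figure.

def storage_cost(annual_operating_cost: float, years: int, ratio: float) -> float:
    """Storage + reconstitution cost implied by the cited cost ratio."""
    return annual_operating_cost * years * ratio

annual_op = 2_000_000  # assumed annual O&M cost per aircraft
for name, ratio in (("F-15", 0.019), ("F-16", 0.021)):
    saved = annual_op * 8 - storage_cost(annual_op, 8, ratio)
    print(f"{name}: 8-year savings from storage about ${saved:,.0f}")
```

Whatever the true annual operating cost, the structure of the calculation shows why storing unneeded attrition aircraft costs only a few percent of keeping them in operating units.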
Neither the Air Force nor the Navy/Marine Corps had exercised this option as of 1994. The services’ fiscal year 1996 plans show 218 attrition aircraft. Past attrition rates, however, show that some of these aircraft will not be needed for over 7 years. For example, over the past 5 years, the Air Force lost an average of about 17 F-16 aircraft per year to peacetime mishaps. On the basis of this rate, some of those F-16s will not be needed until the year 2002. However, the Air Force operates and maintains those aircraft in the same manner as combat-designated aircraft. That is, attrition aircraft are assigned to active and reserve units and the Air Force uses operation and maintenance funds that are appropriated for combat-designated, training, and test and evaluation aircraft to support attrition aircraft. In essence, funds that are expected to be used to operate and support combat-designated aircraft are being siphoned off to support attrition aircraft. Attrition aircraft operating and maintenance costs are difficult to determine. However, in 1994 the Air Force Logistics Management Agency estimated the annual incremental cost of one attrition F-16 in operating units to be $13,366. In fiscal year 1994, the Air Force provided Air National Guard units about $75,000 for each additional attrition aircraft in excess of the first three aircraft supported by the units. However, individual Guard units estimate annual operation and maintenance costs range from about $120,000 to $400,000 for each aircraft. According to Air National Guard and Air Force officials, as the number of authorized combat-designated aircraft assigned to each unit decreases, supporting attrition aircraft becomes more difficult. One unit has already reported a potential degradation of its combat-designated aircraft operation as a result of attrition aircraft that have been assigned to that unit. 
We recommend that the Secretary of Defense direct the Secretary of the Air Force to (1) develop and use supportable and consistent criteria to justify backup aircraft inventories and future procurement of backup aircraft as the Navy is doing and (2) report the lack of valid backup fighter/attack aircraft requirements criteria as a material management weakness, in compliance with FMFIA, until these criteria are developed and put in use. We also recommend that the Secretary of Defense direct the Secretary of the Air Force and the Secretary of the Navy to adjust backup aircraft inventories, where needed, to conform to supportable and consistent criteria once established. The comments DOD provided on a draft of this report appear in appendix IV. DOD partially concurred with the report. DOD believes more progress has been made in developing sound backup aircraft criteria than we describe. DOD agreed, however, that additional improvements may be necessary. Accordingly, DOD will undertake a review of the backup aircraft criteria. DOD concurred with our description of the trends in the number of backup aircraft maintained by the services, but commented there were inaccuracies in the report, apparently referring to the process we describe that arrived at the specific number of combat-designated aircraft in the forces. We believe our description of how the number of combat-designated aircraft was determined is accurately summarized, including reference to the Secretary of Defense’s January 1994 Annual Report to the President and the Congress. DOD only partially agreed with our analysis of actions taken in response to prior audit recommendations by others and us to validate backup aircraft requirements. According to DOD, both services have recognized a need to review their criteria. We believe this is a positive step. 
We also believe, however, that, in light of previous, largely unsuccessful efforts by others and us to persuade DOD and the services of the need to formulate valid backup aircraft criteria, the actions now underway need to be part of a larger process to ensure they are fully implemented. The recommendations in this report are intended to help achieve that objective. The Air Force does not accept that past criticisms of its criteria, or revisions currently being made to its policies, reflect a material weakness reportable under FMFIA. We disagree. The Air Force's and the Navy's lack of supportable criteria has been the long-standing subject of numerous reports and recommendations by others and us for corrective action. Based on those reports, the Navy has identified the aircraft requirements process as a material weakness and established a time frame for corrective action. In light of new Air Force aircraft procurements potentially costing over $72 billion, we continue to believe that the lack of valid backup aircraft criteria constitutes a material management weakness and is reportable under FMFIA. DOD concurred with our conclusion that the procurement of F-22 and F/A-18E/F aircraft should be based on valid criteria. DOD partially agreed with our conclusion that unneeded attrition aircraft should be placed in storage. DOD, while citing Navy policy to store unneeded aircraft to save costs, noted the Air Force contention that the incremental cost to maintain such aircraft with the active forces is relatively small and that these aircraft would be available for emergencies or other temporary needs. However, according to DOD, conclusive cost data are not yet available to support the Air Force's contention.
In light of the Navy's retention policy, the analyses discussed in this report that compare storage and reconstitution costs with operating costs, and the need to base backup aircraft requirements on quantifiable needs, we continue to believe that unneeded aircraft should not be operated and maintained with funds intended to support the authorized forces. DOD partially concurred with the recommendation that the Secretary of the Air Force develop supportable and consistent criteria to justify backup aircraft inventories and future procurements, and did not concur with a similarly directed recommendation to report the lack of valid backup fighter/attack aircraft requirements criteria as a material management weakness under FMFIA. Further, DOD partially concurred with the recommendation that the Air Force and the Navy adjust backup aircraft inventories to conform to supportable and consistent criteria. Considering the (1) lengthy history of reports concerning the need to strengthen the backup aircraft requirements determination criteria, (2) numerous recommendations to strengthen that process, (3) slow progress in that direction, and (4) planned procurements of costly F-22 and F/A-18E/F aircraft, we are retaining recommendations that identify the known weaknesses and establish time frames for resolving them through the FMFIA mechanism. We analyzed directives and other pertinent documents and interviewed agency officials regarding backup aircraft procurement planning criteria, inventory management requirements, and force reduction goals. We documented past findings and recommendations regarding backup inventories and criteria. We documented changes to backup criteria and other actions taken as a result of prior recommendations.
Using the services' fiscal years 1995 and 1996 programming plans and other service-provided aircraft inventory data, we documented and compared reductions in combat and backup aircraft inventories for fiscal years 1993 and 1994 and projected inventories for fiscal years 1995 and 1996. We interviewed management officials at the Aerospace Maintenance and Regeneration Center at Davis-Monthan Air Force Base, Arizona, and reviewed studies regarding the potential for storing attrition aircraft until needed. We also visited operational units responsible for operating and maintaining backup aircraft, including active wings and squadrons, a training command, and Air National Guard units, to discuss the impact of these aircraft on unit operations and costs. We reviewed backup aircraft procurement plans to determine whether the standardized backup aircraft planning factors, previously reported as outdated and in need of revision, had been changed. We reviewed FMFIA reports prepared by the Air Force, the Navy, and DOD for fiscal years 1993 and 1994 to determine whether material weaknesses were reported in the area of aircraft requirements. We performed our review between October 1993 and February 1995 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Air Force, and the Navy; the Director of the Office of Management and Budget; and other appropriate congressional committees. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-3504 if you have any questions concerning this report. Major contributors to this report are listed in appendix V.

F-14 Aircraft Requirements, Naval Audit Service (050-S-92, May 19, 1992). The Naval Audit Service reported that the Navy had overstated its need for backup F-14 training and maintenance aircraft.
The Navy did not concur with the methodology the Naval Audit Service proposed to calculate training aircraft requirements, nor with a recommendation to reduce F-14 depot maintenance funding. The Navy did concur, in principle, with the recommendation that it develop plans to remove nonessential F-14s from its active inventory.

Opportunities to Reduce the Number of Combat Aircraft Purchased for Noncombat Purposes (GAO Testimony, June 2, 1983). We questioned criteria used by the services to justify the number of noncombat aircraft required. We questioned the training, maintenance, and attrition categories for the F-14, F-15, F-16, and F/A-18 and reported that the Department of Defense (DOD) had never reviewed support aircraft justifications as it said it would in 1977. DOD stated that, regardless of the justification, the support aircraft were necessary and would be used in war.

F-16 Integrated Logistics Support: Still Time to Consider Economical Alternatives (GAO/LCD-80-89, Aug. 20, 1980). We questioned the Air Force's stated requirement for a 10-percent increase in F-16 aircraft to compensate for aircraft in depot maintenance, since the aircraft was designed to eliminate planned depot maintenance. DOD stated that the 10-percent factor had been historically accurate for tactical fighter aircraft.

The Congress Should Require Better Justification of Aircraft for Noncombat Missions (GAO/LCD-80-93, July 22, 1980). We recommended to Congress, on the basis of past work, that appropriations be withheld for procurement of F-14s, F-15s, F-16s, F/A-18s, and A-10s until the services justified their noncombat aircraft needs with current and realistic data.

Operational and Support Costs of the Navy's F/A-18 Can Be Substantially Reduced (GAO/LCD-80-65, June 6, 1980). We determined that the Navy had overstated the need for F/A-18 maintenance backup aircraft because it had not fully factored in the F/A-18's reliability and maintainability characteristics.
Unnecessary Procurement of A-10 Aircraft for Depot Maintenance Floats (GAO/LCD-79-431, Sept. 6, 1979). We found that, despite the A-10's design to eliminate depot-level maintenance, the Air Force continued to use the standard 10-percent reserve for maintenance to justify procurement. We recommended that DOD direct the Air Force to develop more meaningful estimates to justify procurement. The Air Force responded that it would study how to develop backup aircraft numbers but generally felt the additional aircraft were needed.

Letter to the Secretary of Defense (GAO/LCD-79-420, May 22, 1979). We restated our findings from our 1977 report and recommended that action be taken immediately to affect procurement of F-14s and F-15s.

Need to Strengthen Justification and Approval Process for Military Aircraft Used for Training, Replacement, and Overhaul (GAO/LCD-77-423, Oct. 28, 1977). We examined inventories of F-15s and F-14s and found that backup requirements for training, attrition, and maintenance were overstated. We recommended that Congress require DOD to justify requirements for noncombat aircraft on realistic and supportable data. DOD agreed that all programs should be based on supportable data and announced that a review was underway to determine whether this was the case.

Hugh E. Brady, Evaluator-in-Charge
Frank R. Marsh, Evaluator
Jeffrey C. McDowell, Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S.
General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
FMS is the government’s financial manager, central disburser, and collections agency as well as its accountant and reporter of financial information. For fiscal year 2000, the U.S. government disbursed over $1.9 trillion primarily for Social Security and veterans’ benefit payments, Internal Revenue Service (IRS) tax refunds, federal employee salaries, and vendor billings. With several exceptions (the largest being the Department of Defense), FMS makes disbursements for most federal agencies. FMS is also responsible for administering the federal government’s collections system. In fiscal year 2000, the government collected over $2 trillion in taxes, duties, and fines. In addition, FMS oversees the federal government’s central accounting and reporting systems used to reconcile and keep track of the federal government’s assets and liabilities. Financial and budget execution information from these central systems is used by FMS to publish financial reports that are available for use by the Congress, the Office of Management and Budget, other federal agencies, and others who make financial decisions on behalf of the U.S. government. FMS maintains multiple financial and information systems to help it process and reconcile moneys disbursed and collected by the various government agencies. These banking, collection, and disbursement systems are also used to process agency transactions, record relevant data, transfer funds to and from the Treasury, and facilitate the reconciliation of those transactions. FMS has three data centers and also has three field operations centers that are responsible for issuing paper check and electronic funds transfer payments. In addition, FMS relies on a network of four contractor data centers and the FRBs to help carry out its financial management responsibilities. 
Our objectives were to evaluate and test the effectiveness of the computer controls over FMS's key financial management systems and to determine the status of the computer control weaknesses discussed in our fiscal year 1999 audit report. We used a risk-based rotation approach for testing general and application controls. Under that methodology, every 3 years, each data center and key financial application is subjected to a full-scope review that includes testing in all of the computer control areas defined in our Federal Information System Controls Audit Manual (FISCAM). During the interim years, we focus our testing on the FISCAM areas that we have determined to be at greater risk for computer control weaknesses. See appendix I for the scope and methodology of our fiscal year 2000 review at each of the selected data centers and for the key financial applications. During the course of our work, we communicated our findings to FMS management, which informed us of the actions FMS planned or had taken to address the weaknesses we identified. We performed our work from August 2000 through February 2001 in accordance with U.S. generally accepted government auditing standards. We requested comments on a draft of this report from the Department of the Treasury. The comments are discussed in the "Agency Comments and Our Evaluation" section of this report and reprinted in appendix II. An entity-wide program for security management is the foundation of an entity's security control structure and should establish a framework for continual (1) risk assessments, (2) development and implementation of effective security procedures, and (3) monitoring and evaluation of the effectiveness of security procedures. A well-designed entity-wide security management program helps to ensure that security controls are adequate, properly implemented, and applied consistently across the entity and that responsibilities for security are clearly understood.
The overriding reason that computer control problems at FMS continued to exist during fiscal year 2000 is that FMS does not have an effective entity-wide computer security management program (security program). In response to our prior years' recommendation that FMS establish an effective security program, FMS completed its Information Technology Security Handbook for Major Application Systems in late 1999. During fiscal year 2000, the FMS Information Technology Security Policy Manual and Entity-Wide Information Technology Security Program manuals were approved and distributed. In addition, FMS developed an Entity-Wide Information Technology Security Program Implementation Strategy in March 2001. This document discusses FMS's high-level strategy for the full implementation of its security program by a target date of September 30, 2002. In April 2001, FMS completed an internal assessment to determine the effectiveness of its security program and its initiatives. The assessment was performed using the Federal Information Technology Security Assessment Framework (Framework). The Framework provides a method for agency officials (1) to determine the current status of their security programs relative to existing policy and (2) where necessary, to establish a target for improvement. The Framework identifies five levels of security program effectiveness—Level 1, Documented Policy; Level 2, Documented Procedures; Level 3, Implemented Procedures and Controls; Level 4, Tested and Reviewed Procedures and Controls; and Level 5, Fully Integrated Procedures and Controls. The five levels measure specific management, operational, and technical control objectives. Each of the five levels contains criteria to determine whether the level is adequately implemented, with each successive level representing a more complete and effective security program. FMS's assessment did not contain an overall determination of the level of effectiveness of its security program.
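To illustrate how an overall effectiveness level might be derived from such an assessment, the sketch below applies a simplified model of our own, not the Framework's official scoring method: a program is rated at the highest level whose criteria, together with those of every lower level, are fully met.

```python
# Simplified (assumed) scoring model for the five-level Federal IT
# Security Assessment Framework described above.

LEVELS = [
    "Documented Policy",
    "Documented Procedures",
    "Implemented Procedures and Controls",
    "Tested and Reviewed Procedures and Controls",
    "Fully Integrated Procedures and Controls",
]

def overall_level(criteria_met):
    """Return 0-5: the longest run of fully met levels starting at Level 1.

    criteria_met is a list of five booleans, one per level, indicating
    whether all of that level's criteria are satisfied."""
    level = 0
    for met in criteria_met:
        if not met:
            break
        level += 1
    return level

# Example: policies and procedures are documented, but implementation,
# testing, and integration are incomplete.
print(overall_level([True, True, False, False, False]))  # 2
```

Under this model, a program with unmet objectives at any lower level cannot claim a higher rating, which mirrors the Framework's idea that each successive level builds on the ones beneath it.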
Our review of its assessment found that FMS identified 29 of the 45 control objectives that should be applied to a secure system as not met, and another 15 control objectives for which some aspects of the related performance criteria were only partially met. As discussed above, FMS has taken steps toward improving its security program by developing policy manuals, an implementation strategy, and an internal self-assessment. Through its self-assessment, FMS has identified areas within its existing security program that need improvement in order to achieve a fully implemented Level 5 security program. However, FMS has not yet developed a detailed plan that describes the remedial actions; resources (physical, human capital, and fiscal); target dates; and responsible agency officials needed to correct the shortcomings of its security program. A Level 5 security program is described as a comprehensive and integral part of an agency's organizational culture. The components of a fully integrated Level 5 security program include an active entity-wide security program that achieves cost-effective security; integration of information technology security through all aspects of the information technology life cycle; understanding and management of security vulnerabilities; continual evaluation of threats and adjustment of controls to the changing security environment; identification of additional or more cost-effective security alternatives as the need arises; measurement of the costs and benefits of security; and establishment of status metrics to assess the effectiveness of the security program. FMS's entity-wide security control structure has yet to address many of the weaknesses and related significant risks associated with its current and evolving computing environment. Our audits for fiscal years 2000, 1999, 1998, and 1997 have identified significant general computer control weaknesses at each of the FMS data centers.
As shown in table 1, these weaknesses have involved each of the six general control areas defined in FISCAM at multiple FMS data centers. If FMS had a fully developed, implemented, and effective security program, weaknesses found in prior years would be less likely to recur. For example, at one data center, we found access control weaknesses during our fiscal year 2000 audit that were the same as or very similar to issues that we had reported at other data centers in previous years' audits. Although FMS took corrective actions to address the individual prior years' weaknesses found at those specific data centers, FMS did not determine whether these weaknesses also existed at its other data centers that were using the same type of computing platform. Another example involved a data center that performed security violation and sensitive activity monitoring procedures over its legacy environments. However, FMS did not apply these same procedures and requirements to a new computing environment introduced during fiscal year 2000. Until FMS takes a more disciplined and structured approach to computer security through a fully implemented entity-wide security program, there is a significantly increased risk that controls will not be adequate, properly implemented, or applied consistently across each of its data centers. Integral to all security programs is a continuous risk assessment process for determining the sensitivity of information and systems, acceptable levels of risk, and specific controls needed to provide adequate security over computer resources and data. Our May 1998 best practices guide on information security management practices at leading nonfederal organizations found that organizations successfully managed their information security risks through an ongoing cycle of risk management activities.
During our fiscal year 2000 audit, we found that FMS had not developed a comprehensive entity-wide risk assessment to be used as a basis for establishing appropriate security policies and selecting cost-effective techniques for implementing those policies. Documents and approaches that FMS had already developed could be used together to form the foundation of an entity-wide risk assessment. Without a comprehensive risk assessment that identifies entity-wide risks, FMS could adopt practices that are inconsistent with acceptable standards and expose itself to increased weaknesses and unnecessary risks. General controls are the structure, policies, and procedures that apply to an entity's overall computer operations. General controls establish the environment in which application systems and controls operate. In addition to the entity-wide security management program discussed above, they include access controls, system software controls, application software development and change controls, segregation of duties, and service continuity controls. An effective general control environment would (1) protect data, files, and programs from unauthorized access, modification, and destruction; (2) limit and monitor access to programs and files that control computer hardware and secure applications; (3) prevent the introduction of unauthorized changes to systems and applications software; (4) prevent any one individual from controlling key aspects of computer-related operations; and (5) ensure the recovery of computer processing operations in case of a disaster or other unexpected interruption. In addition to the weaknesses in its security program discussed above, our fiscal year 2000 review of FMS's general computer controls also identified serious new general control weaknesses in access controls and system software.
As we previously reported for fiscal years 1998 and 1999, FMS is continuing the process of moving one of its key financial applications to a distributed environment. As of June 30, 2001, FMS reported that approximately 81 percent of the users had converted to a new version of this application, and completion of the new application’s implementation was scheduled for September 30, 2001. FMS officials have informed us that they expect that the migration of its key financial application will facilitate the implementation of more effective controls in the future. Our fiscal year 2000 audit found that as of September 30, 2000, FMS had corrected or mitigated the risks associated with 35 of the 61 computer control weaknesses discussed in our prior year’s report. However, we are continuing to reaffirm our prior year’s recommendations to correct the remaining weaknesses discussed in our fiscal year 1999 report because of the significance of the associated risks and the lack of other effective compensating controls to mitigate those risks. Access controls are designed to limit or detect access to computer programs, data, equipment, and facilities to protect these resources from unauthorized modification, disclosure, loss, or impairment. Such controls include logical and physical security controls. Logical security control measures involve the use of computer hardware and security software programs to prevent or detect unauthorized access by requiring users to input unique user identifications (ID), passwords, or other identifiers that are linked to predetermined access privileges. Logical security controls restrict the access of legitimate users to the specific systems, programs, and files they need to conduct their work and prevent unauthorized users from gaining access to computing resources. 
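The logical access model just described, in which each unique user ID is linked to predetermined access privileges, can be sketched as a deny-by-default check. The sketch below is ours, not FMS's actual security software, and the user IDs and privilege names are hypothetical:

```python
# Hedged sketch of a logical access control: a unique user ID maps to a
# predetermined set of privileges, and every request is checked against
# that grant. IDs and privilege names are hypothetical.

ACCESS_GRANTS = {
    "jsmith":  {"payments.read"},
    "opslead": {"payments.read", "payments.release"},
}

def authorize(user_id: str, privilege: str) -> bool:
    """Deny by default: allow access only if the ID exists and holds the privilege."""
    return privilege in ACCESS_GRANTS.get(user_id, set())

print(authorize("jsmith", "payments.release"))  # False: privilege not granted
```

The weaknesses described in this report, shared IDs, excessive privileges, and unmonitored access, all undermine exactly this mapping between an individual identity and a limited, predetermined grant.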
Physical security controls include locks, guards, badges, alarms, and similar measures (used alone or in combination) that help to safeguard computer facilities and resources from intentional or unintentional loss or impairment by limiting access to the buildings and rooms where they are housed. Our review of FMS's access controls identified a number of weaknesses at all of the sites we visited. These weaknesses, many of which were included in our prior years' reports, included data centers that
- had weak network security configurations that allowed us to identify user names and compromise the associated passwords, which resulted in our gaining unauthorized access to the mainframe production environment of a key financial application at one data center, the development environments at another data center, and an unrelated procurement application at a third data center;
- granted excessive and powerful systems privileges to certain users who did not need such access;
- did not effectively manage the administration of certain passwords and were not always applying security system parameters so as to provide optimum security or appropriate segregation of duties; and
- were not effectively monitoring and controlling dial-in access to certain local area networks and the mainframe environments.
In addition, physical security controls at three of the five sites we visited were not sufficient to control physical access to these centers. For example, at one data center, management was not able to provide us with a list of individuals granted physical access to the building because the security system was not functioning properly. The risks created by these access control weaknesses were heightened because FMS was not adequately managing and monitoring user access activities. Program managers and security personnel did not consistently monitor and evaluate user access rights, security violations, and software security settings at many of the sites visited.
Because of these identified access control weaknesses, FMS is also at risk that unauthorized activities, such as corruption of financial data, disclosure of sensitive data, or introduction of malicious programs or unauthorized modifications of software, will go undetected. System software coordinates and helps control the input, processing, output, and data storage associated with all of the applications that run on a system. System software includes operating system software, system utilities, program library systems, file maintenance software, security software, data communications systems, and database management systems. Controls over access to and modification of system software are essential to protect the overall integrity and reliability of information systems. During our fiscal year 2000 audit, we found system software weaknesses at four of the five sites we visited. Specifically, we found duplicate software modules in certain libraries, a lack of procedures to ensure that system software changes were properly documented, and one system's inability to generate the reports needed to monitor user activities. These weaknesses increase the risk of obsolete or inappropriate versions of a program executing and causing unexpected results, unauthorized changes to system software, or unauthorized access to sensitive systems. Controls over the design, development, and modification of application software help to ensure that all programs and program modifications are properly authorized, tested, and approved. Such controls also help prevent security features from being inadvertently or deliberately turned off and processing irregularities or malicious code from being introduced. We found an application software development and change control weakness at one of the five FMS sites we visited.
As we reported in the prior year, we found during our fiscal year 2000 audit that a significant weakness at the site was that policies and procedures over system design, development, and modification were not established, were inadequate, or were simply not being followed. Without other effective compensating controls in place, failure to implement a disciplined approach to application software development and change controls may result in changes that are not tested, documented, or approved. Another key control for safeguarding programs and data is to ensure that duties and responsibilities for authorizing, processing, recording, and reviewing data, as well as initiating, modifying, migrating, and testing programs, are separated to reduce the risk that errors or fraud will occur and go undetected. Duties that should be appropriately segregated include applications and system programming and responsibilities for computer operations, security, and quality assurance. Policies outlining the supervision and assignment of responsibilities to groups and related individuals should be documented, communicated, and enforced. As we reported in the prior year, we also found during our fiscal year 2000 audit that the programmers at one data center were also serving as backup computer operators, which significantly increases the risk for unauthorized or inappropriate changes to production data and source code or disclosure of sensitive data. An organization’s ability to accomplish its mission can be significantly affected if it loses the ability to process, retrieve, and protect information that is maintained electronically. For this reason, organizations should have (1) established procedures for protecting information resources and minimizing the risk of unplanned interruptions and (2) plans for recovering critical operations should interruptions occur. 
A contingency or disaster recovery plan specifies emergency response, backup operations, and postdisaster recovery procedures to ensure the availability of critical resources and facilitate the continuity of operations in an emergency situation. It addresses how an organization will deal with a full range of contingencies, from electrical power failures to catastrophic events, such as earthquakes, floods, and fires. The plan also identifies essential business functions and ranks resources in order of criticality. To be most effective, a contingency plan should be periodically tested in disaster simulation exercises and employees should be trained in and familiar with its use. Because it is not cost-effective to provide the same level of continuity for all operations, it is important that organizations analyze relevant data and operations to determine which are the most critical and what resources are needed to recover and support them. As discussed in our May 1998 best practices guide, the criticality and sensitivity of various data and operations should be determined and ranked based on an overall risk assessment of the entity’s operations. Factors to be considered include the importance and sensitivity of the data and other organizational assets handled or protected by the individual operations and the cost of not restoring data or operations promptly. During our fiscal year 2000 follow-up review of FMS’s service continuity, we found that FMS management was still in the process of developing an entity-wide service continuity plan. Consequently, the FMS data centers were still at significant risk that in the event of an emergency or disaster, data center personnel might not be prepared to effectively prioritize recovery activities, integrate recovery steps, or fully recover systems. FMFIA requires ongoing evaluations of the internal control and accounting systems that protect federal programs against fraud, waste, abuse, and mismanagement. 
It further requires that the heads of federal agencies report annually to the president on the condition of these controls and systems and on their actions to correct the weaknesses identified. During the course of our work, we communicated our general computer control findings to FMS management. As a result, FMS reported its general computer control problems as a material weakness to the Department of the Treasury. The Department of the Treasury reported in its fiscal year 2000 Accountability Report that FMS, along with other Treasury components, had a material weakness in general computer controls designed to safeguard data, protect computer application programs, protect system software from unauthorized access, and ensure continued computer operations. Application controls relate directly to the individual computer programs, which are used to perform certain types of work, such as generating payments or recording transactions in a general ledger. In an effective general control environment, application controls help to further ensure that transactions are valid, properly authorized, and completely and accurately processed and reported. Authorization controls for specific applications, similar to general access controls, should be established to (1) ensure individual accountability and proper segregation of duties, (2) ensure that only authorized transactions are entered into the application and processed by the computer, (3) limit the processing privileges of individuals, and (4) prevent and detect inappropriate or unauthorized activities. Our fiscal year 2000 review of FMS’s authorization controls found that a number of the weaknesses discussed in our fiscal years 1998 and 1999 reports remained uncorrected and certain new weaknesses over one key financial application were identified. 
These weaknesses included inappropriate access to application functions and privileges that were not required by the users’ job responsibilities and that in some instances also created an inadequate segregation of duties; users sharing IDs or being assigned multiple IDs without a functional need; security reports not being consistently monitored or followed up on; application passwords not being properly managed; lack of certain user access request documentation and recertifications; and lack of documented policies and procedures for an application. The authorization control weaknesses described above increase the risk of unauthorized activities such as inappropriate processing of transactions, unauthorized access to or disclosure of sensitive data, corruption of financial data, or disruption of operations. Completeness controls are designed to ensure that all transactions are processed and missing transactions are identified. Common completeness controls include the use of record counts and control totals, computer sequence checking, computer matching of transaction data with data in a master or suspense file, and checking of reports for transaction data. Our fiscal year 2000 review of completeness controls over one key FMS financial application found that input data edit and validation procedures were not complete, software needed to monitor modifications to the database was not in place, and application recovery policies and procedures were not written. As a result, there was an increased risk of processing incomplete or erroneous data and disruption of operations. Because the FRBs are integral to the operations of FMS, we assessed the effectiveness of general and application controls that support key FMS financial systems maintained and operated by the FRBs. Overall, we found that the FRBs had implemented effective general and application controls. 
Our fiscal year 2000 audit procedures identified certain new vulnerabilities in general controls that did not pose significant risks to the FMS financial systems but nonetheless warranted FRB management’s attention and action. These included vulnerabilities in general controls over (1) access to data, programs, and computing resources; (2) system software; and (3) service continuity. Our follow-up work found that the FRBs had corrected or mitigated the risks associated with all of the vulnerabilities that were identified in our prior year’s report. We provided details of these matters in a separate letter to the Board of Governors of the Federal Reserve System along with our recommendations for improvement. FRB management has informed us that the FRBs have taken or plan to take corrective actions to address the vulnerabilities related to FMS systems that we identified. The pervasiveness of the computer control weaknesses—both old and new—at FMS and its contractor data centers places billions of dollars of payments and collections at risk of loss or fraud. Sensitive data are at risk of inappropriate disclosure, and computer-based operations are at risk of disruption. These risks grow more severe as FMS expands its networked environment through the migration of its financial applications from mainframes to client-server environments. Thus, as FMS provides users greater and easier access to larger amounts of data and system resources, well-designed and effective general and application controls are essential if FMS’s operations and computer resources are to be properly protected. It will take a significant and sustained commitment by FMS’s management to fully address its serious computer control weaknesses, including fully implementing an effective security program. 
In our December 14, 2001, Limited Official Use version of this report, we reaffirmed our prior year’s recommendations that the secretary of the Treasury direct the commissioner of the Financial Management Service, along with the assistant commissioner for information resources, to fully implement an effective security program, to correct each individual weakness that we identified and address each of the 85 specific recommendations detailed in the December 14, 2001 report, and to work with the FRBs to monitor corrective actions taken to resolve the computer control vulnerabilities related to FMS systems supported by the FRBs that we identified and communicated to the FRBs. In addition, we recommended that FMS develop a detailed plan that describes the remedial actions, resources, target dates, and responsible agency officials to facilitate the implementation of its security program. In commenting on a draft of the Limited Official Use version of this report, FMS stated that it understands that additional improvements are needed and recognizes the importance of having an effective entity-wide security program, as well as strong internal control, given its critical payment, collections, and government-wide accounting responsibilities. FMS also stated that actions were under way to address the individual audit findings. Further, FMS stated that computer security remains one of FMS’s top priorities and that it is completely dedicated to fully implementing and maintaining an effective and robust security program. FMS also stated that it has made great strides in eliminating the vulnerabilities caused by old legacy systems and obsolete technology, resulting in a significant reduction in risks that is not reflected in our report. We believe the report adequately reflects such progress for actions taken by FMS during fiscal year 2001. 
In particular, our report noted that continued progress has been made in the replacement of FMS’s key financial application (which is used by federal agencies to account for their disbursement and receipt activities) by a new version of the application on a distributed computing platform. However, our work over the past 4 years has continued to identify serious issues at FMS. As we stated in our report, FMS’s entity-wide security control structure has yet to address many of the weaknesses and related significant risks associated with its current and evolving computing environment. For example, we found that FMS’s corrective actions were not being implemented on an entity-wide basis. Weaknesses found and corrected in prior years at certain data centers were identified during the current year audit at another data center. These weaknesses in FMS’s computer security controls affect not only the effectiveness of computer security over the new applications recently moved to distributed environments, but also FMS’s other key financial applications, which are used to collect and disburse billions of dollars annually. In its comment letter, FMS pointed out that the general consensus of the members of the Treasury CIO Council is that a computer security program that achieves Level 3 effectiveness (which is reached when computer security procedures and technical controls are implemented) is an appropriate standard and should be the department’s objective. However, we believe that an effective entity-wide security program is achieved at Level 5 and is the appropriate level for Treasury given its government-wide responsibilities as financial manager, central disburser, and collections agency as well as accountant and reporter of financial information. 
The need for Treasury to implement an effective and fully integrated entity-wide security program is further underscored by recent events and reports that critical federal operations and assets continue to be highly vulnerable to computer-based attacks. It is important to note that until an entity has accomplished the Level 4 goal, which requires the testing of security procedures and technical controls, it does not have reasonable assurance that the documented controls developed in Levels 1 through 3 have been effectively implemented. A fully integrated Level 5 security program helps to ensure that an organization has incorporated the fundamental activities needed to manage information security risks cost-effectively and is not reacting to individual problems on an ad hoc basis only after a problem has been detected. Since FMS’s systems process and account for billions of dollars in transactions, we are encouraged that FMS has a goal of continuing to strive for a high level of security effectiveness. While its near-term goal of achieving Level 3 effectiveness is commendable, we cannot overemphasize the need for FMS management to make a focused and sustained commitment to accelerate the full implementation of an effective entity-wide security program. We will follow up on these matters during our audit of the federal government’s fiscal year 2001 financial statements. In addition to its written comments, the staff of FMS provided technical comments, which have been incorporated as appropriate. 
We are sending copies of this report to the chairmen and ranking minority members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Governmental Affairs; Senate Committee on the Budget; Subcommittee on Treasury and General Government, Senate Committee on Appropriations; House Committee on Appropriations; House Committee on Ways and Means; House Committee on Government Reform; House Committee on the Budget; Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform; and Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations. We are also sending copies of this report to the commissioner of the Financial Management Service, the inspector general of the Department of the Treasury, the director of the Office of Management and Budget, and other agency officials. Copies will also be made available to others upon request. If you have any questions regarding this report, please contact me at (202) 512-3406. Key contributors to this assignment were Paula M. Rascona, Daniel G. Mesler, and Mickie E. Gray. We used a risk-based and rotation approach for testing general and application controls. Under that methodology, every 3 years each data center and key financial application is subjected to a full-scope review that includes testing in all of the computer control areas defined in the FISCAM. During the interim years, we focus our testing on the FISCAM areas that we have determined to be at greater risk for computer control weaknesses. 
The scope of our work for fiscal year 2000 included follow-up on weaknesses discussed in our fiscal year 1999 report; a focused review at three data centers of the two general control areas intended to (1) protect data, files, and programs from unauthorized access, modification, and destruction and (2) limit and monitor access to system software programs and files that control computer hardware and secure applications; and a focused review at a fourth data center of the three general control areas intended to (1) protect data, files, and programs from unauthorized access, modification, and destruction; (2) prevent the introduction of unauthorized changes to systems; and (3) ensure the recovery of computer processing operations in case of a disaster or other unexpected interruption. We limited our work at another data center to a follow-up review of the status of weaknesses discussed in our fiscal year 1999 report. We limited our testing of FMS’s entity-wide security program to a comparison of FMS’s information security manuals with our executive guide on information security management. We performed a full-scope application control review of one key FMS financial application to determine whether the application is designed to ensure that access privileges (1) establish individual accountability and proper segregation of duties, (2) limit the processing privileges of individuals, and (3) prevent and detect inappropriate or unauthorized activities; data are authorized, converted to an automated format, and entered into the application accurately, completely, and promptly; data are properly processed by the computer and files are updated; erroneous data are captured, reported, investigated, and corrected; and files and reports generated by the application represent transactions that actually occur and accurately reflect the results of processing, and reports are controlled and distributed to the authorized users. 
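Edit, validation, and completeness controls of the kind described above (record counts, control totals, sequence checks, and input edits) can be illustrated with a short sketch. This is purely illustrative; the batch layout, field names, and tolerance rules are assumptions for the example, not FMS's actual design:

```python
# Illustrative batch completeness check: compare a transaction file against
# the record count and control total carried in its trailer record.
# The batch layout and field names here are assumptions for illustration.

def check_batch(transactions, trailer_count, trailer_total):
    """Return a list of problems found; an empty list means the batch passed."""
    problems = []

    # Record-count check: every transaction sent must have arrived.
    if len(transactions) != trailer_count:
        problems.append(
            f"record count mismatch: got {len(transactions)}, expected {trailer_count}"
        )

    # Control-total check: amounts must sum to the trailer's control total.
    total = sum(t["amount"] for t in transactions)
    if total != trailer_total:
        problems.append(f"control total mismatch: got {total}, expected {trailer_total}")

    # Sequence check: flag gaps or duplicates so missing items can be identified.
    seqs = sorted(t["seq"] for t in transactions)
    expected = list(range(seqs[0], seqs[0] + len(seqs))) if seqs else []
    if seqs != expected:
        problems.append("sequence gap or duplicate detected")

    # Simple input edit: reject obviously invalid amounts for follow-up.
    for t in transactions:
        if t["amount"] <= 0:
            problems.append(f"invalid amount on seq {t['seq']}: {t['amount']}")

    return problems

batch = [
    {"seq": 1, "amount": 150},
    {"seq": 2, "amount": 275},
    {"seq": 4, "amount": 90},   # seq 3 is missing
]
print(check_batch(batch, trailer_count=4, trailer_total=605))
```

A batch that fails any of these checks would be held in suspense and reported for investigation rather than processed, which is the behavior the completeness-control objectives above call for.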
We limited our work over another seven key financial applications to a follow-up review of the status of weaknesses discussed in our fiscal year 1999 report. To evaluate the general and application controls, we identified and reviewed FMS’s information system general and application control policies and procedures; observed controls in operation; conducted tests of controls, which included selecting items using a method in which the results are not projectable to the population; and held discussions with officials at selected FMS data centers to determine whether controls were in place, adequately designed, and operating effectively. We performed network vulnerability assessment testing at three data centers. Through our network security vulnerability assessments, we attempted to access sensitive data and programs. These attempts were performed with the knowledge and cooperation of appropriate FMS officials. At a fourth data center, the scope of our network vulnerability assessment testing was limited to a review of a redacted report prepared by other auditors. Because the FRBs are integral to the operations of FMS, we followed up on the status of the FRBs’ corrective actions to address vulnerabilities discussed in our fiscal year 1999 report. We assessed general controls over FMS systems that the FRBs maintain and operate, and we evaluated application controls over two key FMS financial applications. To assist in our evaluation and testing of computer controls, we contracted with the independent public accounting firm PricewaterhouseCoopers LLP. We determined the scope of our contractor’s audit work, monitored its progress, and reviewed the related working papers to ensure that the resulting findings were adequately supported. During the course of our work, we communicated our findings to FMS management. We performed our work from August 2000 through February 2001 in accordance with U.S. generally accepted government auditing standards. 
The Financial Management Service's (FMS) overall security control environment continues to be ineffective in identifying, deterring, and responding to computer control weaknesses promptly. Consequently, billions of dollars of payments and collections are at significant risk of loss or fraud, sensitive data are at risk of inappropriate disclosure, and critical computer-based operations are vulnerable to serious disruptions. During its fiscal year 2000 audit, GAO found new general computer control weaknesses in the entity-wide security management program, access controls, and system software. GAO also identified new weaknesses in the authorization and completeness controls over one key FMS financial application. GAO's follow-up on the status of FMS's corrective actions to address weaknesses discussed in its fiscal year 1999 report found that, as of September 30, 2000, FMS had corrected or mitigated the risks associated with 35 of the 61 computer control weaknesses discussed in that report. To assist FMS management in addressing its computer control weaknesses, GAO made four overall recommendations in this public report.
Title XIX of the Social Security Act establishes Medicaid as a joint federal-state program to finance health care for certain low-income children, families, and individuals who are aged or disabled. Medicaid is an open-ended entitlement program, under which the federal government is obligated to pay its share of expenditures for covered services provided to eligible individuals under each state’s federally approved Medicaid plan. States operate their Medicaid programs by paying qualified health care providers for a range of covered services provided to eligible beneficiaries and then seeking reimbursement for the federal share of those payments. CMS has an important role in ensuring that states comply with statutory Medicaid payment principles when claiming federal reimbursements for payments made to institutional and other providers who serve Medicaid beneficiaries. For example, Medicaid payments must be “consistent with efficiency, economy, and quality of care,” and states must share in Medicaid costs in proportions established according to a statutory formula. Within broad federal requirements, each state administers and operates its Medicaid program in accordance with a state Medicaid plan, which must be approved by CMS. A state Medicaid plan details the populations a state’s program serves, the services the program covers (such as physicians’ services, nursing home care, and inpatient hospital care), and the rates of and methods for calculating payments to providers. State Medicaid plans generally do not detail the specific arrangements a state uses to finance the nonfederal share of program spending. Title XIX of the Social Security Act allows states to derive up to 60 percent of the nonfederal share from local sources, as long as the state itself contributes at least 40 percent. Over the last several years, CMS has taken a number of steps to help ensure the fiscal integrity of the Medicaid program. 
These include making internal organizational changes that centralize the review of states’ Medicaid financing arrangements and hiring additional staff to review each state’s Medicaid financing. The agency also published in May 2007 a final rule related to Medicaid payment and financing. This rule would, among other things, limit payments to government providers to their cost of providing Medicaid services. The Secretary is prohibited by law from implementing the rule until May 25, 2008. From 1994 through 2005, we reported numerous times on a number of financing arrangements that created the illusion of a valid state Medicaid expenditure to a health care provider. Payments under these arrangements enabled states to claim federal matching funds regardless of whether the program services paid for had actually been provided. As various schemes came to light, Congress and CMS took several actions from 1987 through 2002, through law and regulation, to curtail them (see table 1). Many of these arrangements involved payment arrangements between the state and government-owned or government-operated providers, such as local-government-operated nursing homes. They also involved supplemental payments—payments states made to these providers separate from and in addition to those made at a state’s standard Medicaid payment rate. The supplemental payments connected with these arrangements were illusory, however, because states required these government providers to return part or all of the payments to the states. Because government entities were involved, all or a portion of the supplemental payments could be returned to the state through an intergovernmental transfer, or IGT. Financing arrangements involving illusory payments to Medicaid providers have significant fiscal implications for the federal government and states. The exact amount of additional federal Medicaid funds generated through these arrangements is not known, but was in the billions of dollars. 
For example, a 2001 regulation to curtail misuse of the upper payment limit (UPL) regulation was estimated to have saved the federal government approximately $17 billion from fiscal year 2002 through fiscal year 2006. In 2003, we designated Medicaid to be a program at high risk of mismanagement, waste, and abuse, in part due to concerns about states’ use of inappropriate financing arrangements. States’ use of these creative financing mechanisms undermined the federal-state Medicaid partnership as well as the program’s fiscal integrity in three ways. First, inappropriate state financing arrangements effectively increased the federal matching rate established under federal law by increasing federal expenditures while state contributions remained unchanged or even decreased. Figure 1 illustrates a state’s arrangement in place in 2004 in which the state increased federal expenditures without a commensurate increase in state spending. In this case, the state made a $41 million supplemental payment to a local-government hospital. Under its Medicaid matching formula, the state paid $10.5 million and CMS paid $30.5 million as the federal share of the supplemental payment. After receiving the supplemental payment, however, the hospital transferred back to the state approximately $39 million of the $41 million payment, retaining just $2 million. Creating the illusion of a $41 million hospital payment when only $2 million was actually retained by the provider enabled the state to obtain additional federal reimbursements without effectively contributing a nonfederal share—in this case, the state actually netted $28.5 million as a result of the arrangement. Second, CMS had no assurance that these increased federal matching payments were retained by the providers and used to pay for Medicaid services. Federal Medicaid matching funds are intended for Medicaid-covered services for the Medicaid-eligible individuals on whose behalf payments are made. 
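The arithmetic of the supplemental payment arrangement described above can be reproduced in a few lines, using the figures reported for the 2004 example (a sketch only; amounts in millions of dollars):

```python
# Reproduce the arithmetic of the 2004 supplemental-payment arrangement
# described above (all amounts in millions of dollars, as reported).

supplemental_payment = 41.0   # paid by the state to the local-government hospital
federal_share = 30.5          # CMS's share under the state's matching formula
state_share = supplemental_payment - federal_share        # 10.5 paid by the state

returned_via_igt = 39.0       # transferred back to the state by the hospital
retained_by_hospital = supplemental_payment - returned_via_igt   # 2.0

# The state's net gain: funds returned to it minus its own contribution.
state_net_gain = returned_via_igt - state_share           # 28.5

print(f"Hospital retained: ${retained_by_hospital:.1f} million")
print(f"State net gain:    ${state_net_gain:.1f} million")
```

The sketch makes the fiscal asymmetry explicit: the federal government paid out $30.5 million against a payment of which the provider ultimately kept only $2 million, while the state came out $28.5 million ahead.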
Under these arrangements, however, payments for such Medicaid-covered services were returned to the states, which could then use the returned funds at their own discretion. In 2004, we examined how six states with large supplemental payment financing arrangements involving nursing homes used the federal funds they generated. As in the past, some states deposited excessive funds from financing arrangements into their general funds, which may or may not be used for Medicaid purposes. Table 2 provides further information on how states used their funds from supplemental payment arrangements, as reported by the six states we reviewed in 2004. Third, these state financing arrangements undermined the fiscal integrity of the Medicaid program because they enabled states to make payments to government providers that significantly exceeded their costs. In our view, this practice was inconsistent with the statutory requirement that states adopt methods to ensure that Medicaid payments are consistent with economy and efficiency. Our March 2007 report on a recent CMS oversight initiative to end certain financing arrangements where providers did not retain the payments provides context for CMS’s May rule. Responding to concerns about states’ continuing use of creative financing arrangements to shift costs to the federal government, CMS began taking steps in August 2003 to end inappropriate state financing arrangements by closely reviewing state plan amendments on a state-by-state basis. As a result of CMS’s initiative, from August 2003 through August 2006, 29 states ended one or more arrangements for financing supplemental payments, because providers were not retaining the Medicaid payments for which states had received federal matching funds. We found CMS’s actions under its oversight initiative to be consistent with Medicaid payment principles—for example, that payment for services be consistent with efficiency and economy. 
We also found, however, that CMS’s initiative to end inappropriate financing arrangements lacked transparency, in that CMS had not issued written guidance about the specific approval standards for state financing arrangements. CMS’s initiative was a departure from the agency’s past oversight approach, which did not focus on whether individual providers were retaining the supplemental payments they received. When we contacted the 29 states that had ended a financing arrangement from August 2003 through August 2006 under the initiative, only 8 states reported they had received any written guidance or clarification from CMS regarding appropriate and inappropriate financing arrangements. CMS had not used any of the means by which it typically provides information to states about the Medicaid program, such as its published state Medicaid manual, standard letters issued to all state Medicaid directors, or technical guidance manuals, to inform states about the specific standards it used for reviewing and approving states’ financing arrangements. State officials told us it was not always clear what financing arrangements CMS would allow and why arrangements approved in the past would no longer be approved. Twenty-four of the 29 states reported that CMS had changed its policy regarding financing arrangements, and 1 state challenged CMS’s disapproval of its state plan amendment, in part on the grounds that CMS had changed its policy regarding payment arrangements without rule making. The lack of transparency in CMS’s review standards raised questions about the consistency with which states had been treated in ending their financing arrangements. We consequently recommended that CMS issue guidance to clarify allowable financing arrangements. Our recommendation for CMS to issue guidance for allowable financing arrangements paralleled a recommendation we had made in earlier work reviewing states’ use of consultants on a contingency-fee basis to maximize federal Medicaid revenues. 
Our work found problematic projects where claims for federal matching funds appeared to be inconsistent with CMS’s policy or with federal law, or that—as with inappropriate supplemental payment arrangements—undermined Medicaid’s fiscal integrity. Several factors contributed to the risk associated with these state projects. Many were in areas where federal requirements had been inconsistently applied, were evolving, or were not specific. We recommended that CMS establish or clarify and communicate its policies in these areas, including supplemental payment arrangements. CMS responded that clarifying guidance was under development for targeted case management, rehabilitation services, and supplemental payment arrangements. We have recently initiated work to examine CMS’s current oversight of certain types of state financing arrangements. We have not reported on CMS’s May 2007 rule or other rules related to Medicaid financing issued this year. The extent to which the rule will address concerns about the transparency of CMS’s initiative and review standards will depend on how CMS implements it. As the nation’s health care safety net, the Medicaid program is of critical importance to beneficiaries and the providers that serve them. The federal government and states have a responsibility to administer the program in a manner that assures expenditures benefit those low-income people for whom benefits were intended. With expenditures totaling more than $300 billion per year and growing, accountability for the significant program expenditures is critical to providing those assurances. The program’s long-term fiscal sustainability is important for beneficiaries, providers, states, and the federal government. For more than a decade, we have reported on various methods that states have used to inappropriately maximize federal Medicaid reimbursement, and we have made recommendations to end these inappropriate financing arrangements. 
Supplemental payments involving government providers have resulted in billions of excess federal dollars for states, yet accountability for these payments—assurances that they are retained by providers of Medicaid services to Medicaid beneficiaries—has been lacking. CMS has taken important steps in recent years to improve its financial management of Medicaid. Yet more can be done to enhance the transparency of CMS oversight. Consequently, we believe our recommendations regarding the clarification and communication of allowable financing arrangements remain valid. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Committee may have. For future contacts regarding this testimony, please contact Marjorie Kanof at (202) 512-7114 or Kanofm@gao.gov. Katherine Iritani, Assistant Director; Ted Burik; Tim Bushfield; Tom Moscovitch; and Terry Saiki made key contributions to this statement. Medicaid Financing: Federal Oversight Initiative Is Consistent with Medicaid Payment Principles but Needs Greater Transparency. GAO-07-214. Washington, D.C.: March 30, 2007. Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006. Medicaid: States’ Efforts to Maximize Federal Reimbursements Highlight Need for Improved Federal Oversight. GAO-05-836T. Washington, D.C.: June 28, 2005. Medicaid Financing: States’ Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. GAO-05-748. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Medicaid: Intergovernmental Transfers Have Facilitated State Financing Schemes. GAO-04-574T. Washington, D.C.: March 18, 2004. Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004. 
Major Management Challenges and Program Risks: Department of Health and Human Services. GAO-03-101. Washington, D.C.: January 2003. Medicaid: HCFA Reversed Its Position and Approved Additional State Financing Schemes. GAO-02-147. Washington, D.C.: October 30, 2001. Medicaid: State Financing Schemes Again Drive Up Federal Payments. GAO/T-HEHS-00-193. Washington, D.C.: September 6, 2000. Medicaid: States Use Illusory Approaches to Shift Program Costs to Federal Government. GAO/HEHS-94-133. Washington, D.C.: August 1, 1994. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Medicaid, a joint federal-state program, financed the health care for about 60 million low-income people in fiscal year 2005. States have considerable flexibility in deciding what medical services and individuals to cover and the amount to pay providers, and the federal government reimburses a proportion of states' expenditures according to a formula established by law. The Centers for Medicare & Medicaid Services (CMS) is the federal agency responsible for overseeing Medicaid. Growing pressures on federal and state budgets have increased tensions between the federal government and states regarding this program, including concerns about whether states were appropriately financing their share of the program. GAO's testimony describes findings from prior work conducted from 1994 through March 2007 on (1) certain inappropriate state Medicaid financing arrangements and their implications for Medicaid's fiscal integrity, and (2) outcomes and transparency of a CMS oversight initiative begun in 2003 to end such inappropriate arrangements. 
GAO has reported for more than a decade on varied financing arrangements that inappropriately increase federal Medicaid matching payments. In reports issued from 1994 through 2005, GAO found that some states had received federal matching funds by paying certain government providers, such as county-operated nursing homes, amounts that greatly exceeded established Medicaid rates. States would then bill CMS for the federal share of the payment. However, these large payments were often temporary, since some states required the providers to return most or all of the amount. States used the federal matching funds obtained in making these payments as they wished. Such financing arrangements had significant fiscal implications for the federal government and states. The exact amount of additional federal Medicaid funds generated through these arrangements is unknown, but was in the billions of dollars. Because such financing arrangements effectively increase the federal Medicaid share above what is established by law, they threaten the fiscal integrity of Medicaid's federal and state partnership. They shift costs inappropriately from the states to the federal government and divert funding intended for covered Medicaid costs away from providers, who under these arrangements do not retain the full payments. In 2003, CMS began an oversight initiative that by August 2006 resulted in 29 states ending inappropriate financing arrangements. Under the initiative, CMS sought satisfactory assurances that a state was ending financing arrangements that the agency found to be inappropriate. According to CMS, the arrangements had to be ended because the providers did not retain all payments made to them but returned all or a portion to the states. GAO reported in 2007 that, although CMS's initiative was consistent with Medicaid payment principles, it was not transparent in implementation. 
CMS had not used any of the means by which it normally provides states with information about Medicaid program requirements, such as the published state Medicaid manual, standard letters issued to all state Medicaid directors, or technical guidance manuals. Such guidance could have helped inform states about the specific standards CMS used for reviewing and approving states’ financing arrangements. In May 2007, CMS issued a final rule that would limit Medicaid payments to government providers’ costs. GAO has not reported on CMS’s rule.
As part of our audit of the fiscal years 2013 and 2012 CFS, we considered the federal government’s financial reporting procedures and related internal control. We also determined the status of corrective actions taken by Treasury and OMB to address open recommendations, detailed in our previous reports, relating to the processes they use to prepare the CFS. A full discussion of our scope and methodology is included in our February 2014 report on our audit of the fiscal years 2013 and 2012 CFS. We have communicated each of the control deficiencies discussed in this report to your staff. We performed our audit of the fiscal years 2013 and 2012 CFS in accordance with U.S. generally accepted government auditing standards. We believe that our audit provided a reasonable basis for our conclusions in this report. We requested comments on a draft of this report from the Director of OMB and the Secretary of the Treasury or their designees. OMB provided oral comments, which are summarized in the Agency Comments section of this report. Treasury’s Fiscal Assistant Secretary provided written comments on June 10, 2014, which are reprinted in their entirety in appendix II and are also summarized in the Agency Comments section. During our audit of the fiscal year 2013 CFS, we identified several new internal control deficiencies in Treasury’s and OMB’s processes used to prepare the CFS.
Specifically, we found that (1) Treasury’s and OMB’s corrective action plans are not adequate to reasonably assure that internal control deficiencies involving the processes used to prepare the CFS are efficiently and effectively addressed, (2) Treasury does not have procedures to sufficiently document management’s conclusions and the basis for such conclusions regarding accounting policies for the CFS, and (3) Treasury does not have adequate procedures for verifying staff’s preparation of the narrative within the notes to the CFS to reasonably assure that the narrative is accurate and supported by the underlying financial information of the significant component entities. We also updated the description of the control deficiencies related to the long-standing material weakness regarding the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances between federal entities. Treasury’s and OMB’s corrective action plans are not adequate to reasonably assure that internal control deficiencies involving the processes used to prepare the CFS are efficiently and effectively addressed. Corrective action plans are the mechanism whereby management presents the procedures the agency will follow to resolve internal control deficiencies. Well-defined corrective action plans provide benefits such as consistency in addressing internal control deficiencies, concurrence on remediation activities, transparency in defining accountability and responsibility to ensure that results are achieved, and improved decision making on the status of remediation activities. An effective corrective action planning framework facilitates corrective action plan preparation, accountability, monitoring, and communication and helps ensure that the agency personnel responsible for completing the planned corrective actions and monitoring progress toward resolution have the information and resources they need to do so.
The Chief Financial Officers Council’s Implementation Guide for OMB Circular A-123, Management’s Responsibility for Internal Control – Appendix A, Internal Control over Financial Reporting (Implementation Guide) includes guidance for preparing well-defined corrective action plans. Based on the guidance included in the Implementation Guide, well-defined corrective action plans include the following elements for each deficiency: (1) descriptions of the deficiency and the planned corrective actions in sufficient detail to facilitate a common understanding of the deficiency and the steps that must be performed to resolve it; (2) a clear delineation of agency personnel responsible for completing the planned corrective actions and monitoring progress toward resolution; (3) the year the deficiency was first identified; (4) the targeted resolution date; (5) interim targeted milestones and completion dates, including subordinate indicators, statistics, or metrics used to gauge resolution progress; and (6) planned validation activities and outcome measures used for assessing the effectiveness of the corrective actions taken. During our audit of the fiscal year 2013 CFS, we found that Treasury’s and OMB’s corrective action plans lacked certain key elements as recommended by the Implementation Guide. For example, these plans do not contain sufficiently detailed actions that must be performed to resolve each of the deficiencies, interim milestones so that interim actions and progress can be monitored and progress assessed, and outcome measures to assist in assessing the effectiveness of the corrective actions. In addition, we found that Treasury’s and OMB’s corrective action plans do not fully consider the interrelationships between deficiencies, such as designing a corrective action that will help resolve multiple deficiencies. 
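As a sketch of how the key elements above might be tracked, the following hypothetical check (the field names are ours, not Treasury’s, OMB’s, or the Implementation Guide’s) flags a corrective action plan record that omits any of them:

```python
# Illustrative completeness check against the Implementation Guide's key
# elements for well-defined corrective action plans. Field names and the
# sample plan are hypothetical, used only to show the idea.

REQUIRED_ELEMENTS = [
    "deficiency_description",    # (1) deficiency described in sufficient detail
    "planned_actions",           # (1) steps that must be performed to resolve it
    "responsible_personnel",     # (2) who completes actions and monitors progress
    "year_identified",           # (3) year the deficiency was first identified
    "target_resolution_date",    # (4) targeted resolution date
    "interim_milestones",        # (5) interim milestones, dates, and metrics
    "validation_activities",     # (6) validation activities and outcome measures
]

def missing_elements(plan: dict) -> list:
    """Return the key elements that are absent or empty in a plan record."""
    return [e for e in REQUIRED_ELEMENTS if not plan.get(e)]

plan = {
    "deficiency_description": "Corrective action plans lack interim milestones",
    "planned_actions": ["Define a milestone schedule for each open deficiency"],
    "responsible_personnel": "Component accounting staff",
    "year_identified": 2013,
    "target_resolution_date": "2015-09-30",
    # interim_milestones and validation_activities intentionally omitted
}
print(missing_elements(plan))
# → ['interim_milestones', 'validation_activities']
```

A check of this kind would flag exactly the gaps we describe below: plans without interim milestones or outcome measures cannot be monitored or assessed for effectiveness.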
Standards for Internal Control in the Federal Government provides that federal agencies should establish policies and procedures for promptly resolving findings of audits and other reviews. In addition, OMB Circular No. A-123, Management’s Responsibility for Internal Control, requires management to develop corrective action plans for material weaknesses and periodically assess and report on the progress of those plans. The Implementation Guide is widely viewed as a “best practices” methodology for executing the requirements of Appendix A of OMB Circular No. A-123. Without well-defined corrective action plans, Treasury’s and OMB’s efforts to address the numerous issues involving the processes used to prepare the CFS will be hampered. To efficiently and effectively address internal control deficiencies involving the processes used to prepare the CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to include all key elements recommended by the Implementation Guide and fully consider the interrelationships between deficiencies in the corrective action plans. Treasury does not have procedures to sufficiently document management’s conclusions and the basis for such conclusions regarding the accounting policies for the CFS. Accounting policies are the mechanism whereby management summarizes the accounting principles and methods of applying those principles that management has concluded are appropriate for presenting fairly the entity’s assets, liabilities, net cost of operations, and changes in net position. Such policies should sufficiently document management’s conclusions regarding fair presentation and the basis of such conclusions. When sufficiently documented, accounting policies help ensure the consistent application of accounting principles by management from period to period and over similar classes of transactions, account balances, and disclosures. 
A summary of the significant accounting policies for the CFS is disclosed in Note 1 to the CFS, as required by U.S. generally accepted accounting principles. During our audit of the fiscal year 2013 CFS, we found that Treasury had not sufficiently documented management’s conclusions and the basis of such conclusions regarding the accounting policies for the CFS. Standards for Internal Control in the Federal Government provides that federal agencies should clearly document internal control activities—the policies, procedures, techniques, and mechanisms that enforce management’s directives—and properly manage and maintain such documentation so that it will be readily available for examination. In addition, OMB Circular No. A-123 requires management to document the decisions made during its process for developing and maintaining effective internal control over financial reporting. Without sufficiently documented accounting policies that include management’s conclusions regarding fair presentation and the basis of such conclusions, Treasury cannot be assured that accounting principles will be applied consistently from period to period and over similar classes of transactions, account balances, and disclosures. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement procedures to sufficiently document management’s conclusions and the basis of such conclusions regarding the accounting policies for the CFS. Treasury does not have adequate procedures for verifying staff’s preparation of the narrative within the notes to the CFS to reasonably assure that the narrative is accurate and supported by the underlying financial information of the significant component entities. Accompanying notes are an integral part of financial statements and provide additional disclosures that are necessary to make the financial statements more informative. 
Such additional disclosures often take the form of narrative and amounts, presented both within the narrative and in tables. During our audit of the fiscal year 2013 CFS, we found that Treasury’s standard operating procedures for preparing the CFS, including its procedures for analyzing the financial information submitted by the significant component entities, provide that Treasury staff are to verify the narrative within the notes and make changes so that the narrative reflects the financial information submitted by the significant component entities. However, the procedures do not include specific steps to be performed to verify the accuracy of the staff’s work, such as verifying the appropriateness of changes made by the staff and that all necessary changes were identified and made. We identified inconsistencies and errors in the narrative within the notes to the draft CFS that were not identified through the review of the staff’s work but were subsequently corrected by Treasury. Standards for Internal Control in the Federal Government provides that control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. The standards also provide that agencies should accurately and timely record transactions and events. Without adequate procedures for verifying staff’s preparation of the narrative within the notes to the CFS, Treasury cannot be assured that such narrative is reliable. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to improve and implement Treasury’s procedures for verifying that staff’s preparation of the narrative within the notes to the CFS is accurate and supported by the underlying financial information of the significant component entities. During our fiscal year 2013 CFS audit, we found that the federal government continued to be unable to adequately account for and reconcile intragovernmental activity and balances between federal entities. 
Treasury has taken certain actions over the past few years to address control deficiencies in this area. Consequently, we have updated the description of the remaining control deficiencies related to this material weakness, as discussed below. Consolidated financial statements are intended to present the results of operations and financial position of the components that make up the reporting entity as if the entity were a single enterprise. When preparing the consolidated financial statements, intragovernmental activity and balances between federal entities should be in agreement and must be subtracted out, or eliminated, from the financial statements. If the two federal entities engaged in an intragovernmental transaction do not both record it in the same year and for the same amount, their reported amounts will not agree, resulting in errors in the consolidated financial statements. Federal entities are responsible for properly accounting for and reporting their intragovernmental activity and balances in their entity financial statements and for effectively implementing related internal controls. This includes reconciling and resolving intragovernmental differences at the transaction level with their trading partners. To support this process, Treasury has established certain controls to monitor whether intragovernmental activity and balances reported to Treasury by federal entities are properly reconciled and balanced at the account and line item levels. For example, based on intragovernmental information provided by the significant component entities, Treasury provides quarterly scorecards that highlight intragovernmental differences at the account level requiring the entities’ attention. As part of this process, Treasury gathers information from the entities to identify the causes of intragovernmental differences and monitors how the differences are addressed by federal entities.
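The agreement-and-elimination requirement described above can be sketched in a few lines. The entity names, accounts, and amounts below are hypothetical, and actual reconciliations work at the transaction level with reciprocal account pairs rather than this simplified pairing:

```python
# Minimal sketch of trading-partner reconciliation: each entity reports
# its intragovernmental balances by trading partner and account; paired
# amounts should agree, and agreed amounts are eliminated on consolidation.
# Entities, accounts, and amounts are hypothetical.

# (reporting entity, trading partner, account) -> reported amount
reported = {
    ("Entity A", "Entity B", "investments"): 100.0,
    ("Entity B", "Entity A", "investments"): 100.0,  # agrees: eliminated
    ("Entity C", "Entity B", "borrowings"): 250.0,
    ("Entity B", "Entity C", "borrowings"): 240.0,   # 10.0 unreconciled difference
}

def reconcile(reported):
    """Pair each balance with the trading partner's reported amount and
    return {pair: difference}; a zero difference means the amounts agree."""
    diffs = {}
    for (entity, partner, account), amount in reported.items():
        key = (*sorted((entity, partner)), account)
        if key in diffs:
            continue  # already paired from the other side
        partner_amount = reported.get((partner, entity, account), 0.0)
        diffs[key] = amount - partner_amount
    return diffs

unresolved = {k: d for k, d in reconcile(reported).items() if abs(d) > 1e-9}
```

In the actual process, a nonzero difference like the 10.0 above is the kind of item Treasury’s quarterly scorecards highlight for the entities to research and resolve with their trading partners.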
Treasury also reviews disputes between federal entities and their respective trading partners when the entities cannot resolve an intragovernmental difference. After reviewing a dispute, Treasury issues a decision on how to resolve the difference. In fiscal year 2013, Treasury continued to actively work with federal entities, resulting in a significant number of intragovernmental differences being identified and resolved. While such progress was made, we continued to note that amounts reported by federal entity trading partners were not in agreement by significant amounts. Reasons for the differences cited by several chief financial officers included differing accounting methodologies, accounting errors, and timing differences. Treasury’s process focuses on the cause and resolution of individual differences, not on systemic “root causes” of differences. As such, the root causes of the differences are not fully identified and reported, which affects (1) the entities’ ability to resolve and prevent further differences and (2) Treasury’s ability to monitor how the entities are addressing the root causes. Furthermore, Treasury’s scorecard process generates scorecards for the significant component entities and focuses on the most significant reconciliation issues faced by each entity receiving a scorecard. However, Treasury has not demonstrated that differences not covered by the scorecard process are immaterial to the CFS. While certain intragovernmental transactions and balances related to the General Fund of the U.S. government (General Fund) are reported in Treasury’s audited department-level financial statements, there are significant intragovernmental transactions and balances related to the General Fund that are not currently accounted for and reported in Treasury’s audited department-level financial statements or in separate audited financial statements. 
Treasury is in the process of reviewing and developing accounting and reporting mechanisms for the General Fund. However, Treasury does not have policies and procedures for all significant intragovernmental activity and balances related to the General Fund to be accounted for and reported in financial statements, reconciled, and subjected to an appropriate level of assurance, including internal controls and audit. Consequently, there were unreconciled transactions between the records of the General Fund and federal entity trading partners related to appropriations and other intragovernmental transactions, which amounted to hundreds of billions of dollars. Auditors for the significant component entities are responsible for providing opinions on the entities’ overall department-level financial statements and on the entities’ closing package financial statements taken as a whole, which include intragovernmental activity and balances between federal entities. However, a formalized process has not been established to perform additional audit procedures specifically focused on intragovernmental activity and balances between federal entities. This process would include finalizing the procedures to be performed and establishing the criteria for determining which federal entities would be required to perform the procedures. Such a process would provide increased audit assurance over the reliability of such information and help address the above-noted significant unreconciled transactions at the government-wide level. Standards for Internal Control in the Federal Government provides that control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. The standards also provide that agencies should accurately and timely record transactions. 
Without adequate procedures to fully account for and reconcile intragovernmental activity and balances, the federal government is unable to determine the impact of unreconciled transactions on the amounts reported in the CFS, and Treasury’s ability to fully eliminate such amounts from the CFS is impaired. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to take the following four actions:
- continue to build on the procedures in place to effectively identify systemic root causes of intragovernmental differences and monitor how federal entities are addressing the root causes;
- expand the scorecard process to include intragovernmental activity and balances that are currently not covered by the process, or demonstrate that such information is immaterial to the CFS;
- establish and implement policies and procedures for accounting for and reporting all significant General Fund activity and balances, obtaining assurance on the reliability of the amounts, and reconciling the activity and balances between the General Fund and federal entities; and
- establish a formalized process to require the performance of additional audit procedures specifically focused on intragovernmental activity and balances between federal entities to provide increased audit assurance over the reliability of such information.
Of our 37 recommendations from our prior reports regarding control deficiencies in the processes used to prepare the CFS that were open at the end of the fiscal year 2012 audit, we closed 13 recommendations during our fiscal year 2013 audit. Of the 13 closed recommendations, we closed 7 as a result of corrective actions taken by Treasury and OMB.
We closed the other 6 recommendations, which were related to intragovernmental activity and balances, by making 4 new recommendations in this report that are better aligned with the current status of the remaining internal control deficiencies in that area and that reflect certain actions taken by Treasury. Twenty-four recommendations from our prior reports remained open as of February 19, 2014, the date of our report on the audit of the fiscal year 2013 CFS. Appendix I summarizes the status as of February 19, 2014, of the 37 recommendations from our prior years’ reports that were open during our fiscal year 2013 audit. Specifically, appendix I includes the status according to Treasury and OMB, as well as our own assessments where appropriate; the status per GAO includes explanatory comments on Treasury’s and OMB’s information. We will continue to monitor Treasury’s and OMB’s progress in addressing our recommendations as part of our fiscal year 2014 CFS audit. In oral comments on a draft of this report, OMB generally concurred with the findings and recommendations in this report. In written comments on a draft of this report, which are reprinted in appendix II, Treasury concurred with our seven new recommendations. To address the long-standing material weakness related to intragovernmental transactions, Treasury stated that it will expand the scorecard program to include transactions with the General Fund using a phased approach during fiscal year 2014. Regarding the material weakness related to the compilation process, Treasury stated that it will be implementing Phase II of an automated tool to streamline this process, expanding its analysis performed on third quarter reporting, and collaborating with key personnel at the federal entities.
In addition, Treasury noted that it is continuing its efforts to fully implement the process to reconcile the budget deficit to federal entities’ audited financial statements. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. § 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Please provide me with a copy of your responses. We are sending copies of this report to interested congressional committees, the Fiscal Assistant Secretary of the Treasury, and the Interim Controller of the Office of Management and Budget’s Office of Federal Financial Management. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me at (202) 512-3406 or engelg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

Recommendation: As the Department of the Treasury (Treasury) is designing its new financial statement compilation process to begin with the fiscal year 2004 consolidated financial statements of the U.S.
government (CFS), the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of the Office of Management and Budget (OMB), to develop reconciliation procedures that will aid in understanding and controlling the net position balance as well as eliminate the plugs previously associated with compiling the CFS.
Per Treasury and OMB: Treasury has taken certain actions regarding accounting for and reconciling intragovernmental activity and balances to address GAO’s past recommendations. Treasury is committed to continuing its efforts to address the remaining control deficiencies in this area.
Per GAO: Closed. Over the past few years, Treasury has taken certain actions to address recommendations related to intragovernmental activity and balances. To provide recommendations that are better aligned with the remaining internal control deficiencies in this area, we have (1) closed this recommendation and (2) included in this report under “Intragovernmental Activity and Balances” new recommendations for corrective action.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to design procedures that will account for the difference in intragovernmental assets and liabilities throughout the compilation process by means of formal consolidating and elimination accounting entries.
Per Treasury and OMB: See the status of recommendation No. 02-4.
Per GAO: Closed. Over the past few years, Treasury has taken certain actions to address recommendations related to intragovernmental activity and balances. To provide recommendations that are better aligned with the remaining internal control deficiencies in this area, we have (1) closed this recommendation and (2) included in this report under “Intragovernmental Activity and Balances” new recommendations for corrective action.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop solutions for intragovernmental activity and balance issues relating to federal agencies’ accounting, reconciling, and reporting in areas other than those OMB now requires be reconciled, primarily areas relating to appropriations.
Per Treasury and OMB: See the status of recommendation No. 02-4.
Per GAO: Closed. Over the past few years, Treasury has taken certain actions to address recommendations related to intragovernmental activity and balances. To provide recommendations that are better aligned with the remaining internal control deficiencies in this area, we have (1) closed this recommendation and (2) included in this report under “Intragovernmental Activity and Balances” new recommendations for corrective action.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to reconcile the change in intragovernmental assets and liabilities for the fiscal year, including the amount and nature of all changes in intragovernmental assets or liabilities not attributable to cost and revenue activity recognized during the fiscal year. Examples of these differences would include capitalized purchases, such as inventory or equipment, and deferred revenue.
Per Treasury and OMB: See the status of recommendation No. 02-4.
Per GAO: Closed. Over the past few years, Treasury has taken certain actions to address recommendations related to intragovernmental activity and balances. To provide recommendations that are better aligned with the remaining internal control deficiencies in this area, we have (1) closed this recommendation and (2) included in this report under “Intragovernmental Activity and Balances” new recommendations for corrective action.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by the Federal Accounting Standards Advisory Board (FASAB). Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included but is not included is immaterial.
Per GAO: Open.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects. Such information would include, for example, the reporting entity’s assets, liabilities, and revenues.
Per Treasury and OMB: Treasury will enhance its documentation of the assessment of the reporting entity to ensure compliance with the FASAB criteria, including documenting the reason as to whether entities are to be included or excluded and the basis for that conclusion. As part of this assessment, Treasury will also take into account FASAB’s Reporting Entity standard, once it is issued. An analysis will be performed to demonstrate that the information that is not included but should be included is immaterial. Treasury conducted an analysis of the revenue and expenses, on a cash basis, of entities that are not currently required by law to submit financial statements to verify the materiality of their data to the Financial Report of the United States Government (Financial Report). This analysis demonstrated that the amounts not included in the journal voucher prepared at year-end are immaterial to the Financial Report.
Treasury will finalize this analysis to cover assets and liabilities.
Per GAO: Open.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity.
Per Treasury and OMB: Pending the results of actions taken pursuant to recommendation Nos. 02-22 and 02-23, Treasury will enhance the current disclosure in the Financial Report related to the reporting entity.
Per GAO: Open.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that federal agencies provide adequate information in their legal representation letters regarding the expected outcomes of the cases.
Per Treasury and OMB: Treasury will work with the Department of Justice, OMB, GAO, and the agencies to determine if further changes in policy and/or guidance (e.g., OMB Circular No. A-136) are needed for all agencies to provide the required information regarding the expected outcomes of legal cases in their legal representations.
Per GAO: Open.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S.
government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department’s Treaties in Force).
Per Treasury and OMB: Agencies are currently required to report contingencies in their financial statements and notes pursuant to generally accepted accounting principles (GAAP). In addition, OMB Circular No. A-136 specifically references the inclusion of treaties and international agreements within “Commitments and Contingencies.” Further, agencies include specific representations with respect to material liabilities or contingencies in their management representations. In addition, the financial statements of most significant entities and many other federal entities received unqualified audit opinions. However, no additional analysis of treaties has been performed to reasonably ensure that all of the federal government’s treaties are considered in agency analysis or that agencies are consistently analyzing treaties for recognition or disclosure. Treasury will annually review agency financial statements, audit reports, and management representation letters for any references to treaties and international agreements and, if deemed material, will disclose them in the CFS.
Per GAO: Open. Until a comprehensive analysis of major treaty and other international agreement information has been performed, Treasury and OMB are precluded from determining if additional disclosure is required by GAAP in the CFS, and we are precluded from determining whether the omitted information is material.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS.
Specifically, these policies and procedures should require that federal agencies classify all such scheduled major treaties and other international agreements as commitments or contingencies.
Per Treasury and OMB: See the status of recommendation No. 02-37.
Per GAO: Open. See the status of recommendation No. 02-37.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency.
Per Treasury and OMB: See the status of recommendation No. 02-37.
Per GAO: Open. See the status of recommendation No. 02-37.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations.
Per Treasury and OMB: See the status of recommendation No. 02-37.
Per GAO: Open. See the status of recommendation No. 02-37.
Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS.
Specifically, these policies and procedures should require that federal agencies take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. Per Treasury and OMB: See the status of recommendation No. 02-37. Per GAO: Open. See the status of recommendation No. 02-37.

Recommendation No. 02-129: The Secretary of the Treasury should direct the Fiscal Assistant Secretary to ensure that the note disclosure for stewardship responsibilities related to the risk assumed for federal insurance and guarantee programs meets the requirements of Statement of Federal Financial Accounting Standards No. 5, Accounting for Liabilities of the Federal Government, paragraph 106. That paragraph requires that when financial information pursuant to Financial Accounting Standards Board standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the “risk assumed” approach. Treasury requests this information from the agencies in the Treasury Financial Manual 2-4700. Treasury will include this subject matter as a key area for the third quarter reporting requirements to allow sufficient time to complete this analysis. Open. Treasury’s reporting in this area is not complete. The CFS should include all major federal insurance programs in the risk assumed reporting and analysis. Also, since future events are uncertain, risk assumed information should include indicators of the range of uncertainty around expected estimates, including indicators of the sensitivity of the estimate to changes in major assumptions.
The Director of OMB should direct the Controller of OMB, in coordination with Treasury’s Fiscal Assistant Secretary, to work with the Department of Justice and certain other executive branch federal agencies to ensure that these federal agencies report or disclose relevant criminal debt information in conformity with GAAP in their financial statements and have such information subjected to audit. Treasury and OMB will assess options as to what methodologies or approaches to use for obtaining the additional information needed from the agencies. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to include relevant criminal debt information in the CFS or document the specific rationale for excluding such information. See the status of recommendation No. 03-8. Open.

Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to modify Treasury’s plans for the new closing package to (1) require federal agencies to directly link their audited financial statement notes to the CFS notes and (2) provide the necessary information to demonstrate that all of the five principal consolidated financial statements are consistent with the underlying information in federal agencies’ audited financial statements and other financial data. Per Treasury and OMB: Treasury has demonstrated that the three principal accrual-based consolidated financial statements are consistent with the significant federal entities’ financial statements prior to eliminating intragovernmental activity and balances. Treasury has taken certain actions regarding the two CFS budget statements to address GAO’s past recommendations. Treasury is committed to continuing its efforts to address the remaining control deficiencies in the area. Per GAO: Closed.
Treasury’s process for compiling the CFS generally demonstrated that amounts in the Statement of Social Insurance and the Statement of Changes in Social Insurance Amounts were consistent with the underlying federal entities’ audited financial statements and that the Balance Sheet, Statement of Net Cost, and Statement of Operations and Changes in Net Position were also consistent with the significant federal entities’ financial statements prior to eliminating intragovernmental activity and balances. With regard to directly linking the Reconciliation of Net Operating Cost and Unified Budget Deficit and Statement of Changes in Cash Balance from Unified Budget and Other Activities to federal entities’ audited financial statements and other financial data, we incorporated this issue along with other control deficiencies related to the two CFS budget statements into recommendations 12-04 and 12-05 below. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require that Treasury employees contact and document communications with federal agencies before recording journal vouchers to change agency audited closing package data. Treasury attempted to contact agencies before recording journal vouchers and made management decisions on making journal vouchers that affect agency audited closing package data. In fiscal year 2014, Treasury will ensure that this agency communication along with the Treasury decision-making process will be fully documented in the support for the journal vouchers. Open.

Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary to assess the infrastructure associated with the compilation process and modify it as necessary to achieve a sound internal control environment.
Per Treasury and OMB: Treasury continues to make improvements to its internal control infrastructure by updating and revising its standard operating procedures (SOP) and ensuring key controls are in place at all critical areas of the CFS preparation process. Treasury will continue to monitor and assess its internal control during fiscal year 2014 toward achieving a sound internal control environment. Per GAO: Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish effective processes and procedures to ensure that appropriate information regarding litigation and claims is included in the government-wide legal representation letter. Treasury, in coordination with OMB, will continue to work with the Department of Justice to include appropriate information regarding litigation and claims in the government-wide legal representation letter. In the interim, Treasury will use the information from the federal agencies’ legal representation letters for cases less than $500 million to analyze the materiality impact on the CFS. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop a process for obtaining sufficient information from federal agencies to enable Treasury and OMB to adequately monitor federal agencies’ efforts to reconcile intragovernmental activity and balances with their trading partners. This information should include (1) the nature and a detailed description of the significant differences that exist between trading partners’ records of intragovernmental activity and balances, (2) detailed reasons why such differences exist, (3) details of steps taken or being taken to work with federal agencies’ trading partners to resolve the differences, and (4) the potential outcome of such steps.
See the status of recommendation No. 02-4. Closed. Over the past few years, Treasury has taken certain actions to address recommendations related to intragovernmental activity and balances. To provide recommendations that are better aligned with the remaining internal control deficiencies in this area, we have (1) closed this recommendation and (2) included in this report under “Intragovernmental Activity and Balances” new recommendations for corrective action. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement effective processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS. See the status of recommendation No. 04-6. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement alternative solutions to performing almost all of the compilation effort at the end of the year, including obtaining and utilizing interim financial information from federal agencies. Treasury has obtained and utilized third quarter financial information from agencies, focusing on key topics and issues to compile a major part of the CFS for fiscal years 2012 and 2013. In fiscal year 2014, Treasury will increase the number of key topics and will identify subject matters that can be completed before fiscal year-end. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to design, document, and implement policies and procedures to identify and eliminate intragovernmental payroll tax amounts at the government-wide level when compiling the CFS. Treasury implemented a quarterly process to identify and eliminate intragovernmental payroll tax amounts at the government-wide level. Closed.
The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance the SOP entitled “Statement of Social Insurance, Social Insurance Note, and Required Supplementary Information” to implement and document procedures for assuring the accuracy of staff’s work related to preparing the social insurance information for the CFS. Treasury enhanced the SOP entitled “Statement of Social Insurance, Social Insurance Note, and Required Supplementary Information” to ensure proper internal controls were followed in preparing the social insurance information. Closed. The Acting Director of OMB should direct the Controller of OMB to develop and implement written procedures specifying the steps required for effectively reviewing and approving the drafts of the Financial Report before they are provided to GAO, to include clear delineation of the review and approval roles and responsibilities of designated appropriate higher-level officials in OMB’s Office of Federal Financial Management, including the Controller of OMB. OMB developed and implemented written procedures specifying the steps for reviewing and approving the drafts of the Financial Report. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to develop and implement procedures to provide for the active involvement of key federal entity personnel with technical expertise in relatively new areas and more complex areas in the preparation and review process of the Financial Report. Treasury has involved key federal entity personnel with technical expertise during the interim and year-end review and preparation of the Financial Report. As Treasury increases the key topics for the third quarter analysis in fiscal year 2014, the key federal entity personnel and the Treasury employees will work collaboratively from third quarter analysis until the Financial Report publication on these key topics. Open.
The Secretary of the Treasury should direct the Fiscal Assistant Secretary to revise the SOP to include requirements for using the CFS disclosure checklist to prepare the format draft of the CFS and to update the CFS disclosure checklist as necessary when subsequent drafts of the CFS are prepared. Treasury prepared the CFS disclosure checklist for the generation of the format draft of the CFS and ensured that all necessary updates were made to the checklist. Closed.

Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary to establish a mechanism for ensuring that all steps in the required validation process are completed, documented, and reviewed prior to the distribution of Intragovernmental Reporting and Analysis System (IRAS) reports. Per Treasury and OMB: See the status of recommendation No. 02-4. Per GAO: Closed. Over the past few years, Treasury has taken certain actions to address recommendations related to intragovernmental activity and balances. To provide recommendations that are better aligned with the remaining internal control deficiencies in this area, we have (1) closed this recommendation and (2) included in this report under “Intragovernmental Activity and Balances” new recommendations for corrective action. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for identifying any entities that become significant to the Financial Report during the fiscal year but were not identified as significant in the prior fiscal year. Treasury will implement a new approach in fiscal year 2014 for identifying new entities that become significant to the Financial Report during the fiscal year.
This approach will include an analysis performed using data from the Central Accounting Reporting System (CARS), Governmentwide Treasury Account Symbol Adjusted Trial Balance System (GTAS), and Governmentwide Financial Report System (GFRS) throughout the fiscal year. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for obtaining audited closing packages from newly identified significant entities in the year they become significant, including timely written notification to newly identified significant entities. Treasury will update the SOP entitled “Significant Federal Entities Identification” to include procedures as determined in the approach as a result of recommendation No. 11-09. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for identifying any material line items for significant calendar year-end entities that become material to the CFS during the current fiscal year but were not identified as material in the analysis using prior year financial information. Treasury will implement a new approach in fiscal year 2014 for identifying new material line items for significant calendar year entities that become significant to the Financial Report during the fiscal year. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for obtaining audit assurance over identified material line items for significant calendar year-end entities in the year they become material.
Treasury will update the SOP entitled “Significant Federal Entities Identification” to include procedures as determined in the approach as a result of recommendation No. 12-01. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance the SOP entitled “Prior Period Adjustments” to include procedures for analyzing and assessing the effects of significant federal entities’ restatements and reclassifications on related line items and notes presented in the CFS. Treasury updated the SOP entitled “Prior Period Adjustments” to include procedures for analyzing and assessing restatements and reclassifications of significant federal entities. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish and implement effective procedures for reporting amounts in the CFS budget statements that are fully consistent with the underlying information in significant federal entities’ audited financial statements and other financial data. Treasury has started to improve the documentation to show the consistency of budget statement information in the CFS with the federal entities’ audited financial statements. Treasury will leverage the reconciliation information requested from and provided by the agencies in 2013 to improve the analysis in 2014 by supplementing that information with additional crosswalks and support to be provided by the agencies and verified by Treasury. Open.

Recommendation: The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to establish and implement effective procedures for identifying and reporting all items needed to prepare the CFS budget statements. Per Treasury and OMB: Treasury will develop a crosswalk to ensure that all data is included in the preparation of the budget statements.
The implementation of the budget deficit reconciliation in fiscal year 2013 will be leveraged as support for the financial statement lines on the budget statements. Per GAO: Open. The Director of OMB should ensure that OMB continues its efforts to remove the requirement for reporting “overall substantial compliance” from OMB Circular No. A-136, Financial Reporting Requirements. OMB revised OMB Circular No. A-136. Closed. The status of the recommendations listed in app. I is as of February 19, 2014, the date of our report on the audit of the fiscal year 2013 CFS.

Treasury, in coordination with OMB, prepares the Financial Report of the United States Government, which contains the CFS. Since GAO's first audit of the fiscal year 1997 CFS, certain material weaknesses and other limitations on the scope of its work have prevented GAO from expressing an opinion on the accrual-based CFS. As part of the fiscal year 2013 CFS audit, GAO identified material weaknesses and other control deficiencies in the processes used to prepare the CFS. The purpose of this report is to (1) provide details on the control deficiencies GAO identified related to the processes used to prepare the CFS, (2) recommend improvements, and (3) provide the status of corrective actions taken by Treasury and OMB to address GAO's prior recommendations relating to the processes used to prepare the CFS that remained open at the end of the fiscal year 2012 audit. During its audit of the fiscal year 2013 consolidated financial statements of the U.S. government (CFS), GAO identified control deficiencies in the Department of the Treasury's (Treasury) and the Office of Management and Budget's (OMB) processes used to prepare the CFS.
These control deficiencies contributed to material weaknesses in internal control over the federal government's ability to adequately account for and reconcile intragovernmental activity and balances between federal entities; ensure that the federal government's accrual-based consolidated financial statements were (1) consistent with the underlying audited entities' financial statements, (2) properly balanced, and (3) in conformity with U.S. generally accepted accounting principles; and ensure the consistency of (1) information used by Treasury to compute the budget deficit reported in the consolidated financial statements, (2) Treasury's records of cash transactions, and (3) information reported in federal entity financial statements and underlying financial information and records. Specifically, for fiscal year 2013, GAO found that Treasury's and OMB's corrective action plans are not adequate to reasonably assure that internal control deficiencies involving the processes used to prepare the CFS are efficiently and effectively addressed, Treasury does not have procedures to sufficiently document management's conclusions and the basis for such conclusions regarding the accounting policies for the CFS, and Treasury does not have adequate procedures for verifying staff's preparation of the narrative within the notes to the CFS to reasonably assure that the narrative is accurate and supported by the underlying financial information of the significant component entities. GAO also updated the description of the control deficiencies related to the long-standing material weakness regarding the federal government's inability to adequately account for and reconcile intragovernmental activity and balances between federal entities. GAO closed 6 recommendations from prior GAO reports and made 4 new recommendations that are better aligned with the remaining internal control deficiencies in this area. 
In addition, GAO found that various other control deficiencies identified in previous years' audits with respect to the processes used to prepare the CFS continued to exist. Specifically, 24 of the 37 recommendations from GAO's prior reports regarding control deficiencies in the processes used to prepare the CFS remained open as of February 19, 2014, the date of GAO's report on its audit of the fiscal year 2013 CFS. GAO will continue to monitor the status of corrective actions taken to address the 7 new recommendations as well as the 24 open recommendations from prior years as part of its fiscal year 2014 CFS audit. GAO is making seven new recommendations—five to both Treasury and OMB and two to Treasury—to address the control deficiencies identified by GAO during the fiscal year 2013 CFS audit. In commenting on GAO's draft, Treasury and OMB generally concurred with GAO's recommendations.
The federal Food Stamp Program is intended to help low-income individuals and families obtain a better diet by supplementing their income with benefits to purchase food. FNS pays the full cost of food stamp benefits and shares the states’ administrative costs, paying about 50 percent of them. FNS is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states administer the program by determining whether households meet the program’s income and asset requirements, calculating monthly benefits for qualified households, and issuing benefits to participants on an electronic benefits transfer card. Eligibility for participation in the Food Stamp Program is based on the Department of Health and Human Services’ poverty guideline for households. In most states, a household’s gross income cannot exceed 130 percent of the federal poverty level (or about $1,654 per month for a family of three in 2003), and net income cannot exceed 100 percent of the poverty guideline (or about $1,272 per month for a family of three in 2003). Net income is determined by deducting from gross income expenses such as dependent care costs, medical expenses, utilities costs, and shelter expenses. In addition, most states place a limit of $2,000 on household assets, and basic program rules limit the value of vehicles an applicant can own and still be eligible for the program. If the household owns a vehicle worth more than $4,650, the excess value is included in calculating the household’s assets. Recipients of TANF cash assistance are automatically eligible for food stamps—a provision referred to as “categorical eligibility”—and do not have to go through a separate food stamp eligibility determination process, although the level of their benefits must still be determined.
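The basic eligibility tests described above can be sketched in code. This is a simplified illustration using the 2003 figures cited in the text for a family of three; the threshold constants, the flat deduction model, and the function names are assumptions for illustration only, not the actual rules or systems states use.

```python
# Illustrative sketch of the basic food stamp eligibility tests described
# above. All thresholds are the 2003 figures cited in the text for a
# family of three; the deduction model is a simplifying assumption.

GROSS_INCOME_LIMIT = 1654   # ~130% of the 2003 poverty guideline, family of 3
NET_INCOME_LIMIT = 1272     # ~100% of the 2003 poverty guideline, family of 3
ASSET_LIMIT = 2000          # household asset limit in most states
VEHICLE_EXEMPTION = 4650    # vehicle value excluded from countable assets

def countable_assets(other_assets: float, vehicle_value: float) -> float:
    """Assets counted toward the limit: only excess vehicle value is included."""
    excess_vehicle = max(0.0, vehicle_value - VEHICLE_EXEMPTION)
    return other_assets + excess_vehicle

def is_eligible(gross_income: float, deductions: float,
                other_assets: float, vehicle_value: float) -> bool:
    """Apply the three basic tests: gross income, net income, and assets."""
    net_income = gross_income - deductions
    return (gross_income <= GROSS_INCOME_LIMIT
            and net_income <= NET_INCOME_LIMIT
            and countable_assets(other_assets, vehicle_value) <= ASSET_LIMIT)

# Example: $1,600 gross income, $400 in deductions, $500 in savings, and a
# $5,000 car. Countable assets = 500 + (5000 - 4650) = 850; net income = 1200.
print(is_eligible(1600, 400, 500, 5000))  # True under these assumptions
```

Note how the vehicle rule works as an exemption rather than a cap: a $5,000 car adds only $350 to countable assets. The TANF vehicle-rule and categorical-eligibility options discussed next would relax or bypass these asset tests entirely.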
Many needy families who are no longer receiving TANF cash assistance may receive other TANF-funded services or benefits, such as child care benefits. In 1999, to help ensure that these families are also eligible for food stamp benefits, FNS offered states the option to extend categorical eligibility to families receiving TANF-funded benefits or services. Families who are automatically eligible for food stamps do not have to meet the food stamp asset test in order to receive benefits but would have to meet the state’s TANF asset test. States also have two ways in which they can allow households to own a vehicle that is worth more than the amount allowed in current regulations and still remain eligible for food stamp benefits. In October 2000, in part to help support low-income working families, the Congress enacted legislation that grants states the option to replace the federal food stamp vehicle asset rule with the vehicle asset rule from their TANF assistance program, which is set by the state and can vary from state to state. States can also opt to use the categorical eligibility option as a way to exclude all vehicles, as well as other assets the family may have. This option affects the food stamp eligibility only of food stamp families authorized to receive a TANF-funded service or benefit. As of October 2003, the majority of states had either replaced their federal food stamp vehicle asset rule with the vehicle asset rule from their TANF assistance program or conferred categorical eligibility as a way to exclude vehicles. After eligibility is established, households are certified eligible for food stamps for periods ranging from 1 to 24 months. The length of the certification period depends on household circumstances, but only households in which all members are elderly or disabled can be certified for more than 12 months. Once the certification period ends, households must reapply for benefits, at which time eligibility and benefit levels are redetermined. 
Between certification periods, households must report changes in their circumstances—such as household composition, income, and expenses— that may affect their eligibility or benefit amounts. States have the option of requiring food stamp participants to report on their financial circumstances at various intervals and in various ways. States can institute a periodic reporting system, or they can rely on households to report changes in their household circumstances within 10 days of occurrence. Under periodic reporting, participants may report monthly, quarterly, or under a simplified system. The simplified reporting system, available since early 2001, provides for an alternative reporting option that requires households with earned income to report changes only when their income rises above 130 percent of the poverty level. FNS monitors how accurately states determine food stamp eligibility and calculate benefits. Under FNS’s quality control system, the states calculate their payment errors by drawing a statistical sample to determine whether participating households received the correct benefit amount. Improper payments, which include overpayments of food stamp benefits to participants, underpayments to participants, and payments to those who are not eligible, may occur for a variety of reasons. Overpayments can be caused by inadvertent or intentional errors made by recipients and caseworkers. For example, caseworkers may misapply complex food stamp rules when calculating benefits or participants may inadvertently or deliberately provide inaccurate information to food stamp offices. In the 1990s, the states’ error rate hovered around 10 percent, but it fell to 6.6 percent in fiscal year 2003, the lowest level in the program’s history. The 2003 combined error rate reflected $1 billion in overpayments to food stamp participants and more than $300 million in underpayments.
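The quality control idea described above—sampling cases, comparing issued benefits to the correct amounts, and combining over- and underpayment dollars into a single rate—can be sketched as follows. This is a hedged illustration of a dollar-weighted combined error rate, with made-up case data; it is not FNS's actual QC sampling or estimation methodology.

```python
# Illustrative sketch of a dollar-weighted combined payment error rate,
# in the spirit of the quality control system described above. The
# formula and the sample cases are illustrative assumptions only.

def combined_error_rate(cases):
    """cases: list of (issued, correct) benefit amounts from a QC sample.

    Over- and underpayments both count as error dollars, so the combined
    rate is total absolute error divided by total benefits issued.
    """
    total_issued = sum(issued for issued, _ in cases)
    error_dollars = sum(abs(issued - correct) for issued, correct in cases)
    return error_dollars / total_issued

# Four sampled cases: one $30 overpayment, one $20 underpayment.
sample = [(150, 150), (200, 170), (120, 140), (300, 300)]
# $50 in error dollars on $770 issued, roughly 6.5%
print(f"{combined_error_rate(sample):.1%}")
```

Because overpayments and underpayments are added rather than netted, a state cannot offset one kind of error with the other—which is consistent with the report's figures, where the 6.6 percent combined rate includes both the $1 billion in overpayments and the $300 million in underpayments.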
According to USDA, about half of all payment errors are due to an incorrect determination of household income. The Farm Bill changed the Food Stamp Program’s quality control system so that only states with persistently high error rates face liabilities. The Farm Bill also provides for $48 million in bonuses each year to be awarded to states with high or most improved performance, including actions taken to correct errors, reduce error rates, improve eligibility determinations, and other indicators of effective administration as approved by the Secretary of Agriculture. Many food stamp participants receive benefits from other federally funded low-income assistance programs, including Medicaid and TANF. For example, in 2002, about 85 percent of children who received food stamp benefits were also on Medicaid, and about 20 percent of food stamp households received assistance from TANF. Many food stamp participants also receive child care assistance and Supplemental Security Income. In most states, the Food Stamp Program is administered out of a local assistance office that offers benefits from these other assistance programs as well. Food stamp participants may provide necessary information to only one caseworker who determines eligibility and benefits for all of these programs, or they may work with several caseworkers that administer benefits for different programs. Despite the overlap in the populations served by these various assistance programs, program rules and requirements across these programs vary significantly. Substantial variation exists not only in program financial eligibility rules but also in participant reporting requirements. The primary sources of these variations are generally at the federal level, although for several programs, such as TANF and Medicaid, states and localities have some flexibility in setting financial eligibility rules. They also have flexibility in the rules that govern how often participants are required to report changes in their household circumstances.
While the Food Stamp Program allows states to choose either periodic or change reporting, Medicaid provides states with even broader flexibility to establish rules for when Medicaid participants must report changes in their circumstances. Under Medicaid regulations, states must have procedures designed to ensure that participants make timely and accurate reports of any change in circumstances that may affect their eligibility and that states act promptly to redetermine eligibility based on the reported change in circumstances. However, the terms “timely” and “promptly” are not defined and can be interpreted in various ways by the states. TANF does not mandate a particular set of participant reporting rules and generally allows states to develop their own rules. The Farm Bill makes available to states various new options that are intended to simplify food stamp program rules, streamline food stamp eligibility and benefit rules, and help ensure that food stamp participants experience as smooth a transition from welfare to work as possible. (See fig. 1.) States chose four of the Farm Bill options with greater frequency than the others. State officials gave reasons for choosing, or not choosing, the options that focused primarily on how they thought the options would affect food stamp participants and caseworkers. Other reasons were also important in the choice of some options. For example, the anticipated effect on the state’s payment error rate was a key factor in the selection of most options. During the period when states were implementing the food stamp options, a number of them posed challenges for the states, such as difficulties related to caseworkers’ adjustment to program changes and programming computer systems. According to our survey of state food stamp administrators, 23 or more states had implemented four of the options as of January 2004, while less than one-quarter of the states had implemented the other four options. (See fig. 2. Also see app. 
II for the options that individual states have chosen and implemented.) The most common reasons state officials gave for choosing the eight options were to simplify program rules for participants and caseworkers, according to our survey. For example, state officials we interviewed told us they thought program rules would be less confusing for participants if the types of income considered in eligibility determinations were more uniform across assistance programs, as is allowed by the Simplified Definition of Income option. In addition, officials in one state commented that they thought the Simplified Standard Utility Allowance option would make the rules less complicated for caseworkers because it would allow them to apply the standard utility allowance—a fixed amount that can be used in place of actual utility costs—to households sharing a residence, instead of having to prorate the actual utility costs of the household. (See fig. 3 and app. III for more detail on reasons states chose options.) In addition, two important reasons state officials gave for choosing options were to decrease the burden on participants and decrease the workload for caseworkers, as shown in figure 3. For example, several state officials told us they thought options such as Simplified Homeless Shelter Costs and Simplified Standard Utility Allowance that allow states to use a standard allowance rather than actual costs in determining eligibility would provide relief for participants and caseworkers. When standard allowances are used, participants do not have to furnish proof of all actual costs and, correspondingly, caseworkers have less information to verify. In addition, some state officials told us that they thought an option, such as Transitional Benefits, that decreases the frequency with which participants must report changes would reduce workload. 
Under the Transitional Benefits option, households leaving TANF are automatically allowed up to 5 months of food stamp benefits and are not required to report changes in household circumstances during the transitional period. Other reasons were also important in the choice of some options. Lowering their state’s payment error rate was an important reason state officials gave for choosing seven of the options, including the Expanded Simplified Reporting option. States choosing this option are held responsible only for errors that result from miscalculating benefits at certification or from a household’s failure to report, during the reporting period, that its income exceeded 130 percent of the poverty level. A state’s error rate is also not affected if the household experienced a change in its circumstances that it did not report. In addition, officials in one state told us they thought the Transitional Benefits option would lower the state’s payment error rate because it allows for certain periods in which states are to be held harmless for unreported changes. Otherwise, these unreported changes could be included in calculating the error rate. Further, officials told us that the income option would make the Food Stamp Program less error prone because it allows states to use some of the same income definitions that are used when determining eligibility for TANF cash assistance or Medicaid. This alignment of income definitions may result in fewer errors because following one set of program rules is easier for participants and caseworkers than trying to adhere to many different sets of rules. Increasing participation in the Food Stamp Program, including participation of working families, was also an important reason for choosing three of the options. For example, officials in one state told us that they believed the Expanded Simplified Reporting option would contribute to higher participation rates because cases would not be closed as often under this option.
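The 130-percent-of-poverty reporting threshold described above can be sketched as a simple check. The guideline figures below are hypothetical placeholders, not actual HHS poverty guidelines, and real eligibility systems apply many additional rules.

```python
# Illustrative sketch of the Expanded Simplified Reporting threshold:
# between scheduled reports, a household generally must report a change
# only if its gross income rises above 130 percent of the federal
# poverty level for its household size. Guideline amounts are invented.

POVERTY_GUIDELINE = {1: 1000, 2: 1350, 3: 1700, 4: 2050}  # monthly, illustrative

def must_report_change(household_size: int, new_gross_income: float) -> bool:
    """Return True if the income change must be reported mid-period."""
    threshold = 1.30 * POVERTY_GUIDELINE[household_size]
    return new_gross_income > threshold

# A three-person household's income rises to $2,000/month; 130% of the
# illustrative guideline is 1.30 * 1,700 = $2,210, so no report is due.
print(must_report_change(3, 2000))  # False
print(must_report_change(3, 2300))  # True
```

The simplicity of this single check, compared with reporting every change, is the source of both the reduced participant burden and the held-harmless treatment of unreported changes in the error-rate calculation.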
In addition, state officials reported that they thought the Child Support Expense Income Exclusion option would help more households to receive food stamps by making it easier for them to meet eligibility requirements. This option allows states to exclude legally obligated child support payments from the gross income of the noncustodial parent who is paying the child support when determining food stamp eligibility. Without the option, these child support payments are deducted from the noncustodial parent’s income after eligibility for food stamps is already determined. State officials we surveyed gave additional reasons for choosing some options, including the desire to align food stamps with other assistance programs, increase benefit amounts for participants, and encourage payment of child support. Aligning the Food Stamp Program’s definition of income and resource rules with those used by TANF or Medicaid—that is, conforming the definitions of income and resources states use in the Food Stamp Program to the definitions they use in their TANF or Medicaid program—was an important reason for choosing the income and resources options. Increasing benefit amounts for participants was an important reason for choosing some options, including Transitional Benefits. Officials in one state told us they thought this option would result in greater benefit amounts for households leaving TANF because the new income that rendered them ineligible for TANF is not included in the calculation of their benefit amount for the transitional period. If this additional income were taken into account, it would most likely result in a lower benefit amount. Finally, among other reasons, state officials chose the Child Support Expense Income Exclusion option because they thought it would encourage payment of child support. State officials gave a number of reasons for not choosing options. 
Among the most common was their belief that the option would complicate rules for both participants and caseworkers in their state. Because of the variability among states in the design of Food Stamp Programs and other assistance programs, an option that simplified processes in one state may have a different effect in other states. For example, officials in two states commented that they thought the Simplified Determination of Deductions option might confuse participants and caseworkers because it would create additional, and sometimes conflicting, participant reporting rules in their state. As one state official noted, this option, which allows states to disregard reported changes in certain deductions during the certification period, could be confusing for caseworkers because of its inconsistency with her state’s policy to act on all reported changes. (See fig. 4. Also see app. III for more detail on reasons states did not choose options.) State officials gave additional reasons for not choosing some of the options. An important reason for not choosing two of the options was that officials believed the options would result in little or no increase in the amount of food stamp benefits for participants in their state. For example, officials in several states noted that according to their calculations, implementing the child support option would not increase food stamp benefit amounts for participants in their state who pay child support. In addition, some state officials commented that they did not choose the deductions option, which allows states to disregard reported changes in certain deductions during the certification period, because they believed the option could prevent participants from receiving additional benefits if their expenses increased during this period.
State officials also reported an important reason they did not choose three of the options was because of their belief that the options would affect very few participants in their state. For example, some state officials reported that the number of households that would be helped by the Transitional Benefits option in their state would be relatively small because their state had implemented simplified reporting systems that provided similar advantages, such as allowing households to forgo reporting most changes between scheduled reporting periods. Similarly, officials in one state commented that they thought the child support option would not increase the number of eligible households in their state because many of the affected households would already be categorically eligible for food stamps. In addition, an important reason state officials gave for not choosing five of the options was that they thought the options would have little or no advantage over current policy in their state. For example, officials in some states commented that the income and resources options would not allow them much additional flexibility in their Food Stamp Program definitions because FNS placed restrictions on the types of incomes and resources that could be excluded under these options, while one other state official noted that before these options became available, they had already largely aligned TANF and Medicaid definitions of resources with those used by the Food Stamp Program. In addition, other state officials told us the deductions option would be duplicative in their state because they had already implemented simplified reporting options that exempt participating households from reporting changes in the deductions covered by this option during the certification period. 
Other reasons were important for not choosing some options, including a possible increase in the state’s payment error rate or the difficulty in programming the state’s computer system to implement the change. Officials in some states said they thought the Expanded Simplified Reporting option might increase the payment error rate in their state. Simplified reporting systems reduce the frequency with which households must report changes, which may make the reporting rules of food stamps different from those of other assistance programs in the state that require households to report changes on a more regular basis. These differences in reporting rules could lead to errors by participants and caseworkers, who often determine eligibility for more than one assistance program. In addition, some state officials reported that they did not choose the Transitional Benefits option because the required changes would be too difficult to program into their state’s computer systems. Food stamp computer systems in many states are integrated with other assistance programs, such as TANF and Medicaid. In states that did choose specific options, a number of these options posed challenges for the states during initial implementation. Reported challenges included difficulties related to caseworkers’ adjustment to program changes, lack of alignment with other assistance programs, and programming state computer systems. For example, officials in one state told us caseworkers had trouble adjusting to the new reporting system under Simplified Determination of Deductions because many were accustomed to the former system in which participants reported, and caseworkers acted on, changes in some household deductions within 10 days of the change. In addition, state officials told us some options, such as Expanded Simplified Reporting, lessened the degree to which Food Stamp Program rules aligned with those of other assistance programs, which also presented challenges. 
Food stamp officials in one state told us they selected the Expanded Simplified Reporting option even though they knew it was going to result in food stamp reporting rules that were different from those of another assistance program because they thought the option would have many benefits for participants. Finally, difficulties with programming computers were commonly mentioned challenges to implementation. Officials in two states that had implemented the Transitional Benefits option told us that the integration of their food stamp computer systems with those of other assistance programs posed difficulties for them. These officials reported that they had to delink the connection with other programs so that the food stamp benefit remained frozen during the 5-month transitional period, regardless of the information recorded in the computer system for the other assistance programs. Officials from seven states provided cost estimates for implementing the options. The cost estimates ranged from $14,880 to $3.7 million, almost all of which, in six of the states, represented the costs of changing the state’s computer system. These estimates included costs for such expenses as programming and testing the computer systems. Other states did not provide estimates for the costs of implementing the options. Local food stamp officials, who often have day-to-day contact with frontline caseworkers and food stamp participants, reported mixed results from implementing the Farm Bill options; the results ranged from improvements to complications. They reported that most of the options achieved at least some of the improvements anticipated by state officials. However, in a number of cases, local officials reported that the options did not result in expected improvements, or their opinions differed on whether the option achieved the anticipated result. Finally, local officials reported that three options introduced complications in program rules for both caseworkers and participants.
Local food stamp officials reported on our survey that the options resulted in some, but not all, of the improvements anticipated by state officials. The officials’ views were mixed on whether the administrative burden was reduced for program participants and caseworkers. For example, many local officials reported that the options reduced paperwork for participants. However, officials were less likely to report that the options reduced the actual time participants spent applying for food stamps or reporting changes in household circumstances. In addition, some local officials reported that participation increased as a result of implementing options intended to increase participation, while others told us that those options had no effect on participation. Similarly, for the two options expected to increase alignment of program definitions with TANF and Medicaid, most officials agreed that these options made the definitions of income and resources the same as in TANF, but officials’ opinions differed on whether the options helped increase alignment with Medicaid. Local food stamp supervisors reported mixed results on whether the options eased the administrative burden on participants—a primary reason that states chose most of these options—as measured by both the amount of paperwork required and the time spent applying for food stamps and reporting changes. These local officials reported on our survey that the Expanded Simplified Reporting option eased the administrative burden on participants, particularly those who do not receive benefits from other assistance programs, by decreasing the time needed to prepare paperwork and report changes in their household circumstances. (See fig. 5.) For five other options, local officials differed in their views; some reported that the administrative burden on participants decreased while others reported no change. 
These five options are Simplified Standard Utility Allowance, Simplified Definition of Income, Simplified Definition of Resources, Transitional Benefits, and Simplified Determination of Deductions. For example, roughly equal numbers of local officials reported that the Transitional Benefits option decreased the administrative burden on participants and that it left the burden unchanged. Further, most local officials from states that adopted the deductions option reported that the administrative burden under this option remained the same for participants. This may be because three of the four states that implemented this option also implemented Expanded Simplified Reporting, which already decreased the administrative burden for participants in a similar way. Although five of the Farm Bill options—Expanded Simplified Reporting, Simplified Standard Utility Allowance, Simplified Definition of Income, Simplified Definition of Resources, and Simplified Determination of Deductions—were chosen by state officials to ease the administrative burden on caseworkers, local officials reported that most of these options did little to reduce the administrative burden on caseworkers. (See fig. 5.) Overall, local officials reported no effect on the number of contacts with participants and time spent with participants during those contacts. However, local officials reported some reduction in the time spent on paperwork. For example, local officials told us that the utility option reduced the amount of time caseworkers spent on paperwork because they no longer had to conduct an additional complicated procedure to determine the correct benefit amount for certain participants. (See app. IV for additional details from our surveys regarding how Farm Bill options affected participants and caseworkers.)
Similarly, for options implemented in part to achieve other goals—to decrease payment error rate, increase program participation, and increase benefit amount—some local officials reported improvements, while others told us that the options had no effect. (See fig. 6.) Although about one- quarter of local officials reported that they did not know how most Farm Bill options affected their payment error rates, some others attributed improvements in error rates to two options. About half of the local officials who responded said that Expanded Simplified Reporting and the utility option decreased the error rate, and the other half reported that the error rate remained the same. For options that state officials thought would increase program participation, local food stamp officials reported that the options had little effect on participation. For example, although state officials thought that the Child Support Expense Income Exclusion option would increase participation, local officials reported that it did not. There was no consensus on whether the other two options chosen to increase participation—Expanded Simplified Reporting and Transitional Benefits—resulted in increased participation. For options that state officials thought would increase food stamp benefit amounts, some local officials reported improvements, while others reported no change. Specifically, local officials reported that the utility option increased benefit amounts for participants, while about half reported that the income option increased benefits and about half reported it did not. Most local officials reported increased alignment of the definitions of income and resources between food stamps and TANF from the income and resources options. States selected these two options in part to increase alignment by making these definitions the same in their Food Stamp Program and TANF.
For example, local food stamp officials from one state we visited told us that aligning the definition of income under the income option eliminated a food stamp form that was not required for TANF or Medicaid. This form was used to verify loans from educational institutions, such as community colleges, regarding the amount and duration of the loan. They told us that obtaining this information from educational institutions could take a month and possibly require several follow-up contacts with the institution. This decrease in paperwork for both participants and some caseworkers demonstrates one benefit from increased alignment. However, on our survey, officials’ opinions differed on whether the income and resources options helped increase alignment of definitions between food stamps and Medicaid. Local officials may have reported little or no change from certain options because they affected relatively few food stamp participants or they did not affect caseworkers’ responsibilities. For example, most local officials reported that four options—income, resources, transitional benefits, and child support—affected less than 20 percent of their caseload. In addition, most options made only slight changes to caseworkers’ administrative processes, and others may not have affected their processes at all because some changes were automatically incorporated into state computer systems. For example, local officials reported that the child support option was automatically incorporated into state computer systems, so caseworkers’ responsibilities were not affected by this change. Several of the Farm Bill options made only slight changes to existing food stamp policy, such as the utility, child support, and Simplified Homeless Shelter Costs options. 
For example, the utility option expanded the existing standard utility allowance policy to cover two additional types of households that were previously excluded: households sharing a living space and public housing residents who were charged for only excess utility costs. Also, of the local officials we surveyed on the homeless option, the majority indicated that they implemented a similar policy prior to its availability under the Farm Bill. Many local officials reported on our surveys that three options—Expanded Simplified Reporting, Transitional Benefits, and Simplified Determination of Deductions—introduced complications in program rules for participants and caseworkers. Of these, the Expanded Simplified Reporting option—which local officials told us affected most of their caseloads—introduced the most serious complications because of how it interacts with participant reporting rules for other assistance programs. Officials told us that adopting the Expanded Simplified Reporting option resulted in Food Stamp Program reporting rules that differed in important ways from the reporting rules of other assistance programs, such as Medicaid and TANF, depending on how their states have structured these programs. About one-third of local officials we surveyed reported that this option decreased alignment between Food Stamp Program reporting rules and those of Medicaid; about one-half reported a decrease in alignment with TANF. Local officials told us that these differences in reporting rules often resulted in confusion on the part of food stamp participants, particularly because most participate in other assistance programs. They explained that although the caseworkers provided information to help participants determine which changes they were required to report (i.e., changes that increased their income to over 130 percent of the federal poverty level), some participants still reported changes that were not required.
According to a recent case study, some participants may believe they need to report these changes to maintain their food benefits. On the other hand, local officials told us that some participants think that the new reduced reporting requirements apply to other assistance programs in addition to the Food Stamp Program. Consequently, some participants do not report changes they are required to report for these other assistance programs, and in some cases, participants might face interruptions in benefits or penalties for not reporting changes for other programs. In addition to reporting complications for participants, local food stamp officials on our site visits and in telephone interviews told us that different participant reporting rules for assistance programs are confusing for caseworkers because they are uncertain whether to act on a change for the Food Stamp Program when reported for another assistance program. Moreover, trying to determine whether to act on a change for the Food Stamp Program can cause them to perform additional work. When a participant reports a change that is required for Medicaid or TANF, but not for food stamps, caseworkers must decide whether to act on that change for the Food Stamp Program. Caseworkers, who often determine benefits for more than one assistance program, first must decide if a change will increase the participant’s food stamp benefit. To make this decision, caseworkers typically enter the information into the computer system as if they were going to act on the change in order to determine if the change will result in an increase in the participant’s food stamp benefit. If the caseworker determines that the change reported by the participant will increase the participant’s benefit, caseworkers are required to act on the change. On the other hand, if the caseworker determines that the change reported by the participant will decrease the benefit, the caseworker must then determine whether or not to act on this change. 
(See fig. 7 for one example of how this process would work.) FNS regulations mandate that states not act on changes that would result in a decrease in benefits for participants unless one of three exceptions is met: (1) the household voluntarily requests that the case be closed, (2) the participant’s TANF (or, in some areas, General Assistance) grant is changed, or (3) the information about the change is considered “verified upon receipt.” A reported change is considered verified upon receipt when the information is not questionable and the provider is its primary source, such as information about earnings provided by the participant’s employer. Many local officials suggested that aligning food stamp reporting rules with those of Medicaid and TANF would help simplify this process. State officials generally believed that the Expanded Simplified Reporting option would help states reduce their food stamp payment error rates. However, local officials told us that caseworkers’ confusion about the reporting rules for different assistance programs could result in improper food stamp and other assistance program benefits. A recent case study found that caseworkers were concerned that they might make errors in benefits because of the complexity of the decision-making process involved in determining when to act or not to act on a change. Moreover, supervisors told us that payment error rates of other assistance programs might increase if participants do not report required changes to these assistance programs because they believe the Expanded Simplified Reporting rules apply to these other programs. In an attempt to address these issues, many states have modified this option in a way that may undermine some of its benefits.
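The decision process described above, depicted in fig. 7, can be sketched as a short function. The function and its inputs are hypothetical simplifications: in practice, determining whether a change increases or decreases the benefit requires a full benefit recalculation in the state's eligibility system.

```python
# Illustrative sketch of the act-on-change decision a caseworker faces
# when a participant reports a change that is required for Medicaid or
# TANF but not for food stamps. The three exceptions mirror the FNS
# rules quoted in the text; all names here are invented.

def should_act_on_change(benefit_before: float, benefit_after: float,
                         voluntary_closure_requested: bool,
                         tanf_grant_changed: bool,
                         verified_upon_receipt: bool) -> bool:
    """Decide whether to apply a reported change to the food stamp case."""
    if benefit_after > benefit_before:
        # A change that would increase the benefit must be acted on.
        return True
    # A change that would decrease the benefit may be applied only if
    # one of the three FNS exceptions is met.
    return (voluntary_closure_requested
            or tanf_grant_changed
            or verified_upon_receipt)

# An employer-verified income increase that lowers the benefit: act on it.
print(should_act_on_change(200, 150, False, False, True))   # True
# An unverified report that would lower the benefit: do not act.
print(should_act_on_change(200, 150, False, False, False))  # False
```

Even in this stripped-down form, the branching helps explain the caseworker confusion reported to us: the first test alone already requires a trial benefit calculation before the caseworker knows which rule applies.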
Officials in 17 of the 33 states that implemented this option told us that rather than having caseworkers decide whether or not to act on a change, they have a waiver from FNS that requires caseworkers to act on all changes reported by participants, including those that would decrease benefits. Some states choosing this waiver did so because acting on some but not all changes would require significant reprogramming of their computer systems and may be difficult for their caseworkers to understand. However, acting on all changes counteracts the potential reduction in workload for caseworkers. Further, when the participant reports a change during the reporting period, having the waiver does not reduce exposure to errors in the way that the option does for states without the waiver. In short, the more changes caseworkers make, the more opportunity there is for a change to be processed incorrectly. In addition, in certain circumstances, a change might result in lower benefits for participants in states with this waiver as opposed to states without this waiver. In April 2004, USDA proposed some revisions to simplified reporting regulations in order to help alleviate some of these complications with this waiver. USDA proposed that state agencies that have this waiver not be required to act on changes a household reports for another public assistance program when the change does not trigger action in that other program. For example, if a household receiving food stamps and Medicaid reports an increase in income to its Medicaid caseworker that is not required to be reported for food stamp purposes, the state agency would not have to reduce the household’s food stamp benefit if the income change does not affect its Medicaid eligibility or benefits. This proposed change would simplify the procedure for caseworkers and, in some cases, eliminate the possibility that benefits would be reduced in states with this waiver. 
However, while this proposal addresses issues for caseworkers and participants in states with this waiver, we found that local officials in states without the waiver were more likely to report that this option introduced complications for caseworkers than local officials in states with the waiver. States have the flexibility to align Medicaid and TANF reporting rules with the food stamp reporting rules available under the Expanded Simplified Reporting option, but many have not done so. Although one of the three states we visited achieved some alignment of reporting rules between TANF and food stamps, none of the three states, despite preliminary discussions between Medicaid and food stamp officials, had been successful in aligning Medicaid and food stamp reporting rules. Food stamp officials in these states told us the discussions had not resulted in alignment of reporting rules largely because Medicaid officials believed that Medicaid benefit costs could increase. For example, if a participant experienced a household change that would not affect the participant’s food stamp benefit but would affect Medicaid eligibility, the participant might receive Medicaid benefits for longer periods than he or she would have under a state’s current reporting rules. Thus Medicaid benefit costs could increase. A recent study of four states found that states are often reluctant to make changes in policies that may increase TANF or Medicaid benefit costs or caseloads, particularly when states experience budget shortfalls. For example, because states contribute a nationwide average of 43 percent to Medicaid benefit costs (while food stamp benefits are 100 percent federally funded), increases in Medicaid caseloads or costs would place demands on state budgets that increases in food stamp caseloads would not.
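The funding asymmetry noted above can be illustrated with back-of-envelope arithmetic. The caseload and benefit figures below are invented for illustration; only the 43 percent average state share and the 100 percent federal funding of food stamp benefits come from the text.

```python
# Illustrative arithmetic: food stamp benefits are 100 percent federally
# funded, while states pay a share of Medicaid benefit costs (43 percent
# on average nationwide, per the text). Caseload and per-person amounts
# are hypothetical.

STATE_MEDICAID_SHARE = 0.43  # nationwide average cited in the report

def state_cost_of_caseload_growth(new_cases: int, monthly_benefit: float,
                                  program: str) -> float:
    """Monthly cost to the state of added cases, by program."""
    if program == "food_stamps":
        return 0.0  # benefits fully federally funded
    if program == "medicaid":
        return new_cases * monthly_benefit * STATE_MEDICAID_SHARE
    raise ValueError(f"unknown program: {program}")

# 1,000 added cases at $500/month in benefits:
print(state_cost_of_caseload_growth(1000, 500, "food_stamps"))  # 0.0
# The same growth in Medicaid costs the state about $215,000 per month.
print(state_cost_of_caseload_growth(1000, 500, "medicaid"))
```

The zero-versus-nonzero state cost in this sketch is the fiscal incentive the report describes: alignment changes that risk lengthening Medicaid eligibility draw on state budgets in a way that food stamp caseload growth does not.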
In addition, another report noted that changes to rules and procedures typically require that a state reprogram its computer to apply the new policies, and these changes may result in increased cost to the state. However, the extent to which program costs might increase as a result of alignment is unclear, and in two of the three states we visited, state officials had little or no information on possible costs associated with implementing such changes. A case study also noted that in some states, staff responsible for these various benefit programs work in different agencies with varied priorities, and there is no incentive to coordinate policy across these programs. Finally, an official from HHS’s Centers for Medicare and Medicaid Services (CMS) noted that there are numerous groups of eligible Medicaid participants, and many groups, depending on state eligibility rules, may receive continuous eligibility for 12 months. For these participants, reporting on a 6-month schedule for Medicaid would not be appropriate. Two additional options introduced complications in program rules, though to a lesser extent. Some local officials reported that the Transitional Benefits option introduced complications for the caseworkers, again because of interactions between this option and other assistance programs. For example, transitional benefits from Medicaid are for persons transitioning to work and are provided for up to 1 year. On the other hand, transitional food stamp benefits are for persons leaving TANF and are granted for a maximum of 5 months. In addition, program experts told us that reporting rules for the two types of transitional benefits are not aligned, and this creates an additional administrative burden for caseworkers. 
Medicaid requires persons receiving transitional benefits to report household financial circumstances at the 4th, 7th, and 10th month of transitional benefits, whereas persons receiving food stamp transitional benefits must reapply at the end of the 5th month. About a third of local officials reported that they would like transitional food stamp benefits to be available for 6 months or to be aligned with transitional benefits from Medicaid. Finally, some local officials reported that the Simplified Determination of Deductions option introduced complications for the participants and the caseworkers. For example, local officials told us that this option complicates decisions about whether to act on changes reported by participants. Local officials told us that when participants report a change that is not required under the deductions option, caseworkers must first determine if the household is subject to reporting rules under Expanded Simplified Reporting or not. If the household falls under Expanded Simplified Reporting, the caseworkers must follow the decision-making process for Expanded Simplified Reporting depicted in figure 7 above. If the household does not fall under Expanded Simplified Reporting and the change is to a deduction from household income, the caseworkers must not act on the change. Since the late 1990s, and most recently in the Farm Bill, the Congress and FNS have offered states a number of options to simplify and streamline the administration of the Food Stamp Program. These options presented states with additional opportunities to tailor their Food Stamp Programs to the social and economic needs of their own states. Moreover, these changes coincided with actions taken by the Congress to grant states considerable flexibility in the design and administration of other key assistance programs, such as TANF and Medicaid, and the growing realization that the Food Stamp Program provides crucial support to low- income working families. 
Local officials, who have day-to-day contact with frontline caseworkers and food stamp participants, reported mixed results from implementing the options. Although they reported some improvements for both caseworkers and participants from some options, no option received consistently positive reports in all the areas where state officials expected improvements when they selected the option. In fact, in many cases, officials were as likely to report that an option resulted in no change as they were to report improvements. This may be due in part to the fact that the Farm Bill options made only slight changes to policy and, as reported, affected relatively few program participants. Of all the options, the Expanded Simplified Reporting option offered the most promise because it was selected by the most states, affects a large number of participants, and has the potential to significantly streamline the participant reporting process. The fact that local officials reported that adopting this option actually complicated program rules in many states reflects the challenge of trying to simplify requirements for one program without efforts by states to adjust the rules of other related assistance programs. This is particularly relevant because most food stamp recipients also participate in other assistance programs. The reported complications resulted in problems, such as confusion for the caseworker and a possible increase in payment errors. In response, many states adopted a waiver that negated many of the potential benefits of Expanded Simplified Reporting for caseworkers and participants. Although USDA proposed a change to this waiver, the change will not address the complications reported by local officials in states without the waiver. Moreover, neither the waiver to act on all changes nor USDA’s proposed change to the waiver will address overall alignment issues related to reporting rules among various assistance programs. 
Although federal law and program rules allow states to align participant reporting rules among assistance programs, state officials in most states have not made the broad changes that would result in greater consistency among programs. Concerns regarding whether there are costs associated with aligning participant reporting requirements may hinder a state's decision to make program changes that increase alignment. These concerns may include the cost of programming changes into state computers and the concern that benefit costs may increase in those programs that require a higher proportion of state funds, such as the Medicaid program. On the other hand, savings could result from reducing the administrative burden on caseworkers. Yet it is unclear whether costs would rise or savings would be realized. In addition, aligning Medicaid reporting rules with food stamp rules may work for some groups of Medicaid participants, but not others. Although alignment of state program rules may not be advantageous in every circumstance, many government officials told us that they were interested in improved alignment. In general, increased alignment remains important to simplification and ease of service delivery. In order to take advantage of existing opportunities available to states for streamlining participant reporting rules, we recommend that the Secretary of Agriculture direct FNS to collaborate with HHS to take the following two actions:
1. Encourage state officials to explore the advantages and disadvantages—in terms of both administrative and benefit costs and savings—of better aligning participant reporting rules in their states, particularly for Medicaid and TANF, and
2. Disseminate information and guidance to states on the opportunities available for better aligning participant reporting requirements among food stamps, Medicaid, and TANF.
We provided a draft of this report to the U.S.
Department of Agriculture for review and comment and on August 20, 2004, we met with FNS officials to get their comments. The officials said they agreed with our findings, conclusions, and recommendations. They stated that they are interested in helping states better align their participant reporting requirements and that they plan to contact HHS to initiate discussions on ways to help states align these reporting requirements. They also said they plan to provide best practices information to states regarding the administration of the Food Stamp Program and that they would explore disseminating information on any progress states have made in streamlining their participant reporting rules. FNS provided us with technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staffs have any questions about this report. Major contributors to this report are listed in appendix V. To accomplish our research objectives, we surveyed state food stamp administrators and local food stamp supervisors on the implementation of the Farm Bill food stamp options. To augment information from our state and local surveys, we conducted three comprehensive site visits (Arizona, Maryland, and Michigan) and two semi-structured telephone interviews (Colorado and South Carolina). We chose states for our site visits and telephone interviews to capture variation in the following criteria: (a) number of and type of selected options, (b) number of food stamp participants and program participation rate, (c) program error rate, and (d) entity (state or county) administering the Food Stamp Program. 
During each visit we met with state officials administering and developing policy for the Food Stamp Program, local officials in the office where services are provided, and officials responsible for other key assistance programs, such as Temporary Assistance for Needy Families (TANF) and Medicaid. We also reviewed Farm Bill legislation and related committee reports, and we reviewed Food and Nutrition Service (FNS) reports and other program analysis. We held discussions with program stakeholders, including officials at FNS headquarters and regional offices, representatives of advocacy organizations, and other program experts. We performed our work from August 2003 to June 2004 in accordance with generally accepted government auditing standards. To learn about state-level use of the food stamp options made available under the Farm Bill, we conducted a Web-based survey of food stamp administrators in the 50 states and the District of Columbia. For each of the eight Farm Bill options, we asked state officials to provide information on whether or not their state had chosen and implemented the option, reasons for choosing (or not choosing) the option, program challenges in implementing the option, changes because of the options, and potential improvements to the option. In addition, we asked for other information, including cost estimates for implementing the options, estimates of time and cost savings as a result of implementing the options, efforts to align the Food Stamp Program with TANF and Medicaid, and other food stamp options states had implemented prior to the Farm Bill. We administered the survey between December 9, 2003, and January 30, 2004. We also contacted some respondents via phone or e-mail to clarify their responses after the Web survey was completed. Food stamp administrators in all 50 states and the District of Columbia participated in the survey, for a response rate of 100 percent. 
To view selected results of GAO’s Web-based survey of food stamp administrators, go to www.gao.gov/cgi-bin/getrpt?GAO-04-1058SP. We believe the state survey data are sufficiently reliable to be used for the applicable questions of our work. We pretested the survey with several state Food Stamp administrators and modified the survey to take their comments into account. We also compared our survey responses on which of the states had implemented the options with information published by FNS and found our data had a reasonable level of consistency with the agency’s data, with the exception of data for the Simplified Homeless Shelter Costs option. Our analysis indicated fewer states had implemented this option than are listed in the FNS report. The cause of the discrepancy is that many states were already using a homeless shelter allowance of $143 prior to the Farm Bill, and many of these states are included in the information published by FNS as having implemented the Farm Bill option. However, for the purposes of our study, we decided to limit our analysis to only those states that implemented the homeless shelter allowance of $143 after the Farm Bill became effective. To learn about local-level use of the Farm Bill options, we administered 1,328 mailed surveys to supervisors in local food stamp offices in the states that had implemented the options. These survey results are generalizable to local offices in states that implemented the options. We conducted a separate survey for each of the eight options and used a separate sample for each of the surveys. On all eight surveys, we asked supervisors in local offices for their opinions about the extent to which the Farm Bill option had effected change in several areas of the Food Stamp Program, including administrative burden on participants and caseworkers, the error rate, program participation, and alignment with other assistance programs.
In addition, we asked the supervisors for their opinions about the proportion of the local office’s food stamp caseload that was affected by the option and changes the local office officials would like to see to the option. To view the results from the local food stamp office surveys, go to www.gao.gov/cgi-bin/getrpt?GAO-04-1059SP. We chose to survey food stamp supervisors because we believed they would be aware of the changes for participants and caseworkers resulting from the Farm Bill options. We collected the opinion of these supervisors because we did not find existing data on the information we needed to complete the objectives of this study, including the number of food stamp recipients affected by each option and the time costs or savings for food stamp participants and caseworkers because of the implementation of the options. We conducted the surveys between December 2003 and April 2004. We also contacted some respondents via phone or e-mail to clarify their responses after the mailed survey was completed. For each Farm Bill option, the population of interest was the set of all local food stamp offices located in states that adopted the option. Because we could not survey the entire population of local offices, we selected a sample of local offices to be representative of this population of interest. In each sample, the sampling unit is the local food stamp office. To determine the eight samples, we contacted state and county food stamp officials to compile a complete mailing list of food stamp offices in the 50 states and the District of Columbia. We compiled our own list because we were unaware of any other such comprehensive list. From these lists of local offices, we selected a simple random sample of local offices located in states that, according to information provided by FNS, had already implemented the option. 
For example, if the FNS report indicated 12 states had implemented an option, we drew the sample for that option from the combined list of the local offices in those 12 states. Since many states had chosen multiple options, we capped the number of surveys a local office could receive at three in order to minimize response burden. Only one local office was randomly selected to receive more than three surveys. To make sure this office did not receive more than three surveys, we randomly selected two of the five options for which we had drawn this office. We then randomly selected two replacement offices to receive the surveys. To select the replacement offices, we used the remaining offices on the list. Because we surveyed a random sample of local food stamp offices, our results are estimates of the responses we would have received had we surveyed the entire population of interest, and are thus subject to sampling errors. We are 95 percent confident that each of the confidence intervals in the local survey results will contain the true values of the population of interest. All percentage estimates from the local survey have sampling errors of plus or minus 10 percentage points. We calculated confidence intervals for our local survey results using methods that are appropriate for probability samples of this type. In addition to sampling errors, the practical difficulties in conducting surveys of this type may introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, the respondents’ answers may differ from those in local offices that did not respond, or errors could be made in keying completed questionnaires or in the preparation of data files for analysis. We took steps in the development, collection, and analysis of the local surveys to minimize these errors. For example, we pretested each of the eight local surveys with at least one local food stamp official prior to mailing the surveys. 
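The plus or minus 10 percentage point sampling error cited above for the local survey estimates can be illustrated with a standard margin-of-error calculation for a proportion estimated from a simple random sample. The sample size and proportion below are illustrative assumptions, not figures drawn from the report, and the sketch omits the finite-population correction that a full survey analysis would apply:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95 percent margin of error for a proportion
    estimated from a simple random sample (z = 1.96)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Illustrative only: a worst-case proportion of 0.5 estimated from a
# sample of 100 local offices yields a bound close to the plus or
# minus 10 percentage points cited for the local survey estimates.
moe = margin_of_error(0.5, 100)
print(f"plus or minus {moe:.1%}")  # plus or minus 9.8%
```

Because the bound is largest when the proportion is 0.5, quoting a single plus-or-minus figure for all percentage estimates, as the report does, is a conservative convention.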
The response rates for the eight surveys ranged from 74.0 percent to 86.1 percent (see table 1 below). Some respondents returned the survey to us but indicated that their local office had not implemented the option we asked them about or that they implemented the option prior to the date the Farm Bill became effective. We refer to these surveys as “out of scope.” There are several reasons surveys could be out of scope, including the time lag between the FNS report we used to determine our sample and the launch of our survey and possible delays in state-level policy decisions being implemented on the local level. Given how quickly the status of the Farm Bill options can change in states, the number of out of scopes is not surprising. In this report we did not use out-of-scope surveys in the estimates derived from local survey data. We did not use the data we collected from the local survey on the Simplified Homeless Shelter Costs option because we had used the FNS list of states that had implemented the option to draw our sample, but later we learned of the discrepancy between our definition of the option and the FNS data on the states that had implemented it. We concluded our sample for this option was flawed and the results should not be used in the local survey analysis. Response rates were calculated as (Total number of responses – Number of out of scopes) divided by (Total sample size – Number of out of scopes).
Appendix II: Farm Bill Options That States Have Implemented as of January 2004
The Simplified Homeless Shelter Costs option only includes states that indicated they did not have a Standard Homeless Shelter Allowance of $143 prior to the Farm Bill.
Appendix III: Selected Responses to State Survey
The Simplified Homeless Shelter Costs option only includes states that indicated they did not have a Standard Homeless Shelter Allowance of $143 prior to the Farm Bill. We only asked the child support item for the Child Support Expense Income Exclusion option.
[Appendix III response tables (state counts, N, by option) are not reproduced here.]
Appendix IV: Selected Responses to Local Surveys
[Detailed local survey response tables, showing the percentage of each office’s caseload affected by the options in ranges up to 100 percent, are not reproduced here.]
Katharine Leavitt and Anne Welch also made significant contributions to this report. In addition, Carl Barden, Kevin Jackson, MacDonald Phillips, and Jay Smale were responsible for sampling, survey design, and data analysis, and Corinna Nicolaou assisted in the report development.
Related GAO Products
Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004.
Food Stamp Employment and Training Program: Better Data Needed to Understand Who Is Served and What the Program Achieves. GAO-03-388. Washington, D.C.: March 12, 2003.
Food Stamp Program: States’ Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002.
Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001.
Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001.
Food Stamp Program: Various Factors Have Led to Declining Participation. GAO/RCED-99-185. Washington, D.C.: July 2, 1999.
Many individuals familiar with the Food Stamp Program view its rules as unnecessarily complex, creating an administrative burden for participants and caseworkers.
In addition many participants receive benefits from other programs that have different program rules, adding to the complexity of accurately determining program benefits and eligibility. The 2002 Farm Bill introduced new options to help simplify the program. This report examines (1) which options states have chosen to implement and why, and (2) what changes local officials reported as a result of using these options. Selected results from GAO's web-based survey of food stamp administrators are provided in an e-supplement to this report, GAO-04-1058SP . Another e-supplement, GAO-04-1059SP , contains results from the local food stamp office surveys. As of January 2004, states chose four of the eight Farm Bill options with greater frequency than the others. These options provided states with more flexibility in requiring participants to report changes and in determining eligibility. The most common reasons state officials gave for choosing the eight options were to simplify program rules for participants and caseworkers. Local food stamp officials reported mixed results from implementing the Farm Bill options. Although they reported some improvements for both caseworkers and participants from some options, no option received consistent positive reports in all the areas where state officials expected improvements. In fact, in many cases, officials were as likely to report that an option resulted in no change as they were to report improvements. Moreover, many local officials reported that three options introduced complications in program rules. One option that offered the most promise because it was selected by most states and affects a large number of participants resulted in food stamp participant reporting rules that differed from Medicaid and TANF. These differences resulted in confusion for food stamp participants and caseworkers, and some changes were made that undermined the intended advantages of the option. 
These problems reflect the challenge of trying to simplify rules for one program without making the rules of other related programs the same. Concerns about whether there are costs associated with aligning reporting rules may hinder a state's decision to pursue alignment; yet the extent to which program costs might increase as a result of making reporting rules the same is unclear.
Five ESE sites collectively contain substantial quantities of Category I special nuclear material. These include the following: the Savannah River Site near Aiken, South Carolina, and the Hanford Site in Richland, Washington, which are managed by the Office of Environmental Management; the Idaho National Engineering and Environmental Laboratory and the Argonne National Laboratory-West, which are located in Idaho Falls, Idaho, and are managed by the Office of Nuclear Energy, Science and Technology; and the Oak Ridge National Laboratory in Oak Ridge, Tennessee, which is managed by the Office of Science. Contractors operate each site for ESE. DOE has requested over $300 million in fiscal year 2006 for security at these five sites. Within DOE’s Office of Security and Safety Performance Assurance, DOE’s Office of Security develops and promulgates orders and policies to guide the department’s safeguards and security programs. DOE’s overall security policy is contained in DOE Order 470.1, Safeguards and Security Program, which was originally approved in 1995. The key component of DOE’s approach to security is the DBT, a classified document that identifies the characteristics of the potential threats to DOE assets. A classified companion document, the Adversary Capabilities List, provides additional information on terrorist capabilities and equipment. The DBT traditionally has been based on a classified, multiagency intelligence community assessment of potential terrorist threats, known as the Postulated Threat. The threat from terrorist groups is generally the most demanding threat contained in the DBT. DOE counters the terrorist threat specified in the DBT with a multifaceted protective system. 
While specific measures vary from site to site, all protective systems at DOE’s most sensitive sites employ a defense-in-depth concept that includes the following: a variety of integrated alarms and sensors capable of detecting intruders; physical barriers, such as fences and antivehicle obstacles; numerous access control points, such as turnstiles, badge readers, vehicle inspection stations, radiation detectors, and metal detectors; operational security procedures, such as a “two-person” rule that prevents any one person from having sole access to special nuclear material; and hardened facilities and vaults. Each site also has a heavily armed protective force that is often equipped with such items as automatic weapons, night vision equipment, body armor, and chemical protective gear. These protective forces are composed of Security Police Officers who are classified into three groups: Security Police Officer-I, Security Police Officer-II, and Security Police Officer-III. Security Police Officer-Is are only assigned to fixed, armed posts. Generally, very few of these officers are used at ESE sites because of the limited roles they can fill. Security Police Officer-IIs generally are assigned to posts such as access control booths, or to foot or vehicle patrols. Finally, Security Police Officer-IIIs are responsible for operations such as hostage rescue and the recapture and recovery of special nuclear material. According to federal regulations, Security Police Officer-IIIs have more demanding physical fitness and training standards than Security Police Officer-Is or Security Police Officer-IIs. The ESE sites we visited employ about 1,000 Security Police Officer-IIs and Security Police Officer-IIIs. ESE protective forces work for private contractors and are unionized. Protective force duties and requirements, such as physical fitness standards, are explained in detail in DOE Manual 473.2-2, Protective Force Program Manual, as well as in DOE regulations (10 C.F.R. pt.
1046, Physical Protection of Security Interests). DOE issued the current Protective Force Program Manual in June 2000. Although protective forces are expected to comply with the duties and requirements established in DOE policies, deviations from these policies are allowed as long as certain approval and notification criteria are met. In addition to complying with these security requirements, DOE protective systems, including protective forces, also must meet performance standards. For example, DOE sites are required to demonstrate that their protective systems are capable of defending special nuclear material against terrorist forces identified in the DBT. The performance of protective systems is formally and regularly examined through vulnerability assessments. A vulnerability assessment is a systematic evaluation process in which qualitative and quantitative techniques are applied to detect vulnerabilities and arrive at effective protection of specific assets, such as special nuclear material. To conduct such assessments, DOE uses, among other things, subject matter experts, such as U.S. Special Forces; computer modeling to simulate attacks; and force-on-force exercises, in which the site’s protective forces undergo simulated attacks by a group of mock terrorists. In addition to their use in evaluating the effectiveness of physical protection strategies, DOE believes force-on-force exercises are the most realistic representation of adversary attacks that can be used to train protective forces. Protective forces at the five ESE sites containing Category I special nuclear material generally meet existing key DOE readiness requirements. Specifically, we determined that ESE protective forces generally comply with DOE standards for firearms proficiency, physical fitness levels, and equipment standardization and that the five ESE sites had the required training programs, facilities, and equipment.
In addition, we found that the majority of the 105 protective force members we interviewed at ESE sites generally believe that they currently are ready to perform their mission of protecting the site’s special nuclear material. However, we did find some weaknesses at ESE sites that could impair the ability of ESE protective forces to defend their sites. A ready force should possess a sufficient number of experienced, trained, and properly equipped personnel. Through realistic and comprehensive training, these personnel are forged into a cohesive unit that can perform its tasks even under extreme conditions. DOE orders and federal regulations establish the framework for ensuring that DOE protective forces are ready to perform their mission. We found that ESE protective force officers generally believe that they are ready to perform their mission. Specifically, 102 of the 105 officers we interviewed stated that they believed that they, and their fellow officers, understood what was expected of them should the site be attacked by a terrorist group. Moreover, 65 of the 105 officers rated the readiness of their site’s protective force as high, while 20 officers rated their protective force as somewhat or moderately ready to defend the site. Only a minority of the officers (16 of 105) we interviewed rated the readiness of their force to defend their sites as low. In addition, the majority of officers we interviewed believed they and the protective force officers with whom they worked on a regular basis have formed a cohesive unit that would be able to perform their most essential mission—that of protecting special nuclear material. For example, of the 105 officers we interviewed, 84 officers responded that they had a high degree of confidence in their fellow officers in the event of a terrorist attack, and 88 reported that their fellow officers would be willing to risk their lives in defense of their site. 
As called for in DOE’s Protective Force Program Manual, readiness is achieved through appropriate training and equipment. Each of the five sites we visited had formally approved annual training plans. Each site generally had the training facilities, such as firearms ranges, classrooms, computer terminals, and exercise equipment, which enabled them to meet their current DOE and federal training requirements. Furthermore, each site maintained computerized databases for tracking individual protective force officers’ compliance with training requirements. To determine if these programs and facilities were being used to implement the DOE requirements and federal regulations, we focused on three key areas—firearms proficiency, physical fitness, and protective force officer equipment. Firearms Proficiency. DOE’s Protective Force Program Manual states that protective force officers must demonstrate their proficiency with the weapons that are assigned to them every 6 months. According to the training records of the 105 protective force officers we interviewed, 79 had met this proficiency requirement with their primary weapon, the M-4 or M-16 semiautomatic rifle. Of the 26 officers who had not met this requirement within the 6-month time frame, 11 officers were all located at one site and 8 of these 11 officers did not meet the requirement until 2 to 5 months after the required time. According to an official at this site, seven of the eight officers could not complete the requirement in a timely fashion because the site’s firing range was closed for the investigation of an accidental weapon discharge that had resulted in an injury to a protective force officer. We determined that 2 of the 26 officers did not complete the requirement for medical reasons. We were not given reasons why the remaining officers did not meet the requirement. Physical Fitness.
Under DOE regulations, DOE’s contractors’ protective force personnel who are authorized to carry firearms must meet a minimum standard for physical fitness every 12 months. There are two standards for such personnel—Offensive Combative and Defensive Combative. All Security Police Officer-IIIs, which include DOE special response team members, must meet the Offensive Combative standard, which requires a 1-mile run in no more than 8 minutes 30 seconds and a 40-yard prone-to-running dash in no more than 8 seconds. All other protective officers authorized to carry firearms must meet the Defensive Combative standard, which requires a one-half mile run in no more than 4 minutes 40 seconds and a 40-yard prone-to-running dash in no more than 8.5 seconds. According to the training records of the 105 protective force officers we reviewed, 103 of the 105 protective force officers had met the standard required by federal regulation for their position. Two officers who did not meet the requirement were on medical restriction. The records for another officer showed him as having met the requirement, but additional records provided by the site showed the officer had completed the run in a time that exceeded the standard. Site officials could not provide an explanation for this discrepancy. Protective Officer Equipment. DOE’s Protective Force Program Manual sets a number of requirements for protective force equipment. For example, all Security Police Officers are required to carry a minimum set of equipment, including a portable radio, a handgun, and an intermediate force weapon such as a baton. In addition, a mask to protect against a chemical attack must be carried or available to them. All Security Police Officer-IIs and Security Police Officer-IIIs must also have access to personal protective body armor. In addition, firearms must be kept serviceable at all times and must be inspected by a DOE-certified armorer at least twice a year to ensure serviceability. 
Issued firearms must be inventoried at the beginning of each shift, an inventory of all firearms in storage must be conducted weekly, and a complete inventory of all firearms must be conducted on a monthly basis. Finally, DOE protective forces equipment must be tailored to counter adversaries identified in the DBT. To this end, sites employ a variety of equipment, including automatic weapons, night vision equipment, and body armor. In most cases, each site’s protective forces carried or had access to the required minimum standard duty equipment. Most sites demonstrated that they had access to certified armorers, and each site maintained the required firearms maintenance, inspection, and inventory records, often kept in a detailed computerized database. The appropriate policies and procedures were also in place for the inventory of firearms. In addition, some sites have substantially increased their protective forces weaponry since September 11, 2001, or have plans to further enhance these capabilities to meet the 2004 DBT. While protective forces at ESE sites are generally meeting current DOE requirements, we identified some weaknesses in ESE protective force practices that could adversely affect the current readiness of ESE protective forces to defend their sites. These include protective force officers’ lack of participation in realistic force-on-force exercises; the frequency and quality of training opportunities; the lack of dependable communications systems; insufficient protective gear, including protective body armor and chemical protective gear; and the lack of armored vehicles. Performance Testing and Training. According to DOE’s Protective Force Program Manual, performance tests are used to evaluate and verify the effectiveness of protective force programs and to provide needed training. 
A force-on-force exercise is one type of performance test during which the protective force engages in a simulated battle against a mock adversary force, employing the weapons, equipment, and methodologies postulated in the DBT. DOE believes that force-on-force exercises are a valuable training tool for protective force officers. Consequently, DOE policy requires that force-on-force exercises be held at least once a year at sites that possess Category I quantities of special nuclear material or Category II quantities that can be rolled up to Category I quantities. However, DOE neither sets standards for individual protective force officers’ participation in these exercises, nor requires sites to track individual participation. While 84 of the 105 protective force officers we interviewed stated they had participated in a force-on-force exercise, only 46 of the 84 protective force officers believed that the force-on-force exercises they had participated in were either realistic or somewhat realistic. Additionally, protective force officers often told us that they did not have frequent and realistic tactical training. In this regard, 33 of the 84 protective force officers reported that safety considerations interfered with the realism of the force-on-force exercises, with some protective force officers stating that they were limited in the tactics they could employ. For example, some protective force officers stated that they were not allowed to run up stairwells, climb fences, or exceed the speed limit in patrol vehicles. Contractors’ protective force managers agreed that safety requirements limited the kind of realistic force-on-force training that is needed to ensure effective protective force performance. Communications Equipment.
According to DOE's Protective Force Program Manual, the radios protective force officers use must be capable of two-way communications, provide intelligible voice communications, and be readily available in sufficient numbers to equip protective force personnel. In addition, a sufficient number of batteries must be available and maintained in a charged condition. Protective force officers at all five of the sites we visited reported problems with their radio communications systems. Specifically, 66 of the 105 protective force officers reported that they did not always have dependable radio communications, citing sporadic battery life (23 officers) and poor reception at some locations on site (29 officers) as the two most significant problems. In addition, some of the protective force officers believed that radio communications were not sufficient to support their operations and could not be relied on if a terrorist attack occurred. Site security officials at two sites acknowledged that efforts were under way to improve radio communications equipment. In addition, security officials said other forms of communications, such as telephones, cellular telephones, and pagers, were provided for protective forces to ensure that they could communicate effectively. Protective Body Armor. DOE's Protective Force Program Manual requires that Security Police Officer-IIs and -IIIs wear body armor or that body armor be stationed in a way that allows them to quickly put it on to respond to an attack without negatively affecting response times. At one site, we found that most Security Police Officer-IIs had not been issued protective body armor because the site had requested and received in July 2003 a waiver to deviate from the requirement to equip all Security Police Officer-IIs with body armor.
The waiver was sought for a number of reasons, including the (1) increased potential for heat-related injuries while wearing body armor during warm weather, (2) increased equipment load that armor would place on protective force members, (3) costs of acquiring the necessary quantity of body armor and the subsequent replacement costs, and (4) belief that the risks of not providing all Security Police Officer-IIs with body armor could be mitigated by using cover provided at the site by natural and man-made barriers. According to a site security official, this waiver is currently being reviewed because of the increased threat contained in the 2004 DBT. Special Response Team Capabilities. Security Police Officer-IIIs serve on special response teams responsible for offensive operations, such as hostage rescue and the recapture and recovery of special nuclear material. Special response teams are often assigned unique equipment, including specially encrypted radios; body armor that provides increased levels of protection; special suits that enable officers to operate and fight in chemically contaminated environments; special vehicles, including armored vehicles; submachine guns; light machine guns; grenade launchers; and precision rifles, such as Remington 700 rifles and Barrett .50 caliber rifles. These response teams are also issued breaching tools to allow them to reenter facilities to which terrorists may have gained access. Each site with Category I special nuclear material must have a special response team capability available on a continuous basis. However, one ESE site does not have this capability and, instead, relies on another organization, through a formal memorandum of understanding, to provide a special response team. This arrangement, however, has not been comprehensively performance-tested, as called for in the memorandum of understanding.
Site officials state that they will soon conduct the first comprehensive performance test of this memorandum of understanding. Chemical Protective Gear. DOE's Protective Force Program Manual specifies that all Security Police Officer-IIs and -IIIs be provided, at a minimum, with protective masks that provide for nuclear, chemical, and biological protection. Other additional chemical protective gear and procedures are delegated to the sites. At the four sites with special response teams, we found that the teams all had special suits that allowed them to operate and fight in environments that might be chemically contaminated. For Security Police Officer-IIs, chemical protective equipment and expectations for fighting in chemically contaminated environments varied. For example, two sites provided additional protective equipment for their Security Police Officer-IIs and expected them to fight in such environments. Another site did not provide additional equipment but expected its Security Police Officer-IIs to evacuate along with other site workers. Finally, the one site that did not have a special response team expected its Security Police Officer-IIs to fight in chemically contaminated environments. However, the site provided no additional protective gear for its officers other than standard-duty issue long-sleeved shirts and the required protective masks. Protective Force Vehicles. We found that ESE sites currently do not have the same level of vehicle protection as National Nuclear Security Administration (NNSA) sites that also have Category I special nuclear material. Specifically, while not a DOE requirement, all NNSA sites with Category I special nuclear material currently operate armored vehicles. However, only one of the five ESE sites with Category I special nuclear material operated armored vehicles at the time of our review. One other ESE site was planning to deploy armored vehicles.
To successfully defend against the larger terrorist threat contained in the 2004 DBT by October 2008, DOE and ESE officials recognize that they need to take several actions. These include transforming DOE's current protective force into an elite force, developing and deploying new security technologies, consolidating and eliminating special nuclear material, and making organizational improvements within ESE's security program. However, because these initiatives, particularly an elite force, are in early stages of development and will require a significant commitment of resources and coordination across DOE and ESE, their completion by the October 2008 DBT implementation deadline is uncertain. The status of these initiatives is as follows: Elite Forces. DOE officials believe that the way its sites, including those sites managed by ESE, currently train their contractor-operated protective forces will not be adequate to defeat the terrorist threat contained in the 2004 DBT. This view is shared by most protective force officers (74 out of 105) and their contractor protective force managers, who report that they are not at all confident in their current ability to defeat the new threats contained in the 2004 DBT. In response, the department has proposed the development of an elite force that would be patterned after U.S. Special Forces and might eventually be converted from a contractor-operated force into a federal force. Nevertheless, despite broad support, DOE's proposal for an elite force remains largely in the conceptual phase. DOE has developed a preliminary draft implementation plan that lays out high-level milestones and key activities, but this plan has not been formally approved by the Office of Security and Safety Performance Assurance. The draft implementation plan recognizes that DOE will have to undertake and complete a number of complex tasks in order to develop the elite force envisioned.
For example, DOE will have to revise its existing protective forces policies to incorporate, among other things, the increased training standards that are needed to create an elite force. Since this proposal is only in the conceptual phase, completing this effort by the October 2008 DBT implementation deadline is unlikely. New Security Technologies. DOE is seeking to improve the effectiveness and survivability of its protective forces by developing and deploying new security technologies. It believes technologies can reduce the risk to protective forces in case of an attack and can provide additional response time to meet and defeat an attack. Sixteen of the 105 protective force officers we interviewed generally supported this view and said they needed enhanced detection technologies that would allow them to detect adversaries at much greater ranges than is currently possible at most sites. However, a senior DOE official recently conceded that the department has not yet taken the formal steps necessary to coordinate investment in emerging security technologies and that the role of technology in helping sites meet the new threats contained in the 2004 DBT by the department’s deadline of October 2008 is uncertain. Consolidation and Elimination of Materials. ESE’s current strategy to meet the October 2008 deadline relies heavily on consolidating and eliminating special nuclear material between and among ESE sites. For example, the Office of Nuclear Energy, Science and Technology plans to down-blend special nuclear material and extract medically useful isotopes at the Oak Ridge National Laboratory—an Office of Science site. This action would eliminate most of the security concerns surrounding the material. Neither program office, however, has been able to formally agree on its share of additional security costs, which have increased significantly because of the new DBT. 
In addition, neither ESE nor DOE has developed a comprehensive, departmentwide plan to achieve the needed cooperation and agreement among the sites and program offices to consolidate special nuclear material, as we recommended in our April 2004 report. In the absence of a comprehensive plan, completing most of these consolidation activities by the October 2008 DBT implementation deadline is unlikely. Organizational Improvements. The ESE headquarters security organization is not well suited to meeting the challenges associated with implementing the 2004 DBT. Specifically, there is no centralized security organization within the Office of the Under Secretary, ESE. The individual who serves as the Acting ESE Security Director has been detailed to the Office by DOE's Office of Security and Safety Performance Assurance and has no programmatic authority or staff. This lack of authority limits the Director's ability to facilitate ESE and DOE-wide cooperation on such issues as material down-blending at Oak Ridge National Laboratory and material consolidation at other ESE sites. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact Gene Aloise at (202) 512-3841. James Noel, Jonathan Gill, Don Cowan, and Preston Heard made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

A successful terrorist attack on a Department of Energy (DOE) site containing nuclear weapons material could have devastating effects for the site and nearby communities.
DOE's Office of the Under Secretary for Energy, Science and Environment (ESE), which is responsible for DOE operations in areas such as energy research, manages five sites that contain weapons-grade nuclear material. A heavily armed security force equipped with such items as automatic weapons protects ESE sites. GAO was asked to examine (1) the extent to which ESE protective forces are meeting DOE's existing readiness requirements and (2) the actions DOE and ESE will need to take to successfully defend against the larger, revised terrorist threat identified in the October 2004 design basis threat (DBT) by DOE's implementation deadline of October 2008. Protective forces at the five ESE sites containing weapons-grade nuclear material generally meet existing key DOE readiness requirements. Specifically, GAO determined that ESE protective forces generally comply with DOE standards for firearms proficiency, physical fitness levels, and equipment standardization and that the five ESE sites had the required training programs, facilities, and equipment. However, GAO did find some weaknesses at ESE sites that could adversely affect the ability of protective forces to defend these sites. For example, despite the importance of training exercises in which protective forces undergo simulated attacks by a group of mock terrorists (force-on-force exercises), DOE neither sets standards for individual protective force officers to participate in these exercises nor requires sites to track individual participation. GAO also found that protective force officers at all five of the ESE sites reported problems with their radio communications systems. Specifically, 66 of the 105 protective force officers GAO interviewed reported that they did not always have dependable radio communications, as required by DOE Manual 473.2-2, Protective Force Program Manual. Security officials stated that related improvements were under way.
To successfully defend against the larger terrorist threat contained in the 2004 DBT by October 2008, DOE and ESE officials recognize that they will need to take several prompt and coordinated actions. These include transforming DOE's current protective force into an elite, possibly federalized, force; developing and deploying new security technologies to reduce the risk to protective forces in case of an attack; consolidating and eliminating nuclear weapons material between and among ESE sites; and creating a sound ESE management structure that has sufficient authority to ensure coordination across all ESE offices that have weapons-grade nuclear material. However, because these initiatives, particularly an elite force, are in early stages of development and will require a significant commitment of resources and coordination across DOE and ESE, their completion by the October 2008 DBT implementation deadline is uncertain.
CCRCs are one of a number of options older Americans may choose to meet housing and other daily needs and especially to receive long-term care, which Medicare and private health insurance typically do not cover and which can be extremely costly. Older Americans may use a number of options to pay for their short- and long-term care as they age, including relying on savings or investments, purchasing long-term care insurance or annuities, entering into a reverse mortgage, or relying on government-financed programs such as Medicare and Medicaid. For CCRCs specifically, many use the proceeds from the sale of their homes and any retirement assets to pay for the housing and care arrangements. CCRCs are generally residential facilities established in a campus-like setting that provide access for older Americans to three levels of housing and care: independent homes or apartments where residents live much as they did in their own homes; assisted living, which provides help with the daily tasks of living; and skilled nursing care for those with greater physical needs. Most residents must be able to live independently when they enter into a contract with a CCRC, with the intent of moving through the three levels of care as their needs change. According to industry sources, the CCRC model has existed for over 100 years, starting with religious and fraternal organizations that provided care for older Americans who turned over their homes and assets to those organizations. As of July 2009, some 1,861 individual CCRCs existed in the United States, most of them nonprofit organizations. Over the last 2 decades, the CCRC industry has grown and diversified, with religious, fraternal, nonprofit, and for-profit entities operating CCRCs of various sizes that have different structures, residential and care choices, and payment options. CCRCs are primarily regulated by states rather than by the federal government.
State CCRC regulation developed over time and in some instances grew out of the need to address financial and consumer protection issues, including insolvency, which arose in the CCRC industry in the 1970s and 1980s. States generally license CCRC providers, monitor and oversee their financial condition, and have regulatory provisions designed to inform and protect consumers. The U.S. Department of Health and Human Services (HHS) provides oversight of nursing facilities that are commonly part of CCRCs, but this oversight focuses on the quality of care and safety of residents in those facilities that receive payments under the Medicare and Medicaid programs. While states primarily regulate CCRCs, Congress has considered proposals to introduce greater federal oversight. For example, in 1977 Representatives William Cohen and Gladys Spellman introduced a bill that would provide federal oversight of certain continuing care institutions that received Medicare or Medicaid payments or were constructed with federal assistance. The bill proposed, among other things, requiring that CCRC contracts clearly explain all charges and that CCRCs provide full financial disclosures, maintain sufficient financial reserves, and undergo an annual audit. While the bill did not pass, one industry source noted that several states at the time were developing or refining their own CCRC regulation. CCRCs offer older Americans a range of housing and health care options that include independent living, assisted living, and skilled nursing units all within the same community. CCRCs generally offer independent living units such as apartments, cottages, town homes, or small single-family homes for incoming residents who are relatively healthy and self-sufficient. They also provide residents opportunities to arrange for certain convenience services, including meals, housekeeping, and laundry, and provide amenities such as fitness centers, libraries, health clinics, and emergency services.
While residents may move back and forth among the levels of care to meet changing health needs, residents generally move to a CCRC's assisted living facility when they need assistance with specific activities of daily living, including eating, dressing, and bathing. CCRCs' assisted living units are usually located separately from the independent living units and skilled nursing facilities. If a resident needs 24-hour monitoring, assistance, and care, CCRCs can offer skilled nursing care that includes supervision by nurses or other medical staff. CCRCs typically offer one of three general types of contracts that involve different combinations of entrance and monthly fee payments. Some CCRCs may offer residents a choice of the following contract types, while others may choose to offer only one. Type A, extensive or Life Care contracts, include housing, residential services, and amenities—including unlimited use of health care services—at little or no increase in monthly fees as a resident moves from independent living to assisted living, and, if needed, to nursing care. Type A contracts generally feature substantial entrance fees but may be attractive because monthly payments do not increase substantially as residents move through the different levels of care. As a result, CCRCs absorb the risk of any increases in the cost of providing health and long-term care to residents with these contracts. Type B, or modified contracts, often have lower monthly fees than Type A contracts, and include the same housing and residential amenities as Type A contracts. However, only some health care services are included in the initial monthly fee. When a resident's needs exceed those services, the fees increase to market rates. For example, a resident may receive 30, 60, or 90 days of assisted living or nursing care without an increased charge.
Thereafter, residents would pay the market daily rate or a discounted daily rate—as determined by the CCRC—for all assisted living or nursing care required and face the risk of having to pay high costs for needed care. Type C, or fee-for-service contracts, include the same housing, residential services, and amenities as Type A and B arrangements but require residents to pay market rates for all health-related services on an as-needed basis. Type C contracts may involve lower entrance and monthly fees while a resident resides in independent living, but the risk of higher long-term care expenses rests with the resident. Some CCRCs offer a fourth type of contract, Type D or rental agreements, which generally require no entrance fee but guarantee access to CCRC services and health care. Type D contracts are essentially pay-as-you-go: CCRCs charge monthly fees of residents based on the size of the living unit and the services and care provided. According to CCRC providers, prospective residents are generally screened to determine their general health status in order to determine the best living situation. Prospective residents must also submit detailed financial information that includes income and tax records to ensure that they can pay CCRC fees over time. Industry participants noted that entry fees—typically made as a large lump-sum payment—can represent a substantial portion, if not all, of potential residents' assets. Residents must also be able to pay monthly fees, which typically cover housing and convenience services associated with housing and are based on the type of contract, size of the living unit, and level of care provided. As we have seen, these fees may also include all or some health care services. CCRCs use a variety of techniques to determine fees, including actuarial studies and financial analyses.
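The way these contract types allocate long-term care risk can be sketched with simple arithmetic. The figures and durations below are invented for illustration; they are not drawn from the CCRCs we reviewed.

```python
# Hypothetical illustration of how CCRC contract types allocate risk.
# All dollar figures and durations are invented for this example; they are
# not drawn from any CCRC GAO reviewed.

def lifetime_cost(entrance_fee, monthly_fee, months_independent,
                  months_care, care_monthly_rate):
    """Total paid: entrance fee, plus monthly fees while in independent
    living, plus the applicable monthly rate while receiving care."""
    return (entrance_fee
            + monthly_fee * months_independent
            + care_monthly_rate * months_care)

# A resident who lives independently for 8 years, then needs 2 years of care.
months_ind, months_care = 96, 24

# Type A (life care): large entrance fee; care costs little more per month.
type_a = lifetime_cost(400_000, 3_000, months_ind, months_care, 3_000)

# Type C (fee-for-service): lower fees up front; market rate for care.
type_c = lifetime_cost(200_000, 2_000, months_ind, months_care, 9_000)

print(f"Type A total: ${type_a:,}")
print(f"Type C total: ${type_c:,}")
```

With these invented numbers the Type A resident pays $760,000 and the Type C resident $608,000; but if the care period stretches past roughly four years, the Type C total overtakes Type A. That crossover is the risk transfer the contract types embody: under Type A the CCRC, not the resident, absorbs the cost of an unexpectedly long period of care.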
For example, one CCRC we reviewed uses actuarial studies with mortality and morbidity tables to assess the likely inflow, outflow, and turnover of the CCRC occupants. Other CCRCs use some combination of resident statistics, Medicare and Medicaid reimbursement rates, marketing needs, and operating costs. Table 1 provides information on the range of entrance and monthly fee costs for the eight CCRCs we reviewed and illustrates how—depending on contract type—costs may change for consumers as they move among the independent, assisted, and skilled nursing living units. According to industry participants, building and operating a CCRC is a complex process that typically begins with an initial planning phase. During this phase, the company assembles a development team, makes financial projections, assesses market demand, and determines the kinds of housing and services to be offered. Initial and longer-term planning also entails assessing funding sources and seeking funding commitments from investors and lenders, particularly construction loans and state tax-exempt bond proceeds, where applicable. During the developmental phase, developers will presell units to begin building capital to fund construction of CCRC housing and other facilities and begin construction. Once the initial phases of construction are complete, CCRC providers have move-in periods for new residents, continue marketing efforts to build toward full occupancy, complete construction, and begin making long-term debt service payments (fig. 1). CCRCs, like other businesses, face a number of risks during the start-up phase. First, actual construction costs and consumer demand may not match developers' forecasts.
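The turnover analysis such actuarial studies perform, and the occupancy risk it is meant to quantify, can be sketched in miniature. The exit and resale rates below are invented assumptions for illustration, not mortality or morbidity data from any actual CCRC.

```python
# A drastically simplified sketch of an actuarial occupancy projection.
# The rates are invented assumptions, not data from any actual CCRC.

def project_occupancy(units, occupied, years, exit_rate, refill_rate):
    """Each year some residents leave independent living (death or transfer
    to assisted living or nursing care); only part of the vacated units are
    resold to new residents within the year."""
    history = []
    for _ in range(years):
        vacated = round(occupied * exit_rate)   # residents leaving their units
        resold = round(vacated * refill_rate)   # vacated units resold this year
        occupied = min(units, occupied - vacated + resold)
        history.append(occupied)
    return history

# 200 independent-living units, 190 occupied; assume 12% of residents exit
# each year and 75% of vacated units are resold within the year.
print(project_occupancy(200, 190, 5, 0.12, 0.75))
```

Even this toy model shows the dynamic regulators worry about: when resales persistently lag exits, occupancy, and with it entrance-fee revenue, drifts downward year after year.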
To attract financing from lenders and ensure adequate underwriting for CCRC projects, developers need to generate sufficient pre-sales and deposits prior to construction to show a tangible commitment from prospective residents. In addition, facilities in the start-up stage need to reach full occupancy as quickly as possible in order to generate income that will not only cover operational costs once built but also help pay down construction loans. As a result, accurate projections of future revenues and costs are important as a CCRC becomes operational. Second, entrance fees and monthly fees may ultimately prove to be inadequate to cover the CCRC's costs. CCRCs generally have to keep prices low enough to attract residents and stay competitive but high enough to meet short- and long-term costs. Determining appropriate fees can, in itself, be a complex process because it involves projecting a number of variables into the future, including occupancy levels, mortality rates, medical and labor costs, and capital improvement costs. For this reason, many CCRCs use actuarial consultants to help in these determinations. CCRCs that set fees too low may have to significantly raise entrance and other fees to meet the costs of care and future capital improvements. Fee increases can take the form of larger-than-projected monthly fees for assisted living or nursing care and fees on other miscellaneous services, both of which can affect residents' long-term ability to pay and the competitive position of the CCRC in the marketplace. CCRCs may face other financial risks, including unforeseen events that lead to higher-than-expected costs. For example, many nonprofit CCRCs rely on property tax exemptions when estimating CCRC costs and developing CCRC projects.
According to industry associations and a state regulator, however, difficult economic times are causing some municipalities to look for new sources of revenue, and some may be reevaluating property tax exemptions previously granted to CCRCs. Loss of these exemptions can be very costly; for example, industry participants attributed one recent CCRC failure in Pennsylvania in part to the loss of its property tax exemption. In addition, many CCRCs are obligated to refund all or part of an entrance fee when a resident vacates a unit. These refunds represent substantial financial obligations that CCRCs must meet and can significantly affect operations because fees are used to maintain a certain level of liquidity, or cash on hand. CCRC officials said that refunds were usually contingent on having a new resident move into the vacated unit and that a recent reduction in occupancy levels has meant former residents and their families have had to wait longer for refunds. For example, some CCRC officials noted that due to real estate market and other factors, refunds are taking several months longer than during stronger market conditions.

Erickson Retirement Communities was one of the largest CCRC developers. Typically it built large non-profit CCRC facilities, each with 1,500 to 2,000 units, for middle-income residents. Erickson established a construction firm to build CCRCs and a management company to help operate its facilities. Another part of Erickson's CCRC business model generally involved leasing the land and facilities it developed to separate independent, non-profit CCRCs, which it created. These non-profits would then often eventually end up purchasing the CCRC facilities. As of February 2010, Erickson had developed 19 CCRCs that provided homes and services to approximately 22,000 residents. Erickson, however, filed for bankruptcy in 2009. Like many CCRCs, Erickson used construction loans and other financing instruments to meet the considerable costs of building CCRC facilities and readying them for occupancy by older Americans. According to Erickson officials, a number of conditions contributed to their financial challenges and bankruptcy filing. Declining economic and real estate conditions slowed the demand for and purchase of CCRC units and challenged Erickson's ability to raise revenue needed to develop CCRCs. Simultaneously, tightening credit markets reduced or eliminated Erickson's ability to access new sources of capital or to restructure or refinance existing loan arrangements. These conditions prevented Erickson from meeting debt service and other CCRC expenses and led to its bankruptcy filing. Ultimately, Erickson emerged from bankruptcy with a new owner, Redwood LLC, in May 2010. Despite the ownership change, Erickson officials do not expect any CCRC residents' contracts or living conditions to be affected, as those contracts were with the CCRCs themselves, which were not part of the bankruptcy filing.

CCRCs also face risks from external economic factors that are out of their control and could adversely affect occupancy levels and financial condition. First, slow real estate markets, such as those of the last several years, can make it very difficult for older Americans to sell their homes to pay CCRC entrance fees. As a result, according to CCRC providers, occupancy levels at many CCRCs have fallen over the past several years. In addition, because older Americans may be staying in their homes longer and thus moving into CCRCs at a higher age, residents may spend less time in independent living units than they had in the past. This can negatively affect CCRCs' long-term financial condition because residents in independent living may help subsidize those living in assisted living or nursing care. Second, declining equity and credit markets, which have also been a feature of the recent financial crisis, can also affect occupancy and financial condition. During the development phase, CCRCs often depend on access to credit in order to complete construction, and reduced access to funds can be problematic.
For example, CCRC and state regulatory officials suggested that tightening credit and real estate markets, combined with Erickson Retirement Communities’ reliance on borrowed funds, were the primary financial challenges that resulted in Erickson’s 2009 filing for bankruptcy protection (see sidebar on Erickson Retirement Communities). In addition, occupancy can depend on CCRCs’ ability to remain attractive to new residents by maintaining and upgrading their facilities. While the ability to maintain and upgrade facilities depends in part on long-range planning, it can also depend on access to credit. CCRC officials said that over the last several years the availability of both state financing and commercial bank financing had diminished due to tightened credit markets. Although few CCRCs have closed or declared bankruptcy over the last 20 years, recent economic conditions have negatively affected the financial condition of many facilities and highlighted some of the risks that they face. One rating firm, which produces an annual industry outlook for CCRCs, said the outlook for CCRCs in 2009 and into 2010 is negative because of their declining liquidity and other financial ratios, tightening financial markets, and difficult real estate markets. The firm also noted, however, that the negative effect of the slow real estate market and falling occupancy levels could be softened somewhat by some favorable factors, including strong demand for entrance into CCRCs, effective management practices, and favorable labor costs. To help ensure that CCRCs address the risks they face during their start-up period, seven of the eight states we reviewed used a similar application and licensing process. For example, these seven states required CCRC providers to submit detailed financial information on CCRC projects for review by regulators. Most states we reviewed also required financial feasibility studies as part of the licensing process. 
These studies included projected income and expense information, alternative pricing structures, and, for CCRCs planning to charge entrance fees, estimates based on actuarial assumptions of the CCRCs' ability to resell their units. Among the states we reviewed that license CCRCs, some required more information from CCRCs than others. For example, California, Florida, and New York required CCRCs to conduct and provide a market study as part of their applications for licensing, while others—Illinois, Pennsylvania, and Wisconsin—did not. Such studies can include descriptions of the market area and targeted consumers as well as projections of how long it might take the CCRC to reach a stable occupancy level. Pennsylvania required CCRCs to provide a market study only if one was being conducted to help obtain project financing. One state we reviewed—New York—required CCRC providers that offer Type A or B contracts to conduct an actuarial study during the licensing process to help project long-term expenses and revenues and help regulators assess financial viability over time. To help ensure that CCRCs addressed risks to their operations, states we reviewed generally required that CCRCs periodically submit financial information, but the type of information required and what they did with it varied. Of the states we reviewed that license and oversee CCRCs, most required CCRCs to submit audited financial statements each year to demonstrate their basic financial health, including balance sheet, income, and cash flow information. These statements generally reflect financial performance for the past year and provide a financial snapshot of a point in time, and are not assessments of longer-term financial trends or financial stability. To help ensure that CCRCs addressed risks to their long-term viability, a few states we reviewed required periodic actuarial studies, but the others did not.
In particular, California, New York, and Texas required periodic actuarial studies, but only for CCRCs that offered contracts which incur long-term liabilities by guaranteeing health care services over the long term. One state we reviewed—Florida—did not require periodic actuarial studies but did analyze financial trend and projection data to help track the direction of the financial condition of CCRCs over time. Florida regulators said that they maintained a spreadsheet containing financial information on CCRCs dating back over a decade and used the data to develop financial trend information on each CCRC, including trends of ratios related to CCRCs’ revenues and expenses. Florida officials said that since CCRCs generally do not go from stable 1 year to financially distressed the next, their trend data enabled them to identify early on CCRCs that might be in trouble. According to industry participants, actuarial studies can help in quantifying long-term liabilities and planning for ways to meet them. For example, some said that the studies can provide CCRC management with the information needed to make appropriate plans to meet future liabilities and contractual obligations and to set appropriate prices for short- and long-term housing and care options. In addition, some noted that actuarial studies can help regulators identify potential threats to CCRCs’ long-term viability. For example, New York officials noted that requiring an actuarial study from CCRCs every 3 years provided 10-year cash flow projections and CCRC information on actuarial assets and liabilities that were critical to understanding long-term viability. According to industry participants, only an actuarial study incorporates mortality, morbidity, and other information unique to a CCRC to help it anticipate and make plans to address risks to its long-term viability, such as lower-than-expected occupancy levels and higher-than-expected costs. 
Without actuarial studies, they said, a CCRC may appear financially stable in the short term yet still face threats to its long-term viability. To help ensure that CCRCs have funds available to pay for expenses such as debt service and operations, most of the states we reviewed also required CCRC providers to maintain some minimum level of financial reserves. According to state regulators, the primary purpose of reserve requirements is to ensure enough time for a financially distressed CCRC to reorganize or restructure financing while keeping the CCRC operational for its residents. For example, these reserves could be used to help make debt service principal and interest payments, pay for operating expenses, or assist with difficult economic times or other types of contingencies. Reserve requirements in the states we reviewed were typically expressed in terms of total debt service payments for a time period ranging from 6 months in Illinois to 1 year in states such as California, Florida, New York, Pennsylvania, and Texas. Some states also required a reserve for operating costs that ranged from 2½ months to 1 year. New York, by comparison, required debt service and operating cost reserves along with an additional reserve for CCRC facility repairs and replacement. One state—Wisconsin—did not have reserve requirements. Wisconsin state officials said that their statutory authority generally focused on the content of CCRC resident contracts. While these reserve requirements can provide a CCRC with enough time to work to improve financial conditions, several industry participants said that reserves are not intended to ensure viability over the long term. In addition, one industry official said that CCRCs experiencing financial difficulties are often purchased by other CCRCs. Finally, though most states required CCRCs to submit financial information, not all states we reviewed conducted financial examinations. 
According to regulatory officials, California, Florida, Illinois, New York, Pennsylvania, Texas, and Wisconsin all had the regulatory authority to financially examine CCRCs to assess financial condition or viability, but only Florida, New York, and Pennsylvania had conducted examinations. Some states also said that they maintained ongoing communication with CCRC management, particularly when regulators had any questions or needed clarification on financial documents under review. These state regulators said that the informal communication channels helped them to understand CCRC operations better than they would if they relied on periodically reported information alone. While we did not survey all 50 states as part of our review, according to one industry study, 38 states have some level of regulation specifically addressing CCRCs, while 12 states plus the District of Columbia do not. Among the 38 states that have CCRC-specific regulation, CCRCs are overseen by a variety of state departments. Some states oversee CCRCs through departments that concentrate on insurance, financial services, or banking. Other states regulate CCRCs through departments of social services, aging or elder services, or community affairs. Figure 2 provides information as of 2009 on the states that specifically regulate CCRCs, the type of department with oversight responsibility, and the number of CCRCs in each state. In addition, all nursing homes—including those that are part of a CCRC—are subject to federal oversight if they participate in Medicare or Medicaid programs. Because some states do not appear to have CCRC-specific regulations, an entity in one state might be licensed and regulated as a CCRC while a similar entity in another state may not. 
While we did not review laws and regulations in the states that did not appear to have specific CCRC regulations, to the extent that states do not license CCRCs and oversee their contracts, residents in those states may not receive the same protections as CCRC residents in states with such regulations. One of the eight states we reviewed—Ohio—did not specifically license or regulate CCRCs. However, an industry official from Ohio said the separate components of CCRCs operating within that state are generally regulated as if they were stand-alone entities. For example, Ohio’s Department of Health regulates assisted living and nursing home facilities. In prior work that also looked at the regulation of financial contracts across states, we have pointed out the importance of ensuring that consumers entering similar contracts receive similar regulatory protections across states. That work, which was designed to provide insights for the development of a federal financial services regulatory framework, also highlighted the importance of, among other things, providing consistent consumer protections in similar situations and ensuring consumers receive useful information and disclosures. In a recent report looking at regulation of the insurance industry, a function carried out by the states, we pointed out the importance of state regulation supporting the goals of this framework. When CCRCs obtain financing through debt instruments such as loans or bonds, creditors and bondholders often impose financial requirements and standards that are designed to ensure that CCRCs can repay the borrowed funds. For example, state regulators and industry participants said states and lenders require CCRCs to maintain levels of reserves that are intended to give the facilities enough time to meet financial challenges such as refinancing or restructuring debt. 
According to regulators and industry officials, lender and bondholder reserve requirements generally exceed those of state regulators. As noted earlier, most states we reviewed have reserve requirements that focus on a short period such as 6 months or a year. But a CCRC provider noted that lender and bondholder requirements are generally more stringent and may require reserve levels twice as high. In addition, bondholders may conduct analyses that appear to go beyond those used by states. For example, according to one company that facilitates financing for CCRCs, bondholders might require quarterly financial statements as well as annual statements. In addition, some nonprofit CCRCs that obtain state-based financing choose to be assessed by rating firms to help determine their ability to repay long-term debt. We reviewed one rating firm’s guidelines, which contain many quantitative and qualitative variables to assess CCRCs’ credit quality and financial solvency. The guidelines include financial ratio analysis, trend analysis of financial ratios, review of cash flow statements, and the use of recent actuarial studies for CCRCs offering Type A contracts as well as certain qualitative factors—such as strength of management and governance—to make assessments about long-term viability. Officials from the rating firm noted that their metrics were more focused on CCRCs’ ability to pay on their bond obligations over the long term. Some CCRCs may also choose to become accredited by an independent organization. As of April 2010, 300 CCRCs had become accredited by the Continuing Care Accreditation Commission (CCAC), according to a commission official. Accreditation involves an initial review that assesses CCRCs on an extensive set of standards. 
For example, the financial aspects of the accreditation process include analyses of many financial ratios, including profitability, liquidity, and capital structure, to assess a CCRC’s financial solvency, identify trends, and compare them to industry benchmarks. While accreditation standards do not require periodic actuarial studies, according to CCAC officials, CCRCs are expected to use actuarial and other information to appropriately set their fees. Two CCRC providers and accreditation officials suggested that CCAC’s standards represent best practices and guidelines for CCRCs and help to assess short- and long-term financial stability. State regulators from the eight states we reviewed generally reported that their regulations and regulatory efforts were adequate to properly oversee the financial condition of CCRCs. Some suggested that the small number of CCRCs that were financially distressed, insolvent, or had filed for bankruptcy pointed to the adequacy of state regulatory oversight. In addition, officials from one state noted that they periodically review audited financial statements and other required information, and have the authority to do on-site inspections of CCRCs’ books and records. However, they noted that audited financial statements generally do not contain information that would cause further review through inspection. One state agency had broader statutory authority, but an official there said that financially regulating CCRCs was not its central mission. Another state official commented that the agency lacked the staffing resources to do more than review audited financial statements. Officials from one residents’ association we spoke to expressed concerns about the overall financial condition of CCRCs and how it affects their housing and care, while another believed regulatory requirements were generally adequate. 
Residents’ association officials who expressed concerns said regulators needed to provide more overall financial oversight to compensate for the short-term focus that most CCRCs have on their financial solvency. They said that most CCRCs tended to emphasize the availability of liquid assets to cover operating costs such as debt servicing as the most significant indicator of financial health. The officials noted that this approach emphasized short-term liquidity and current asset and liability information and did not sufficiently consider long-term liquidity, liabilities, capital planning, and budgeting. Another state residents’ association official provided a different view and said that their state statute established strict financial requirements that helped discourage speculative CCRC operators from entering the market and encouraged long-term stability in the state’s CCRC market. CCRC providers did not convey strong positive or negative views about the strength or effectiveness of CCRC regulation but did provide various insights. One CCRC provider said that the extent and effectiveness of regulators’ financial oversight of CCRCs varied from state to state but noted that for oversight to be effective, states would need specific expertise. The provider also felt that state agencies that had devoted few resources to CCRC oversight might lack the requisite expertise. Another CCRC provider said its state regulator required each provider to annually submit a report containing a number of financial indicators and expressed hope that the regulator would use the data to create a database to monitor financial trends. The provider said that the statutes were adequate, noting that few CCRCs had failed in their state. By contrast, actuaries we spoke with said that, overall, only a few states nationwide were appropriately using actuarial studies to assess CCRC providers and that many states were using very little actuarial information for financial oversight. 
Actuaries said this situation reflected the wide variety of state laws and regulations on CCRCs and noted that states that did not require actuarial studies could have a difficult time assessing the adequacy of CCRCs’ short- and long-term pricing structures and long-term financial position. Although CCRCs offer older Americans the benefit of long-term residence and care in a single community, residents face a number of financial risks in the course of their relationship with their CCRC. For example, residents could lose the refundable portion of their entrance fees—which may amount to hundreds of thousands of dollars or more—if a CCRC encountered financial difficulties. According to state officials in two states and a CCRC expert, residents are at a disadvantage because any claim they have on a CCRC that is forced into bankruptcy is subordinate to the claims of secured creditors, such as tax-exempt bondholders and mortgage lenders. As a result, residents are grouped with all other unsecured creditors, which generally include everyone who does business with the CCRC, for recouping any financial losses in the case of CCRC financial distress. We identified no national data that would reflect the incidence of such losses, and several state officials believed that they are rare. For example, a California official told us that there had been at least two situations in the 1990s in which California residents had nearly lost their entrance fees but that these situations had been resolved in the residents’ favor. However, Pennsylvania officials told us about a financially insolvent CCRC in Pennsylvania whose residents lost the refundable portion of their entrance fees in 2009 when the facility was sold to a new operator. According to the officials, the CCRC became financially distressed and filed for bankruptcy after it lost its tax-exempt status and became liable for substantial state and local taxes. 
As part of the negotiations to fulfill residents’ contracts and maintain services under the new owner, residents relinquished the refundable portion of their entrance fees. The state officials noted that this concession had limited residents’ ability to move to another CCRC, since they would no longer receive a portion of their entrance fee to pay the entrance fee at the new facility. In addition, residents’ heirs were deprived of the refundable portion of the entrance fee. Residents can also face greater-than-expected increases in monthly and other fees that can erode their existing assets or make the CCRC unaffordable to them. Officials of CCRCs, an expert, and resident advocates told us that CCRC residents were at risk of having to pay monthly fees that rise beyond their ability to pay. According to some state and CCRC officials we contacted, CCRCs in financial distress may need to increase monthly fees beyond the typical yearly increase outlined in the contract. Such increases can occur for a number of reasons—for example, to continue to operate when occupancy rates drop, to make necessary or deferred physical improvements, to cover unplanned increases in operational expenses such as rising labor costs, or to keep the facility competitive in order to attract new residents. Residents may be living on a fixed income and may not be able to afford these increases, especially over an extended period. CCRC providers in Florida and Wisconsin said that they had had residents who exhausted their assets earlier than planned because of monthly fee increases. According to CCRC operators, residents are not generally at risk of being required to leave a CCRC when they exhaust their assets but instead use the refundable portion of their entrance fee, if there is one, to cover monthly costs. When these funds are gone, the CCRC uses charitable funds, voluntarily contributed by other CCRC residents, to support the residents. 
CCRC residents also face the risk of losing their residence and familiar surroundings in the event of a CCRC closure. According to CCRC and elder care experts, closures occur for a number of reasons, including bankruptcy or an operator’s decision to consolidate multiple CCRCs and close less profitable locations. Although state officials and other CCRC experts indicated that such events are rare, they have happened. For example, a residents’ advocate and state regulators told us that in 2007, a CCRC in California that had lost $11 million over 10 years closed due to consistently low occupancy rates. Several residents, dissatisfied with the CCRC’s handling of their contracts, resisted the proposed transfer to an alternate facility and filed a lawsuit against the facility. Ultimately, they were removed from their residence when the CCRC closed. According to CCRC and elder care experts, residents who must move when their CCRC closes face the risk of trauma during and after the transfer to a new CCRC facility. One resident advocacy group told us that a forced move can be very disruptive to members of a CCRC population, in some cases with consequences for their physical and emotional well-being. Residents may not be satisfied initially—or over the long term—with the CCRC into which they have moved and may have limited financial and other recourse. For example, dissatisfied residents may have limited ability to move out. According to an expert on CCRCs, some residents may experience “buyer’s remorse” after entering a CCRC if the community, services, or other aspects of the CCRC do not match their initial perceptions. Resident advocates told us that residents were often focused on certain elements of care and housing, such as amenities and culture, when choosing a CCRC and might not, for example, pay enough attention to financial information that could affect them. 
Residents who wish to move, for instance, may find that the contractually designated rescission period has ended and that moving will result in significant fines or reductions to the refundable portion of their entrance fee. These financial losses can limit their choice of other long-term care options that require a similar investment. Residents also face the risk of being transferred involuntarily from one level of care to another or of not being able to obtain on-site assisted living or nursing care when needed. Policies regarding admission and discharge from different levels of care can be subject to state law, but these decisions can be points of contention as well. One 2009 study states that relocation within a CCRC and between levels of health care is one of the most stressful events older adults face because it threatens their autonomy—that is, their ability to make decisions for themselves. Individuals representing various parts of the CCRC community told us that the transfer from one level of care to another is often regulated by state law and that, while residents may disagree with the decision to transfer, the CCRC, in some cases, must move them over their objections. CCRC residents generally enjoy continuous residency in the same community regardless of the level of care. However, state regulators and resident advocates told us that while many CCRCs without space in assisted living or skilled nursing guarantee space to residents in a nearby facility for no additional cost, residents can face additional stress due to the transfer outside of their contracted community. Residents’ dissatisfaction with CCRC management, policies, or services can grow out of a lack of full understanding of contracts and related disclosure documents or may result from ambiguities in the contract, according to representatives of CCRC management and resident organizations. 
Although state officials told us that many CCRC residents are highly affluent and educated consumers, others noted that some consumers do not understand the contractual provisions or disclosures. Further, experts and resident advocacy groups said that the contracts are very lengthy and detailed, containing terms that are difficult to understand and potential ambiguities, and they noted that some residents might not fully understand their rights and responsibilities or the obligations of the CCRC. Finally, a statewide residents’ association in Florida noted that some residents have become unhappy with changes, made through the residents’ handbook, to services or policies they believed were contractually guaranteed. CCRC contracts and the residents’ handbook are different documents, and some residents do not fully appreciate the difference until an issue arises. Further, some CCRCs may impose additional fees during times of financial hardship. According to Florida CCRC operators, for example, CCRCs may impose fees on services that were previously free, such as transportation to activities in the local community. According to a CCRC industry study, of the 38 states that have some level of regulation specifically addressing CCRCs, 34 states collect and review the standard form contract that the CCRC enters into with residents. Based on our analysis of CCRC industry data, about four out of every five CCRCs are located in states that collect and review these contracts. The industry study also indicates that, of the 38 states with CCRC-specific licensure laws, 30 require that CCRC contracts include a provision that confers on residents a “cooling off” period in which the resident has the right to cancel a contract and receive a full refund of the entrance fees, less certain costs. 
The prescribed periods during which such cancellation rights may be exercised range from prior to occupancy to as long as 1 year after occupancy, and they allow residents to cancel the contract without penalty or forfeiture of previously paid funds. Of the eight states we reviewed, seven require that CCRC license applicants, as part of the licensure process, submit a copy of the contract form to be entered into with residents. In some of those states the contract form must be approved by the state. A few of the states we reviewed required that the contract be legible or written in clear and understandable language. Regulators from New York, Pennsylvania, and Wisconsin said that they review the contract for understandable language. Seven of the state laws we reviewed also require CCRC contracts to provide for a minimum time period in which a resident has the right to cancel the CCRC contract without forfeiting their paid entrance fees. Such cancellation periods vary across these seven states from 7 days after signing the contract to 90 days after occupancy. States we reviewed varied in how they collected and reviewed the contract. For example, officials in Wisconsin told us that they played an active role in ensuring that the contract contained the items required by law and met readability criteria. In some states, such as Pennsylvania, staff uses checklists or other tools to ensure that the content meets state requirements and readability standards. Officials in Wisconsin told us that contract reviews there were less structured and that staff generally used their own judgment to decide whether contracts were deceptive, incomplete, or obscure. States can also levy significant penalties if they find that a CCRC uses a contract that has not been reviewed and approved by the state. 
For example, California officials told us that if they found that a resident had an unapproved contract, the provider would be required to return all entrance and monthly fees (in total, including the costs incurred for services) to the resident. The state can also revoke the CCRC’s certificate of authority, rendering the facility unable to accept entrance fees or offer new contracts. Some states directly protect the financial interests of residents by (1) establishing requirements for fees and deposits to be escrowed, (2) addressing criteria for monthly fee increases, or (3) placing liens on CCRC assets on behalf of residents or conferring a preferred status on resident claims on such assets in the event of liquidation. As table 2 shows, escrow requirements varied among the eight states we reviewed but in general mandated setting aside some portion of the down payment or entrance fee for all units in a CCRC. Escrow requirements are aimed at ensuring the stability of a CCRC during start-up and construction and its ability to provide the services set out in the contract with residents. Six of the states we reviewed required that CCRCs escrow some portion of the consumer deposits or entrance fees they received, and such funds are not released to the CCRC until certain benchmarks are met, such as a certain percentage of construction completed or long-term financing committed. Some of the states we reviewed addressed increases in CCRCs’ monthly fees or required CCRCs to justify increases to residents. As table 2 shows, Florida requires CCRC providers that raise monthly maintenance fees above the consumer price index to provide an explanation for the increase to CCRC residents. 
In California, regulators address fee increases by requiring CCRCs to include in every continuing care contract a provision that states that changes in monthly care fees shall be based on projected costs, prior year per capita costs, and economic indicators. New York law provides that monthly fee increases beyond the previously approved rating methodology must again be approved by the Superintendent of Insurance. According to the industry study, 12 out of 38 states that license CCRCs have the authority to place a lien or another form of protection, such as a surety bond or preferred claim, to ensure that residents have some financial recourse if a CCRC enters bankruptcy. Of the eight states that we reviewed in more detail, the regulators of five indicated that they place a lien for the benefit of the residents, or that the residents have a preferred claim on the assets of the CCRC facility in the event of liquidation. In Texas, for example, a lien attaches to facilities and assets of the CCRC provider when a resident moves into a facility. In Pennsylvania, the regulating department has the option of filing a lien on property or assets of a provider or facility to secure the obligations under CCRC contracts. According to one expert and some regulators, however, preferred claims and liens offer limited protection, as such claims are generally subordinate to those of all other secured creditors, such as bondholders and commercial lenders. Further, some of the states we reviewed required CCRCs to communicate with regulators and residents before a potential closure in order to reduce the financial and other impacts on residents. In California, CCRCs that are slated to close must submit plans to regulators that generally address refunds and include a time frame for transferring displaced residents to other facilities. 
In Florida, if a CCRC ceases to operate due to liquidation or pending liquidation, regulators use the unencumbered assets of the CCRC to provide relocation and other assistance to displaced residents. States may also require that CCRCs disclose information pertaining to the financial condition of the CCRC. According to the regulatory history and literature we reviewed, requiring the disclosure of information about the past, present, and projected future financial conditions of CCRCs allows current and prospective residents to make informed decisions before entering a facility. Among states we reviewed that had such a requirement, we found that the format, extent, detail, and timing of these disclosures varied considerably. For example, Illinois state law simply requires that a CCRC provide residents with a statement that reflects the provider’s financial condition and that, at a minimum, includes disclosure of short-term assets and liabilities. On the other hand, the Florida statute requires CCRCs to file an annual report in such form as the regulating entity prescribes, and the statement must include, at a minimum, an audited balance sheet, a statement of income and expenses, and a statement of changes in cash flow, as well as a list of reserve assets. The extent of additional disclosure requirements also varied across the states we reviewed. As table 3 indicates, disclosures can include information with significant financial implications to residents, such as fee schedules, a history of fee increases, refund policies, and the status of residents’ claim on the assets and facility of a CCRC in case of bankruptcy or insolvency. For example, California requires CCRCs to provide residents with a history of fee increases over the past 5 years. California, Florida, and New York require that residents receive advance notice of any increases or changes to monthly fees. 
California and Wisconsin require CCRCs to disclose to residents that any claims they have against the CCRC in the event of its liquidation may be subordinate to secured creditors, such as mortgage lenders. Statutory provisions regarding the delivery and timing of disclosures to prospective residents also varied among the states we reviewed. For example, while the states we reviewed required providers to disclose financial information to prospective residents prior to signing the CCRC contract, five states we reviewed also required that such information be subsequently disclosed periodically to residents. Exactly where and how the information must be disclosed can vary as well. For example, some states require that financial information be posted in public areas of the CCRC, others require providers to convene periodic meetings with residents to discuss the financial condition of the facility, and still others require that financial information be made available to residents upon request. A New York state official said the state posts the results of any CCRC examinations on a Web site so that consumers can access the information and compare results across CCRCs. Some of the states we reviewed performed on-site audits and examinations of CCRCs on a periodic basis to help ensure consumer protections, including the disclosure of important financial information. The states we reviewed generally have discretionary authority to conduct on-site audits or examinations, but some are required to conduct periodic audits or examinations. For example, the Florida regulatory authority is required to conduct on-site examinations at least once every 3 years and may visit more frequently if regulators receive complaints from residents. Such on-site exams may include inspections of financial information, contracts, and disclosures and conversations with staff, management, and residents. Other states said that they had the authority to conduct on-site investigations but had not done so. 
For example, regulators in Texas said that they have not yet faced an issue with a CCRC that would compel them to conduct an examination or investigation, but historically have exercised other regulatory authority over CCRCs for financial oversight. Regulatory officials told us that the state had relied on documents submitted by CCRCs and had called CCRC management on an informal basis to obtain additional information or clarification when necessary. Other requirements mandate disclosure of policies that may have important implications for the length and quality of residents’ stay at their CCRC. Some states we reviewed required that CCRCs explicitly disclose policies regarding (1) the conditions under which a resident could remain in the event the resident experiences financial difficulties, and (2) conditions under which residents would be required to move to a higher level of care. For example, Pennsylvania requires that each CCRC contract describe the circumstances under which a resident may remain at the facility in the event the resident has financial difficulties. California specifically mandates that CCRCs offering life care contracts subsidize residents who are unable to pay their monthly or other fees, provided the financial need of the resident does not arise from the resident’s own action to divest his or her assets. Seven of the states we reviewed also have specific, nonfinancial provisions that must be contained in the residential contract or disclosure statement, but these provisions varied, as shown in table 4. For instance, some states not only require disclosure of certain policies, but specifically prescribe minimum procedures that CCRCs must follow, while other states require that certain policies be disclosed to residents but do not prescribe the substance of those policies. 
For example, in addition to requiring that the resident contract describe the procedures and conditions under which a resident may be transferred from a designated living unit, the applicable California statute prescribes minimum transfer procedures. These policies must be disclosed at the time that the contract is signed in an effort to ensure that residents understand how they will move through the continuum of care. Florida and New York also require that residents be advised of policies for transferring residents among the levels of care but do not specifically set those policies. According to an expert, such policies have been a point of friction between residents and CCRC management. As table 4 indicates, some of the states we reviewed did not have certain of these disclosure requirements. Some state regulations are aimed at ensuring that residents can communicate their concerns to management and receive ongoing financial and nonfinancial information concerning a CCRC by forming residents’ councils and creating a residents’ bill of rights. Six of the states that we reviewed required that residents of a CCRC be allowed and encouraged to form groups in order to communicate with management, including Ohio, which has no other CCRC-specific law. CCRC management coordinates with representatives from the resident groups to communicate information on the facility’s financial condition, fee increases, policy changes, and other issues. In Florida and California, for example, the resident councils are the designated recipients of mandated disclosures such as reports on the CCRC’s financial condition and fee structure. Two states we reviewed prescribed a statutory residents’ bill of rights and required CCRCs to provide a copy of such rights to residents prior to their occupancy. Finally, some of the state regulators we interviewed indicated that they require CCRCs to submit marketing and advertising materials for approval. 
One regulator we spoke with commented that claims or incidents of false advertising were rare to nonexistent, and residents had not highlighted this issue as a major concern. Based on our interviews with state officials, we found no assessments of the effectiveness of state regulations in protecting consumers at either the national level or the state level, and state officials, resident advocates, and experts expressed a wide range of opinions on the adequacy of state law to protect consumers. State officials and others noted the importance of certain provisions of CCRC law. For example, regulatory officials in Florida said that requiring CCRCs to provide financial information publicly through the state was necessary because, without such information, residents would be unable to compare in-state CCRCs in a uniform manner and regulators would be unable to ensure that residents had enough information to make an informed choice of facilities. Members of a national association of CCRC residents expressed concern that some state laws might not address the terms of the residency contract, including the refundable portion of the entrance fee and residents’ rights within the contract, such as the ability to renegotiate fees in the event of a CCRC sale due to financial insolvency. Additionally, members of this association expressed concern that CCRCs in financial difficulties might not notify residents if states did not require CCRCs to provide disclosures regarding their financial condition. Seven of the eight states we contacted had a CCRC law that required such disclosure, but one—Ohio—did not. Other experts and resident advocates we interviewed pointed out possible further improvements to state laws. For example, a law professor with expertise on the Pennsylvania law told us that states should take a greater role in facilitating the ability of prospective residents to access information about CCRCs for purposes of making meaningful comparisons. 
For example, states could publish information about the financial and operating conditions of CCRCs in a statewide database so that CCRC residents could make comparisons across the statewide industry. The law professor suggested that states publish information about (1) the numbers and types of complaints about CCRCs, (2) comparative information on entrance fees and monthly fees, and (3) instances of the state requiring a CCRC provider to give revised financial projections. Similarly, representatives of two statewide residents’ groups said that residents would like to see states require that CCRCs provide disclosures on their financial condition along with an extensive, understandable explanation of the disclosure. Finally, although state laws differ significantly in breadth and detail, it is not clear that CCRC residents in states with less stringent requirements are necessarily at greater risk than residents in heavily regulated states. In one state, regulators told us that despite extensive CCRC regulation, a CCRC bankruptcy cost residents the refundable portion of their entrance fees. In another state, regulators said that, while the CCRC law is not as extensive as in other states, they are not aware of any CCRCs that have faced bankruptcies or failures. In part, protection may come from the CCRCs themselves. In our contacts with CCRCs, we found that some took steps that went well beyond what state law required. The Illinois statute, for instance, requires comparatively fewer disclosures than those of other states, such as California and Florida, and, according to an Illinois regulatory official, does not mandate that CCRCs provide financial information on an ongoing basis. Nonetheless, officials from CCRCs in Wisconsin and Illinois told us that they provided additional disclosures beyond what is required by state law. 
Representatives from one CCRC told us that they offered prospective residents a lengthy “discovery phase” so that residents were not unpleasantly surprised after signing the contract or moving in. In this discovery phase, prospective residents discussed their expectations with staff, had a meal at the CCRC, and visited with current residents and staff. The CCRC had also established a residents’ finance committee that received ongoing budget and other financial information and gave residents a vehicle for communicating with management. Finally, the CCRC provided a quarterly operating budget to each resident and made other financial information available upon request. CCRC officials in several other states, including California and Pennsylvania, told us they exceed statutory requirements. Nonetheless, because we visited only seven CCRCs in the course of our work, we do not know how widespread such actions are. CCRCs can help ensure that older Americans have access to housing and health care in a single community as they age. However, entering a CCRC often means committing a large portion of one’s assets, and while CCRC bankruptcies have been rare and few residents have lost their housing or their entrance fees, a CCRC failure could put residents in a difficult financial situation. As a result, residents have a strong interest in fully understanding the long-term viability of their CCRC and their contract with it. However, resident contracts and CCRC finances are often complex, and prospective residents may find it challenging to evaluate the risks they face or the likelihood that a particular CCRC has done sufficient long-range financial and operational planning. Such difficulties, coupled with the stress that recent economic events have placed on CCRC finances, underscore the importance of regulators being vigilant in their efforts to monitor CCRCs’ long-term viability and protect consumers. 
CCRCs as entities are not regulated by the federal government, and, according to an industry study, 12 states do not appear to have CCRC-specific regulations. As a result, an entity that might be licensed and regulated as a CCRC in some states may not be in others, and resident contracts that might receive regulatory scrutiny in some states may not in others. In other work looking at the regulation of financial contracts across states, we have pointed out the importance of ensuring that citizens entering similar contracts receive similar regulatory protections across states. Because there is no federal regulator for CCRCs, we are not making a recommendation for specific action. However, the potential risks to residents that result from committing a considerable amount of money to a CCRC highlight the importance of states being vigilant in their efforts to help ensure that CCRC residents’ long-term interests are adequately protected. Such efforts will only become more important as the number of older Americans requiring assisted living and nursing home care increases. We provided a draft of the report to the Department of Health and Human Services and the National Association of Insurance Commissioners, but neither commented on the draft. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Chief Executive Officer of the National Association of Insurance Commissioners, the Secretary of the U.S. Department of Health and Human Services, and others. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact us at (202) 512-7022 or cackleya@gao.gov or (202) 512-5491 or bovbjergb@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To address concerns about the risks and regulation of CCRCs, we have been asked to (1) describe how CCRCs operate and what financial risks are associated with their operation and establishment, (2) describe how state laws address these risks and identify what is known about how adequately they protect CCRCs’ financial condition, (3) describe risks that CCRC residents face, and (4) describe how state laws address these risks and identify what is known about their adequacy. To describe how CCRCs are established and operated, methods CCRCs use for initial financing and ongoing operations, and what initial and ongoing risks CCRCs may experience, we interviewed CCRC providers, CCRC industry associations—including the American Association for Homes and Services for the Aging (AAHSA), American Seniors Housing Association (ASHA), National Association of Insurance Commissioners (NAIC), and National Center for Assisted Living (NCAL)—and two attorneys who specialize in housing and health care for older Americans. In addition, we met with officials from eight CCRC facilities. We selected these providers based on the providers’ geographic diversity, facility size, non-profit or for-profit status, type of contracts offered, and income or market segment served. We also met with state CCRC regulators from eight states—California, Florida, Illinois, New York, Ohio, Pennsylvania, Texas, and Wisconsin. We selected these states due to the states’ geographic diversity, CCRC population size, and type of state regulatory department with CCRC oversight responsibility. Because we judgmentally selected the states and CCRCs we reviewed, we cannot generalize the information we obtained to other states or CCRCs. 
In addition, we reviewed literature and academic articles by experts in the senior living industry. To describe what state laws exist to ensure CCRCs’ financial stability, and what is known about how adequately they protect CCRCs’ financial condition, we reviewed and analyzed, in the eight states we selected, the state CCRC laws that govern the financial aspects of CCRC licensing and periodic state oversight, and we met with selected state regulatory officials. In addition, we met with industry associations, CCRC providers, and two attorneys who specialize in housing and health care for older Americans. We also met with two actuaries, two actuarial industry associations, and members of CCRC residents’ associations that work with CCRC management on behalf of older Americans who reside in CCRCs. To describe what risks CCRC consumers face, what state laws exist to protect consumers from financial and other risks, and what is known about how adequately those laws protect consumers, we reviewed and analyzed, in the states we selected, state laws pertaining specifically to CCRCs that are designed to inform and protect consumers, and we met with selected state regulatory officials. We also reviewed summary information on laws and regulations across all states that was compiled by an industry association, as well as examples of CCRC disclosures and other information provided by CCRCs in the states we reviewed. In addition, we met with industry associations, CCRC providers, two attorneys who specialize in housing and health care for older Americans, and members of CCRC residents’ associations that work with CCRC management on behalf of older Americans who reside in CCRCs. We conducted this performance audit from June 2009 to June 2010 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Patrick Ward (Assistant Director), Clarita Mrena (Assistant Director), Joe Applebaum, Emily Chalmers, Erin Cohen, Andrew Curry, Mike Hartnett, Marc Molino, Walter Ochinko, Angela Pun, and Steve Ruszczyk made key contributions to this report.

A growing number of older Americans are choosing continuing care retirement communities (CCRC) to help ensure that their finances in retirement will cover the cost of housing and care they may require. However, recent economic conditions have placed financial stress on some CCRCs. GAO was asked to (1) describe how CCRCs operate and the risks they face, (2) describe how state laws address these risks, (3) describe risks that CCRC residents face, and (4) describe how state laws address these risks. To review these areas, GAO analyzed state statutory provisions pertaining to CCRCs with respect to financial oversight and consumer protection, met with selected state regulators, and interviewed CCRC providers, residents' associations, and consumer groups. While GAO is not recommending specific action at this time, the potential risks to CCRC residents--as well as the potential for this industry to grow--highlight the importance of states being vigilant in their efforts to help ensure adequate consumer protections for residents. GAO provided a draft copy of this report to the Department of Health and Human Services and the National Association of Insurance Commissioners for review, but neither commented on the draft. CCRCs can benefit older Americans by allowing them to move among and through independent living, assisted living, and skilled nursing care in one community. 
They offer a range of contract types and fees that are designed to provide long-term care and transfer different degrees of the risk of future cost increases from the resident to the CCRC. Developing CCRCs can be a lengthy, complex process that requires significant long-term financing and accurate revenue and cost projections. Once operational, risks to long-term viability include declining occupancy and unexpected cost increases. While few CCRCs have failed, challenging economic and real estate market conditions have negatively affected some CCRCs' occupancy and financial condition. Seven of the eight states GAO reviewed had CCRC-specific regulations, and these states varied in the extent to which they helped ensure that CCRCs addressed risks to their long-term viability. For example, while each of these states licensed CCRCs and required them to submit periodic financial information, only four either examined trended financial data or required periodic actuarial reviews. The lack of a long-term focus creates a potential mismatch with residents' concerns over their CCRCs' long-term viability. CCRC bondholders and rating agencies, which focus on long-term viability, often place requirements on CCRCs that go beyond those used by states in their licensing and oversight activities. Regulators and CCRC providers GAO spoke with generally believed that current regulations were adequate, but some consumer groups felt more comprehensive oversight was needed. While CCRCs offer long-term residence and care in the same community, residents can still face considerable risk. For example, CCRC financial difficulties can lead to unexpected increases in residents' monthly fees. And while CCRC bankruptcies or closures have been relatively rare, and residents have generally not been forced to leave in such cases, should a CCRC failure occur, it could cause residents to lose all or part of their entrance fee. 
Residents can also become dissatisfied if CCRC policies or operations fall short of residents' expectations or there is a change in arrangements thought to be contractually guaranteed, such as charging residents for services that were previously free. Most of the states GAO reviewed take steps to protect the interests of CCRC residents, such as requiring the escrow of entrance fees and mandating certain disclosures. For example, a number require contracts to be readable, but not all review the content of contracts even though some industry participants questioned residents' ability to fully understand them. Also, not all require disclosure of policies likely to have a significant impact on residents' satisfaction, such as policies for moving between levels of care. According to an industry study, 12 states do not have CCRC-specific regulations, meaning an entity in 1 state may be subject to such regulations while a similar entity in another state may not, and consumers in some states may not receive the same protections as those in others. In contrast, some CCRCs voluntarily exceed disclosures and protections required by state regulations.
Illegal immigration has long been an important issue in California, which historically has been estimated to be the state of residence for nearly half of this country’s illegal aliens. Illegal aliens are a concern not only because they are breaking immigration laws, but also because their presence affects a wide range of issues of public concern. These issues include the government costs of providing benefits and services to illegal aliens and the impact illegal aliens’ presence has on the employment of U.S. workers. In an effort to reduce the size of the nation’s illegal alien population, estimated at 3 million to 5 million in 1986, the Congress enacted the Immigration Reform and Control Act of 1986 (IRCA). IRCA reduced the size of the illegal alien population by granting legal status to certain aliens already in the country and attempted to deter the inflow of illegal aliens by prohibiting employers from hiring any alien not authorized to work. Despite a brief drop in illegal entries to the United States after IRCA was enacted, the size of the illegal alien population is now estimated to have exceeded the lower bound of the pre-IRCA estimate. INS and the Bureau of the Census estimated the population of illegal aliens ranged from 3.4 million to 3.8 million in 1992. At the same time, governments at all levels began experiencing fiscal crises that heightened public concerns about the costs of providing benefits and services to illegal aliens. Illegal aliens are not eligible for most federal benefit programs, including Supplemental Security Income, Aid to Families With Dependent Children (AFDC), food stamps, unemployment compensation, and financial assistance for higher education. However, they may receive certain benefits that do not require legal immigration status as a condition of eligibility, such as Head Start and the Special Supplemental Food Program for Women, Infants, and Children. 
Furthermore, illegal aliens may apply for AFDC and food stamps on behalf of their U.S. citizen children. Though it is the child and not the parent in such cases who qualifies for the programs, benefits help support the child’s family. Education, health care, and criminal justice are the major areas in which state and local governments incur costs for illegal aliens. Regarding education, the U.S. Supreme Court has held that states are prohibited from denying equal access to public elementary and secondary schools to illegal alien children. State and local governments bear over 90 percent of the cost of elementary and secondary education. To provide for certain medical services, the Congress in 1986 revised the Social Security Act to stipulate that illegal aliens are eligible for emergency services, including childbirth, under the Medicaid program. The federal government and the state of California each pay 50 percent of the cost of these benefits for illegal aliens in California. In California and New York, illegal aliens are also eligible to receive Medicaid prenatal services. States also incur costs for incarcerating illegal alien felons in state prisons and supervising those released on parole. Section 501 of IRCA authorizes the Attorney General to reimburse states for the cost of incarcerating illegal aliens convicted of state felonies. Illegal aliens generate revenues as well as costs; these revenues offset some of the costs that governments incur. Research studies indicate that illegal aliens do pay taxes, including federal and state income taxes, Social Security taxes, and sales, gasoline, and property taxes. Researchers disagree on the amount of the revenues illegal aliens generate and the extent to which these revenues offset government costs for benefits and services. 
However, they agree that the fiscal burden for aliens overall, including illegal aliens, falls most heavily on state and, especially, on local governments and that the federal government receives a large share of the taxes paid by aliens. To examine the costs of elementary and secondary education, Medicaid, and adult incarceration associated with illegal aliens residing in California, we evaluated the reasonableness of the assumptions and methodologies underlying the cost estimates published by the state of California in its January and September 1994 studies and the Urban Institute in its Fiscal Impacts study. We also reviewed the revenue estimates for illegal aliens contained in California’s September study and the Fiscal Impacts study. (California’s January 1994 study did not include revenue estimates.) The California study included estimates for 13 types of federal, state, and local revenues; the Fiscal Impacts study’s estimates were limited to 3 types of revenues. With assistance from Urban Institute researchers, we used the Fiscal Impacts study and another study published by the Urban Institute to extrapolate estimates for the remaining 10 types of revenues. This enabled us to compare the revenue estimates in the California and Fiscal Impacts studies. (See app. I for a detailed discussion of the methodology we used to develop these additional revenue estimates.) We convened a panel of experts in May 1994 to obtain their opinions regarding the reasonableness of California’s January 1994 estimates and the underlying methodologies, and interviewed state officials and private researchers. (See app. II for a list of the researchers we consulted.) In conjunction with related work we have done for several congressional requesters on the national fiscal impact of illegal aliens, we also examined the relevant research on the costs and revenues—at all levels of government—associated with illegal aliens. 
Some of the issues raised in these studies were relevant to our review, and we have incorporated them in our analysis. Assessing California’s cost estimates was complicated by the fact that the state’s estimates are for California fiscal year 1994-95. That is, the estimates are projections of future costs and are only valid to the extent that the growth trends assumed in the projections hold true. We did not assess the validity of the growth trends. In addition, we did not independently verify California’s administrative data for Medicaid and incarceration because we had no reason to believe that the data on expenditures and number of recipients in these programs presented any special concerns about reliability. We did our work between April and September 1994 in accordance with generally accepted government auditing standards. As of September 1994, California estimated that it will spend $2.35 billion on elementary and secondary education, Medicaid, and adult incarceration for illegal aliens in fiscal year 1994-95. California officials believe that these three programs represent the state’s highest costs for illegal aliens. This estimate is $80 million lower than California’s January 1994 estimate primarily because the education estimate was reduced. In the September estimate, California reduced its projections of the numbers of illegal aliens who will receive education or Medicaid services, or be incarcerated in state prisons. At the same time, however, this new estimate added in administrative costs not previously included and for education and adult incarceration, added capital costs. The net effect of these adjustments is shown in table 1. The Urban Institute’s Fiscal Impacts study estimated costs lower than California’s estimates for all three programs (see table 1). 
This is in part because the Fiscal Impacts study estimated costs for earlier years—the education estimate was for the 1993-94 school year; Medicaid, for fiscal year 1992-93; and adult incarceration, for 1994. Other reasons for the lower estimates in the Fiscal Impacts study varied by program, as described in the following sections. The cost estimates in the California and Fiscal Impacts studies are questionable because of the limited direct data available on illegal aliens and certain assumptions made by the studies. For example, estimates of the cost of education—the single largest cost associated with illegal aliens—are based entirely on assumptions about the size and characteristics of the illegal alien population. However, by combining selected data and assumptions from both California’s September 1994 estimates and the Fiscal Impacts study, we developed adjusted estimates for education and adult incarceration that we believe are more reasonable than either study’s original estimates. We did not adjust the state’s Medicaid estimate because the necessary data are not currently available. It is important to note that none of the estimates of education or incarceration costs represents the amount that would actually be saved if California did not educate or incarcerate illegal aliens. This is because the estimates are based on mean costs: total cost divided by total number of users. Mean costs include both variable costs, which are affected by the number of individuals using the service, and fixed costs—such as certain administrative costs—which are not. The amount that would be saved if illegal aliens did not receive these services could either be less than the mean costs or greater (for example, if new schools would otherwise have to be built). The state of California now estimates that it will spend $1.5 billion to educate illegal alien children in fiscal year 1994-95. 
The Fiscal Impacts study estimated California’s education costs at $1.3 billion for school year 1993-94. The Fiscal Impacts estimate was lower not only because it covered an earlier year, but also because the study relied on a different data source to develop its per pupil cost figure. Selecting the components of each estimate that we believe are more reasonable, we adjusted California’s fiscal year 1994-95 estimate upward to $1.6 billion. The education cost estimates were derived by multiplying estimates of the following components: (1) the size of the state’s illegal alien population, (2) the percentage of this population that is of school age, (3) the percentage of school-aged illegal aliens enrolled in school, (4) the percentage of school days actually attended, and (5) the statewide average cost per pupil. The studies used an indirect method to estimate the number of illegal alien children in school because school districts do not collect information on the immigration status of students. According to California state officials, many school districts believe the U.S. Supreme Court decision, Plyler v. Doe, prohibits them from asking about immigration status. To develop each of the cost components, the state of California and Urban Institute researchers relied on research studies and published estimates. For their estimates of the illegal alien population, California’s September 1994 study and the Fiscal Impacts study used recently revised INS population estimates; the small difference between the two estimates can be explained by the different years being estimated (see table 2). For the adjusted estimate, we used California’s September estimate of 1.7 million illegal aliens because it is for the same time period (fiscal year 1994-95). The state had previously estimated its illegal alien population at 2.3 million—a figure that was probably too high. 
The basis of California’s January 1994 population figure was a 1993 Census Bureau estimate of 2.1 million illegal aliens in California; the state assumed this population would grow by 100,000 each year. This assumption was based on the Census Bureau estimate that the illegal alien population is growing nationally by 200,000 each year and that about 50 percent of illegal aliens live in California. However, researchers at the Census Bureau and INS have recently estimated that the percentage living in California may be lower, ranging from about 38 to 45 percent. Moreover, INS estimates that the size of the illegal alien population is smaller, but growing more rapidly. California’s September 1994 study and the Fiscal Impacts study both relied on an indirect method to estimate the percentage of the illegal alien population that is of school age and the percentage of school-aged illegal aliens enrolled in school. The method involves constructing a proxy population based on INS estimates of the breakdown of the illegal alien population by country of origin. The proxy population consists of people who entered the United States from countries that contribute most of the illegal alien population. The education cost estimates in the California and Fiscal Impacts studies are based on 1990 Census data on the age distribution and school enrollment of the studies’ proxy populations. However, the studies differed in their assumptions about the appropriate age range to include—the Fiscal Impacts study included illegal aliens aged 5 to 19, while California included those aged 5 to 17 in its estimate. This difference resulted in the Fiscal Impacts study estimating a higher percentage of school-aged illegal aliens, but a lower percentage enrolled in school, to adjust for the likelihood that fewer 18- and 19-year-olds attend high school (see table 2). 
For the adjusted estimate, we used the Fiscal Impacts study’s assumptions for these two components of the cost estimate because data indicate some 18- and 19-year-olds do attend high school. California’s September 1994 estimate included a component that adjusted its enrollment estimate, which was based on fall enrollment, for the percentage of school days actually attended (“average daily attendance”). This adjustment was necessary because California’s average cost per pupil is based on average daily attendance, not fall enrollment. This adjustment was not needed in the Fiscal Impacts study because its estimate of per pupil cost was based on fall enrollment. Our adjusted estimate used California’s figure for the percentage of school days actually attended (98.2) because it also used California’s figure for average cost per pupil with some adjustments (as explained in the following paragraphs). The per pupil cost figure California included in its September 1994 estimate was considerably higher than that used in its January 1994 estimate—$4,977 compared with $4,217—even though both estimates were for fiscal year 1994-95. Both figures were derived from a statewide average based on state and local public school expenditures. However, state officials told us that their September estimate included additional funding sources that are used to pay education costs, as well as some additional costs (for example, debt service costs on bonds for school facilities and certain administrative costs). The Fiscal Impacts study, in contrast, used state-specific data on current expenditures from the National Center for Education Statistics (NCES). The study used these data to develop standardized cost estimates for the seven states included in the study. 
However, while the NCES data are one possible source of education cost data, there is no agreed-upon standard on the expenditures that should be included in calculating per pupil costs, according to the authors of the Fiscal Impacts study and budget and education experts we spoke with. Using the NCES data produced a lower estimate of California’s per pupil costs ($4,199) because the data do not include the range of funding sources used in the state’s cost estimate, nor do they include capital costs such as debt service on bonds. For the adjusted estimate, we used California’s September 1994 per pupil cost figure but subtracted two questionable cost items to yield an adjusted figure of $4,830. The state had included $78 per pupil for adult education costs; state officials acknowledged that this amount should not have been included. In addition, we subtracted the interest portion of the debt service cost—$69 per pupil. Experts disagree about how to treat debt service in calculating per pupil expenditures; however, we identified OMB cost principles that may provide a standard for treating such capital costs. These cost principles establish standards for determining the allowable costs of federal grants, contracts, and other agreements administered by state and local governments. The OMB cost principles specify that depreciation is an allowable cost, but interest payments are not. Experts we spoke with suggested that statewide average cost data may not be the best measure of the costs of providing illegal alien children with a public education. They suggested that researchers should instead use estimates based on the costs incurred by districts where illegal aliens are believed to be most heavily concentrated, such as Los Angeles County. However, the Fiscal Impacts study reported, and state officials concurred, that the necessary data are not available. 
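The per pupil adjustment itself reduces to simple arithmetic, and the average-daily-attendance scaling described earlier can be sketched the same way. In the illustration below, the dollar amounts and the 98.2 percent attendance rate come from the figures cited above; the enrollment count is a hypothetical placeholder, not an estimate from either study.

```python
# Step 1: adjust California's September 1994 per pupil cost figure by
# removing the two questionable items discussed above.
september_per_pupil = 4_977   # state's fiscal year 1994-95 figure (dollars)
adult_education = 78          # state officials acknowledged this was included in error
debt_interest = 69            # interest on school bonds, disallowed under OMB cost principles
adjusted_per_pupil = september_per_pupil - adult_education - debt_interest
print(adjusted_per_pupil)     # 4830

# Step 2: because the per pupil figure is expressed per unit of average
# daily attendance, an enrollment-based count must be scaled by the share
# of school days actually attended.
estimated_enrollment = 300_000         # hypothetical enrolled pupils (placeholder)
attendance_rate = 0.982                # California's share of school days attended
average_daily_attendance = round(estimated_enrollment * attendance_rate)

estimated_cost = average_daily_attendance * adjusted_per_pupil
```

With these placeholder inputs the sketch yields a cost on the order of $1.4 billion; the point is only that each assumption (enrollment, attendance rate, per pupil cost) feeds multiplicatively into the final estimate, so disagreements about any one component propagate directly to the total.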
State officials said they did not believe more localized cost data would result in estimates significantly higher or lower than estimates based on the statewide average. On the basis of congressional action in 1986, illegal aliens are eligible for emergency Medicaid services only. In addition, some legal aliens are eligible for emergency services only. These include foreign students, temporary visitors, and aliens granted temporary protected status. California has estimated that it will spend $395 million for Medicaid benefits provided to illegal aliens during fiscal year 1994-95. The Fiscal Impacts study, while questioning the accuracy of California’s estimate, did not develop an alternative estimate because data were not available to do so. Instead, it developed a “benchmark” cost range for purposes of comparison. However, it is questionable whether this benchmark provides a good basis for comparison. We made no adjustments to the state of California’s Medicaid estimate because the data needed to correct for elements that lead to possible over- or understatement of costs are not currently available. The state’s estimate was based on administrative cost data for services provided to all individuals eligible for emergency Medicaid services only, not just illegal aliens. California’s estimate may thus include some legal aliens because, at the time this estimate was developed, agency officials were legally prevented from inquiring about the immigration status of people who applied for emergency Medicaid benefits. California state officials do not have data on the extent to which legal aliens may be receiving these limited benefits. California officials told us that their cost estimate does not include all the illegal aliens they are serving under the Medicaid program. 
They said it does not include costs for illegal aliens who (1) are tracked in other eligibility categories, such as those for pregnant women and children, or (2) provide fraudulent documents to get full Medicaid benefits. However, state officials noted that they do not have data on the costs of Medicaid services provided to these illegal aliens. The Fiscal Impacts study used Medicaid data on formerly illegal aliens who were granted legal status under IRCA as a “benchmark” against which to assess the estimates of the seven states included in the study. The legalized alien population has many of the same characteristics as the current illegal alien population and, therefore, provides a useful basis for comparison, according to this study. The estimated range that the Fiscal Impacts study used to assess California’s Medicaid estimate—$113 million to $167 million—was considerably lower than the state’s estimate for people receiving emergency services only (see table 3). Some of the difference between California’s Medicaid estimates and those in the Fiscal Impacts study may be due to California’s inclusion of certain legal aliens in its estimate. However, differences between legalized and illegal aliens’ use of Medicaid may also explain why California’s estimate was higher. For example, the Fiscal Impacts study acknowledged that illegal aliens may be more likely than legalized aliens to use emergency Medicaid services because they know their immigration status will not be questioned. In addition, California’s administrative data indicate that illegal aliens have somewhat higher average Medicaid expenditures than aliens who were granted legal status under IRCA. Furthermore, differences in demographic characteristics of the two populations suggest that they may differ in their ability to qualify for Medicaid. In sum, these considerations raise doubt about whether the Fiscal Impacts study’s benchmark cost range was based on a comparable population. 
California state officials’ inability to ask about immigration status has, they believe, hindered their ability to fully account for all illegal aliens receiving Medicaid. The state court injunction that prohibited officials from asking applicants for emergency Medicaid benefits about their immigration status was initially overturned by the California Court of Appeal. However, the injunction is currently in effect pending a decision from the California Supreme Court. State officials told us they believe that if the injunction is ultimately lifted, it would enable them to collect more accurate data on the number of illegal aliens receiving emergency Medicaid services. The state of California estimated that it will spend nearly $424 million in fiscal year 1994-95 to incarcerate illegal aliens in its prisons. In contrast, the Fiscal Impacts study estimated California’s adult incarceration costs for 1994 at about $368 million. The state’s estimate was higher primarily for two reasons—state officials estimated a higher illegal alien prison population and included debt service costs on bonds for prison facilities. We adjusted California’s estimate downward to $360 million based on what we believe are the more reasonable of the assumptions used to develop the estimates (see table 4). The Fiscal Impacts study’s estimate of the number of illegal aliens in California’s prisons is more reliable than the state’s because the study directly estimated the number of illegal aliens. INS officials assisted in this study by matching prison records against several INS databases to determine prisoners’ immigration status and by conducting follow-up interviews with a sample of prisoners whose status could not be determined through the INS database matches alone. These data on prisoners’ immigration status were developed specifically for the Fiscal Impacts study and were not available to the state of California as it prepared its estimate. 
The state’s estimate was overstated because it was based on the number of inmates with INS detainers. This category, which refers to inmates who are subject to an INS hearing and possible deportation at the completion of their prison sentences, also includes legal aliens who are deportable because of the nature of the crimes they committed. The Fiscal Impacts study concluded that the state’s estimate of California’s adult illegal alien prison population was overstated by about 10 percent. We therefore adjusted the state’s population estimate downward by 10 percent to reflect this new information. As with their education cost estimates, the state and the Fiscal Impacts study used different data sources to estimate the average cost per inmate. The Fiscal Impacts study relied on data from the 1990 Census of State Prisons and adjusted for inflation using the Consumer Price Index. The study used this data source because it provided a uniform basis for comparing the seven states’ estimates. However, the Census of State Prisons cost data, like the NCES education cost data the Fiscal Impacts study used, do not represent an agreed-upon standard for calculating the cost per inmate. Using the Census of State Prisons data and adjusting for inflation resulted in a higher estimate of per inmate cost than using the cost data from California’s Department of Corrections, as shown in table 4. For the adjusted estimate, we used the state’s September estimate of per inmate cost because it was based on more recent data than the Census of State Prisons. California’s revised adult incarceration cost estimate is nearly 13 percent higher than its previous estimate of about $376 million for fiscal year 1994-95 (see table 4). While the state slightly lowered its estimates of the illegal alien prison population and the per inmate cost, it added a new cost item—$51 million for debt service on bonds for prison facilities. 
As with the state’s education estimate, we subtracted the interest portion of this amount—$27 million—based on OMB cost principles for treating capital costs (see p. 11). As with the cost estimates, estimating the tax revenues collected from illegal aliens is difficult because of the lack of direct data on this population. Researchers must rely on indirect estimation methods that make numerous assumptions about this population. These include assumptions about income, life styles, consumption patterns, tax compliance, and population size. Differences in assumptions about these variables can generate considerable variation in estimates of revenues from illegal aliens. The September 1994 study by the state of California and the Fiscal Impacts study each developed estimates of revenues from illegal aliens in California. However, variations in the years of the estimates and the types of revenues estimated complicate comparison of the studies. To facilitate comparison, we used the Fiscal Impacts study and another study by an Urban Institute researcher to extrapolate estimates of selected revenues not included in the Fiscal Impacts study. We found that although the extrapolated revenue estimates fell within the range estimated by California, the estimates still varied considerably. This variation reflects differences in the studies’ methodologies and assumptions. The California study based its estimates on projections from studies that estimated revenues from illegal aliens in various locations: (1) Los Angeles County, (2) California, (3) Texas, and (4) the United States. The Fiscal Impacts study used revenue estimates from a single study the researchers regarded as the best available (a study of Los Angeles County) and adjusted these estimates to project them to the state of California. 
The limited data available to support the assumptions of the California study and the Fiscal Impacts study precluded us from drawing a conclusion about which, if either, of these studies provides a reasonable estimate of revenues from illegal aliens in California. The January 1994 cost estimates from California did not include estimates of any revenues from illegal aliens in California; hence, they provided an incomplete picture of the fiscal impact of this population. In contrast, the September 1994 California study included an estimate of eight types of state and local revenues for fiscal year 1994-95. The study provided an estimate ranging from a low of $528 million to a high of $1.4 billion, with a median estimate of $878 million. This estimate was based on projections by the state of several studies on the fiscal impact of illegal aliens in different geographical areas. The high estimate incorporated parameters from these studies that, according to the state, most magnify the contributions of illegal aliens; the low estimate incorporated parameters that most deflate their contributions. The Fiscal Impacts study estimated that illegal aliens in California paid $732 million in 1992 in three types of taxes: state income taxes, state sales taxes, and state and local property taxes. However, the Fiscal Impacts study did not develop estimates of the five other types of state and local revenues included in the state’s study. To compare the two sets of estimates, we developed estimates of these five types of revenues using the methodology from the Fiscal Impacts study and a national study by an Urban Institute researcher. (App. I describes our methodology.) Adding our extrapolated estimate for these five types of revenues to the $732 million estimate for the three types of revenues produced a total state and local tax revenue estimate of $1.1 billion for 1992. 
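The extrapolation relies on the ratio-generalization logic summarized in appendix I: take a per capita tax estimate for illegal aliens in Los Angeles County, adjust it by the ratio of tax levels between California as a whole and the county, and multiply by the estimated statewide illegal alien population. The sketch below is a simplified illustration of that logic; the studies apply additional adjustment ratios, and every number here is a hypothetical placeholder rather than a figure from either study.

```python
def ratio_generalization(per_capita_la_illegal, per_capita_ca_legal,
                         per_capita_la_legal, ca_illegal_population):
    """Project an LA County per capita tax estimate to statewide revenue.

    The ratio of legal residents' per capita payments (California to
    LA County) adjusts for differences in average tax levels.
    """
    adjustment = per_capita_ca_legal / per_capita_la_legal
    return per_capita_la_illegal * adjustment * ca_illegal_population

# Hypothetical inputs: $50 per capita in LA County, legal residents paying
# $100 per capita statewide versus $50 in the county, and a 1.5 million
# person statewide illegal alien population.
print(ratio_generalization(50.0, 100.0, 50.0, 1_500_000))  # 150000000.0
```

The method's sensitivity is easy to see from the formula: the final estimate scales one-for-one with the assumed population size and with each adjustment ratio, which is why differing assumptions produce such divergent revenue totals.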
The California study and the Fiscal Impacts study reflect differing views about the magnitude of revenues generated by illegal aliens in California. If the estimate extrapolated from Urban Institute studies were updated to fiscal year 1994-95, it would probably be at the high end of the range estimated by California. In contrast, the California study maintained that its median estimate of state revenues probably overstated revenues and should be treated as an upper bound. (In California’s study, state revenues constituted over 75 percent of total estimated revenues from state and local sources.) The September 1994 California study included an estimate for fiscal year 1994-95 of five types of federal revenues from illegal aliens in California. The study provided an estimate ranging from a low of $542 million to a high of $2 billion, with a median estimate of $1.3 billion. The Fiscal Impacts study did not estimate any federal revenues from illegal aliens in California. However, we used the study’s revenue estimation assumptions for California, along with a national study by an Urban Institute researcher, to extrapolate estimates of the five types of federal revenues estimated by California. (App. I describes our methodology.) This produced a federal revenue estimate of $1.3 billion for 1992. If this estimate were updated to fiscal year 1994-95, it would probably be between the California study’s median and high estimates. However, the California study maintained that both the high and median estimates probably overstated the amount of federal revenues generated by illegal aliens in California. As a result, there is no agreement about the magnitude of federal revenues generated by this population. California’s September 1994 study estimated not only individual costs and revenues but also the state’s net cost (costs minus revenues) for illegal aliens. 
In contrast, the Fiscal Impacts study did not estimate net costs for illegal aliens in California because it examined only selected costs and revenues. We identified one other study that attempted to provide a comprehensive accounting of the costs and revenues for illegal aliens in California. This study, by Donald Huddle, included an estimate of the net cost for this population in 1992. However, for several reasons, we were unable to draw any conclusion about California’s net cost for illegal aliens. In the case of the California study, we were unable to assess the reasonableness of its net cost estimate because data limitations precluded us from assessing California’s revenue estimates. With regard to the study by Huddle, we could not extract an estimate of the net cost to the state of California because the study’s cost estimates did not provide a breakdown of federal, state, and local costs. Consequently, we were unable to compare the study’s estimates with those in California’s study. Recognizing the problems associated with estimating the fiscal impact of illegal aliens, OMB and the Department of Justice requested the Fiscal Impacts study to help the federal government assess states’ requests for reimbursement of illegal alien costs. The study represents an initial effort to standardize and improve states’ methodologies for estimating selected costs and revenues. However, because the study was released recently, it is too early to know whether, and to what extent, California and the other six states in the study will agree with and accept the study’s efforts to standardize and improve the states’ methodologies. OMB officials have not yet indicated how they will use the study in assessing states’ requests for federal reimbursement of illegal alien costs. One other federal effort is under way to improve estimates of illegal aliens’ fiscal impact. The U.S. 
Commission on Immigration Reform is engaged in a long-term project that includes an effort to develop better estimates of the fiscal impact of legal and illegal aliens. This bipartisan congressional commission, created by the Immigration Act of 1990, is working on a report to the Congress on a wide range of immigration issues. The final report is due in 1997; the Commission provided an interim report to the Congress in September 1994. As part of its study, the Commission has convened a task force of independent experts to review some of the estimates of aliens’ fiscal impact and develop a better understanding of how to measure this impact. Our review of estimates of the fiscal impact of illegal aliens shows that the credibility of such estimates is likely to be a persistent issue, given the limited data available on this population and differences in key assumptions and methodologies used to develop the estimates. For example, the studies we examined differed in their treatment of capital costs, the age groups they used to estimate education costs, and their methodologies for estimating revenues. While it probably will be difficult to obtain better data on the illegal alien population, greater agreement about appropriate assumptions and methodologies could help narrow the range of estimated costs and revenues. We believe state and federal officials need to reach consensus on the approaches that should be used in developing estimates of illegal aliens’ net fiscal impact. This consensus would not necessarily produce estimates that are completely accurate, but at least it would produce estimates viewed as reasonable, given the limited data available. Instead of being confronted with an array of competing estimates, lawmakers would have information that would be more useful in assessing illegal aliens’ fiscal impact. 
We obtained written comments on a draft of this report from California state officials and the Urban Institute researchers who authored the Fiscal Impacts study. While California officials found no factual errors in the report, they argued that the report overstates data problems associated with estimates of costs for illegal aliens. They also maintained that the different studies’ cost estimates were essentially identical. However, we found that the estimates did vary; moreover, most were based on indirect methods whose reliability is unknown. As noted in this report, we identified a number of problems with the cost estimates for education, Medicaid, and incarceration. California officials also provided comments on the Medicaid section that we incorporated where appropriate. (See app. III.) Urban Institute researchers agreed with our assessment of the different estimates and their relative strengths and weaknesses. The researchers also provided technical comments that we incorporated where appropriate. (See app. IV.) As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies to interested parties and make copies available to others upon request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix V. This appendix describes the methodology we used to extrapolate estimates of selected tax revenues from illegal aliens in California from two studies by Urban Institute researchers. The most recent, Fiscal Impacts of Undocumented Aliens: Selected Estimates for Seven States (the Fiscal Impacts study), estimated three types of state and local revenues from illegal aliens in California and other states (state income tax, state sales tax, and state and local property tax) for 1992. 
An earlier study, Immigrants and Taxes: A Reappraisal of Huddle’s “The Cost of Immigrants” (the Immigrants and Taxes study), estimated 13 types of federal, state, and local revenues from illegal aliens in the United States for 1992. We used these studies to develop estimates of five types of state and local revenues (state excise tax, state lottery revenue, local sales tax, state vehicle license and registration fees, and state gasoline tax) and five types of federal revenues (income tax, excise tax, Federal Insurance Contributions Act [FICA] tax, unemployment insurance tax, and gasoline tax) from illegal aliens in California in 1992. The first section summarizes the methodology used by the two studies to estimate revenues from illegal aliens. The second section describes how we used this methodology to extrapolate estimates of state and local revenues. The third section describes how we extrapolated estimates of federal revenues. Both studies by Urban Institute researchers employed a methodology called “ratio generalization,” which takes detailed revenue estimates for illegal aliens in one locality and generalizes them to other areas. The studies used estimates of taxes paid per capita and per household by illegal aliens in Los Angeles County in 1992. They used two factors to adjust for differences between Los Angeles County and the geographic areas they were concerned with (California in the Fiscal Impacts study and the United States in the Immigrants and Taxes study). For each of the five types of state and local revenues we estimated, we began with the estimate of the per capita tax payment by illegal aliens in Los Angeles County in 1992. We then took the values used in the Fiscal Impacts study for ratios 1 and 3, as well as the size of California’s illegal alien population. We used several sources to obtain values for ratio 2, the ratio of per capita tax payments for legal residents in California to Los Angeles County. 
We took the values cited in the Immigrants and Taxes study for the per capita tax payments for legal residents in Los Angeles County. To estimate per capita tax payments for legal residents in California, we used Census Bureau data on revenue collected from California residents for each of the five types of revenues and divided these amounts by the size of California’s population. For each of the five types of federal revenues we estimated, we began with the estimate of the per capita tax payment by illegal aliens in Los Angeles County in 1992. We then took the values used in the Fiscal Impacts study for ratios 1 and 3, as well as the size of California’s illegal alien population. In estimating ratio 2, the ratio of per capita tax payments for legal residents in California to Los Angeles County, we were able to obtain data on per capita taxes by state for only one of the five types of federal revenue—income tax. We used Census Bureau data on per capita federal income tax collected from California residents to estimate per capita income tax payments for legal residents in California. For our estimates of California per capita payments for the other four types of federal revenues, we used the United States average per capita tax payment figures cited in the Immigrants and Taxes study. As before, we took the values cited in the Immigrants and Taxes study for the per capita tax payments for legal residents in Los Angeles County.
George J. Borjas, Professor of Economics, University of California, San Diego
Rebecca L. Clark, Program for Research on Immigration Policy, The Urban Institute, Washington, D.C.
Richard Fry,* Division of Immigration Policy and Research, Bureau of International Labor Affairs, U.S. Department of Labor, Washington, D.C.
Briant Lindsay Lowell,* Division of Immigration Policy and Research, Bureau of International Labor Affairs, U.S. Department of Labor, Washington, D.C.
Demetrios Papademetriou,* Carnegie Endowment for International Peace, Washington, D.C.
Jeffrey S. Passel, Program for Research on Immigration Policy, The Urban Institute, Washington, D.C.
*Expert panel participant.
In addition to those named above, the following individuals made important contributions to this report: Linda F. Baker, Senior Evaluator; Alicia Puente Cackley, Senior Economist; Steven R. Machlin, Senior Social Science Analyst; and Stefanie G. Weldon, Senior Attorney.
Clark, Rebecca L. The Costs of Providing Public Assistance and Education to Immigrants. Washington, D.C.: The Urban Institute, May 1994 (revised Aug. 1994).
Clark, Rebecca L., and others. Fiscal Impacts of Undocumented Aliens: Selected Estimates for Seven States. Washington, D.C.: The Urban Institute, Sept. 1994.
“Cost Principles for State and Local Governments.” Federal Register, Vol. 46, No. 18. Jan. 28, 1981.
Fernandez, Edward W., and J. Gregory Robinson. “Illustrative Ranges of the Distribution of Undocumented Immigrants by State.” Unpublished report, U.S. Bureau of the Census, 1994.
Huddle, Donald. The Net Costs of Immigration to California. Washington, D.C.: Carrying Capacity Network, Nov. 4, 1993.
Los Angeles County Internal Services Department. Impact of Undocumented Persons and Other Immigrants on Costs, Revenues and Services in Los Angeles County. Nov. 6, 1992.
Passel, Jeffrey S. Immigrants and Taxes: A Reappraisal of Huddle’s “The Cost of Immigrants.” Washington, D.C.: The Urban Institute, Jan. 1994.
Romero, Phillip J., and others. Shifting the Costs of a Failed Federal Policy: The Net Fiscal Impact of Illegal Immigrants in California. Sacramento, Calif.: California Governor’s Office of Planning and Research, and California Department of Finance, Sept. 1994.
U.S. Bureau of the Census. Government Finances: 1990-91. Washington, D.C.: U.S. Government Printing Office.
U.S. Bureau of the Census. State Government Finances: 1992. Washington, D.C.: U.S. Government Printing Office.
U.S. Bureau of the Census. Statistical Abstract of the United States: 1994 (114th ed.). Washington, D.C.: U.S. Government Printing Office.
U.S. Department of Education. Digest of Education Statistics: 1993. Office of Educational Research and Improvement, National Center for Education Statistics, NCES-93-292. Washington, D.C.: 1993.
Warren, Robert. “Estimates of the Unauthorized Immigrant Population Residing in the United States, by Country of Origin and State of Residence: October 1992.” Unpublished report, U.S. Immigration and Naturalization Service, Apr. 29, 1994.
Benefits for Illegal Aliens: Some Program Costs Increasing, But Total Costs Unknown (GAO/T-HRD-93-33, Sept. 29, 1993).
Illegal Aliens: Despite Data Limitations, Current Methods Provide Better Population Estimates (GAO/PEMD-93-25, Aug. 5, 1993).
Trauma Care Reimbursement: Poor Understanding of Losses and Coverage for Undocumented Aliens (GAO/PEMD-93-1, Oct. 15, 1992).
Undocumented Aliens: Estimating the Cost of Their Uncompensated Hospital Care (GAO/PEMD-87-24BR, Sept. 16, 1987).
| Pursuant to a congressional request, GAO reviewed the fiscal impact of illegal aliens residing in California, focusing on: (1) the Governor of California's 1994 and 1995 budget estimates for elementary and secondary education, Medicaid benefits, and adult incarceration; (2) the estimates of revenues attributable to illegal aliens; and (3) federal efforts to improve estimates of the fiscal impact of illegal aliens residing in California. GAO found that: (1) there are limited data on California's illegal alien population's size, use of public services, and tax payments and a lack of consensus on the appropriate methodologies, assumptions, and data sources to use in estimating the costs and revenues for illegal aliens in California; (2) using the most reasonable assumptions, it adjusted California's revised estimates on the costs of elementary and secondary education and adult incarceration for illegal aliens; (3) while its overall adjusted cost estimate of $2.35 billion agreed with the state's revised estimate, the component estimates differed; (4) the estimates of revenues attributable to illegal aliens ranged from $500 million to $1.4 billion, but data limitations prevented it from judging the reasonableness of the revenue estimates; and (5) although the Urban Institute has attempted to standardize and improve states' methodologies for estimating illegal aliens' costs to the public, many differences still remain that will require further consensus. |
Mr. Chairman and Members of the Committee: We are pleased to be here today to discuss the implementation of the Paperwork Reduction Act of 1995. As you requested, we have reviewed selected aspects of the act’s implementation by the Office of Management and Budget (OMB) and three agencies—the Internal Revenue Service (IRS), the Environmental Protection Agency (EPA), and the Occupational Safety and Health Administration (OSHA). In your request letter, you noted that participants at last year’s White House Conference on Small Business believed these three agencies impose the most significant paperwork burdens on small businesses. We will focus on three main issues today: (1) changes in paperwork burden governmentwide and in the three selected agencies, (2) OMB’s responsibility to set goals for reducing such burden and whether agencies will achieve the burden reductions envisioned in the act, and (3) actions each of the three agencies has taken since the passage of the act. We will also discuss some measurement issues Congress needs to consider as it assesses agencies’ progress in reducing paperwork burden. First, however, a little background information is needed. The Paperwork Reduction Act of 1995 amended and recodified the Paperwork Reduction Act of 1980, as amended. The 1995 act reaffirmed the principles of the original act and gave new responsibilities to OMB and executive branch agencies. Like the original statute, the 1995 act requires agencies to justify any collection of information from the public by establishing the need and intended use of the information, estimating the burden that the collection will impose on the respondents, and showing that the collection is the least burdensome way to gather the information. OMB’s Office of Information and Regulatory Affairs (OIRA) compiles agencies’ burden estimates in an annual Information Collection Budget (ICB), which reports each agency’s estimated burden at the end of the fiscal year and agency estimates of the burden for the coming fiscal year. The 1995 act also makes several changes in federal paperwork reduction requirements. 
For example, it requires OIRA to set goals of at least a 10-percent burden reduction governmentwide for each of fiscal years 1996 and 1997, a 5-percent governmentwide burden reduction in each of the next 4 fiscal years, and annual agency goals that reduce burden to “the maximum practicable” extent. The act also redefines a “collection of information” to include required disclosures of information to third parties and the public, effectively overturning the Supreme Court’s 1990 Dole v. United Steelworkers of America decision. Finally, the 1995 act details new agency responsibilities for the review and control of paperwork. For example, it requires agencies to establish a 60-day public notice and comment period for each proposed collection of information before submitting the proposal to OMB for approval.

OIRA uses the ICB information to assess whether agencies’ burden reduction goals are being met. OIRA classifies changes in burden-hour estimates as caused by either “program changes” or “adjustments.” Program changes are additions or reductions to existing paperwork requirements which are imposed either through new statutory requirements or an agency’s own initiative. Adjustments are changes in burden estimates caused by factors other than changes in the actual paperwork requirements, such as changes in the population responding to a requirement or agency reestimates of the burden associated with a collection of information. OIRA counts both program changes and adjustments when calculating an agency’s burden-hour baseline at the end of each fiscal year. However, OIRA does not count changes that are due to adjustments in determining whether an agency has achieved its burden reduction goal.

Figure 1 shows changes in reported burden-hour estimates governmentwide and at IRS between September 30, 1980, and September 30, 1995—the day before the new act took effect. The governmentwide burden-hour estimate rose dramatically in 1989, and has risen every year since then with the exception of 1993.
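OIRA’s two accounting rules described above (adjustments count toward the end-of-year baseline but not toward goal attainment) can be sketched as a small calculation. The figures below are hypothetical, chosen only to illustrate the rule:

```python
# Hypothetical illustration of OIRA's burden-hour accounting rules:
# both program changes and adjustments move the end-of-year baseline,
# but only program changes count toward an agency's reduction goal.

def end_of_year_baseline(start, program_changes, adjustments):
    """The baseline counts every change, whatever its cause."""
    return start + sum(program_changes) + sum(adjustments)

def goal_attained(start, program_changes, goal_percent):
    """Goal attainment ignores adjustments entirely."""
    reduction = -sum(program_changes)        # net hours cut by real changes
    return reduction >= start * goal_percent // 100

start = 100_000_000                          # hypothetical starting baseline
program_changes = [-6_000_000, 1_000_000]    # dropped and new requirements
adjustments = [12_000_000]                   # reestimates, population shifts

print(end_of_year_baseline(start, program_changes, adjustments))  # 107000000
print(goal_attained(start, program_changes, 5))   # True: 5 million of 100 million cut
print(goal_attained(start, program_changes, 10))  # False: adjustments do not count
```

Under these rules, an agency’s baseline can rise (here, by 7 million hours) in the same year it is credited with meeting a reduction goal, which is why the two measures can tell different stories.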
In each year since fiscal year 1989, IRS’ paperwork burden has accounted for more than three-quarters of the governmentwide total. Increases or decreases in IRS’ total number of burden hours have had a dramatic effect on the governmentwide total. For example, the near tripling of the governmentwide burden-hour estimate during fiscal year 1989 was primarily because IRS changed the way it calculated its information collection burden, which increased its paperwork estimate by about 3.4 billion hours. Because the IRS paperwork burden is such a large portion of the governmentwide total, the success of any governmentwide effort to reduce burden largely depends on reducing the burden imposed by IRS.

Figures 2 and 3 show the changes in the paperwork burden at EPA and OSHA, respectively, during the same 1980 to 1995 period (burden hours in millions). EPA’s burden-hour estimate rose sharply in the late 1980s, fell somewhat in 1991 (because third-party information collections were no longer being counted as a result of the Dole decision), and rose again between 1991 and 1995. OSHA’s burden-hour estimate increased gradually through 1987, rose rapidly in 1988, fell back to its previous level by 1990, and decreased slightly until it rose sharply between 1994 and 1995.

Figure 4 shows the month-by-month changes in the governmentwide paperwork burden between September 30, 1994, and March 30, 1996—the period including the date the 1995 act was signed by the President (May 22, 1995) and its effective date (October 1, 1995). The governmentwide estimate stood at 6.90 billion hours on September 30, 1995. IRS increased its burden-hour estimate by more than 147.6 million burden hours (about 3 percent) between August and September; EPA’s estimate went up more than 21 million hours (more than 25 percent) during that month.
OSHA’s burden-hour estimate rose most dramatically shortly before the effective date, from about 1.5 million hours on June 30, 1995, to about 208 million hours on September 30, 1995. Documents we reviewed and officials we talked to indicated that these increases occurred during this period because agencies were trying to get proposed information collections approved before the new act took effect on October 1, 1995. Some of the proposals at OSHA and EPA were third-party and public disclosures that had previously been removed from the agencies’ estimates because of the Dole decision. Other proposals, particularly those at OSHA, were third-party and public disclosures that had been added after the Dole decision. By getting these third-party and other proposed information collections approved before the act’s effective date, agencies were able to avoid the new requirements imposed by the act, including the 60-day public notice and comment period at the agencies. OIRA approved some of these collections of information for less than 1 year so that the agencies would have to clear the collections under the new process during fiscal year 1996. However, submitting the proposals for review and approval before the act took effect also raised the burden-hour baseline against which the agencies’ paperwork reduction goals would be judged. For example, the increase in OSHA’s burden-hour baseline from about 1.5 million hours to about 208 million hours between June and September 1995 meant that OSHA had to cut more burden hours to achieve a 10 percent reduction (20.8 million hours) than it would have had to cut before the increase (about 150,000 hours). One of the key features of the Paperwork Reduction Act of 1995 is the requirement that OIRA set both governmentwide and agency-specific burden reduction goals for fiscal year 1996 and for the next 5 fiscal years. However, as of May 31, 1996, OIRA had not set any such goals. 
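The effect of OSHA’s pre-act baseline increase on its reduction target follows directly from the figures above, since a 10-percent target scales with the baseline:

```python
# A 10-percent reduction target scales with the baseline, so raising the
# baseline from about 1.5 million to about 208 million hours raised the
# hours OSHA would have to cut (figures from the statement, rounded).

def ten_percent_target(baseline_hours):
    return baseline_hours // 10      # 10 percent, in whole hours

before = 1_500_000       # OSHA baseline, June 30, 1995
after = 208_000_000      # OSHA baseline, September 30, 1995

print(ten_percent_target(before))  # 150000 (about 150,000 hours)
print(ten_percent_target(after))   # 20800000 (about 20.8 million hours)
```

The same percentage goal thus became roughly 140 times harder to meet in absolute hours once the third-party collections were added back to the baseline.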
More importantly, information that the agencies submitted to OIRA indicated that the burden reduction target that the act specified for fiscal year 1996 is unlikely to be reached. OIRA staff told us that they plan to set the fiscal year 1996 burden reduction goals in a soon-to-be-published ICB. As part of the ICB development process, in September 1995, OIRA asked agencies to project what their burden-hour levels would be at the end of fiscal year 1996. Agencies submitted that information to OIRA between December 1995 and February 1996. OIRA staff said that they will establish a governmentwide burden reduction goal of 10 percent for fiscal year 1996, as the act requires. They also said that agency goals will reflect the end-of-fiscal year 1996 burden-hour estimates that the agencies provided in their ICB submissions unless changed as a result of OIRA review. According to unpublished information we obtained from OIRA and the agencies, the weighted average of the agencies’ burden reduction projections is about 1 percent. If these projections are accurate, the fiscal year 1996 goal of a 10-percent reduction in governmentwide paperwork burden that the 1995 act calls for will not be accomplished.

Figure 5 shows the actual month-to-month governmentwide paperwork estimates from March 1995 to March 1996 and, according to our calculations, what the number of burden hours would have been by the end of fiscal year 1996 if the 10-percent burden reduction goal had been achieved and what the burden-hour total is expected to be on the basis of agencies’ projections. OIRA may also set individual agency goals that will not add up to the governmentwide goal of a 10-percent reduction in burden: “individual agency goals negotiated with OIRA may differ depending on the agency’s potential to reduce the paperwork burden such agency imposes on the public.
Goals negotiated with some agencies may substantially exceed the Government-wide goal, while those negotiated with other agencies may be substantially less.”

In addition to setting goals for paperwork reduction, the act requires OIRA to “keep the Congress and congressional committees fully and currently informed of the major activities under this chapter.” However, as of May 31, 1996, the OIRA Administrator had not informed Congress or congressional committees (1) about why OIRA has not established any burden reduction goals to date and (2) that agency projections OIRA received at least 3 months ago indicated that the 10 percent governmentwide paperwork reduction goal called for in the act would not be achieved. Both of these issues appear to us to be “major activities” subject to the act’s requirement that the OIRA Administrator keep Congress fully and currently informed.

Information collection is one method by which agencies carry out their missions, and those missions are established by Congress through legislation. For the past several years, the ICBs have indicated that agencies’ burden-hour estimates increased because of congressionally imposed statutory requirements. For example, the fiscal year 1993 ICB noted that title IV of the Clean Air Act Amendments of 1990 established new permitting requirements for emission sources that produce nitrogen oxides, resulting in a 1.8 million hour increase to EPA’s burden-hour estimate. As a result of such requirements, some agencies contend that they are limited in the extent to which they can reduce their paperwork burden. If agencies’ paperwork requirements are truly statutorily mandated, those agencies may not be able to reduce their burden-hour estimates by the amounts envisioned in the 1995 act without changes in the legislation underlying those requirements.
However, neither we nor OIRA have assessed the extent to which the paperwork burden agencies impose is directly a consequence of statutory requirements and, therefore, is out of agencies’ control. Even though a statute may require an agency to take certain actions, the agency may have discretion regarding whether paperwork requirements need to be imposed and, if so, the manner or frequency with which the information is collected. For example, although several statutes require employers to provide training to employees, OSHA may have discretion to determine whether employers need to submit paperwork to demonstrate their compliance with these provisions.

As a part of their ICB submissions to OIRA, EPA, IRS, and OSHA each projected what it believed its total number of burden-hours would be as of September 30, 1996. Each agency also took different steps to reduce its paperwork burden.

EPA has its own effort to reduce paperwork that began before the Paperwork Reduction Act of 1995 took effect. EPA has set an internal burden-reduction target and expects to reach that target by the end of this year. Despite these efforts, EPA reported that its burden-hour reductions will be largely offset by increases in statutorily based information collections. In March 1995, the EPA Administrator committed to reducing the agency’s January 1, 1995, estimated paperwork burden by 25 percent by June 1996. Initially, EPA estimated that its January 1995 baseline was about 81 million burden hours, so a 25-percent reduction would bring the agency’s total to about 61 million hours. In March of this year, we provided a statement for the record to the House Committee on Small Business indicating that, despite these planned reductions, EPA projected that its burden-hour total would increase to about 117 million hours by September 30, 1996—an increase of about 44 percent from EPA’s January 1995 baseline. EPA has since revised its projection for September 30, 1996, from 117 million hours to about 100 million hours.
EPA officials said their projection was revised because some planned information collections would not be approved by OIRA by the end of the fiscal year and because their original estimate did not include all of the burden-hour reductions that EPA now expects to make by the end of the fiscal year. Using EPA’s most recent estimates, figure 6 shows EPA’s burden-hour baseline as of January 1, 1995, the 25-percent reduction goal that EPA expects to accomplish by December 31, 1996, and the total number of burden hours that EPA currently projects will be in place as of September 30, 1996. (The figures shown do not include about 9 million hours of third-party burden or about 5 million hours of burden associated with the Toxic Release Inventory (TRI).) As you can see, despite EPA’s burden-reduction efforts during this period, EPA’s burden-hour estimate at the end of this fiscal year is expected to be about what it was at the start of those efforts. This is because, at the same time EPA has been reducing its January 1995 paperwork inventory, new burden hours have been added to that inventory. According to EPA, those additions are primarily third-party burden hours that are now being counted as a result of the Paperwork Reduction Act of 1995 and new information collections associated with the Clean Air Act Amendments of 1990 and the Residential Lead-Based Paint Hazard Reduction Act of 1992.

Although EPA’s efforts to reduce burden hours have been almost totally offset by new information collection requirements, EPA’s attempt to reduce its paperwork burden may prevent what would otherwise be a significant increase in the agency’s paperwork burden. As of May 1996, EPA said that it had completed reductions of about 15 million hours and had identified about 8 million more hours of burden for elimination.
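Assuming EPA’s figures are accurate, the remaining gap to its 25-percent goal can be checked arithmetically (the statement cites a revised baseline of 101 million hours; all figures rounded):

```python
# Checking EPA's arithmetic with figures from the statement (rounded):
# a 25-percent cut from the revised 101 million hour baseline, with
# 15 million hours of reductions completed and 8 million more identified.

baseline = 101_000_000
goal = baseline * 25 // 100       # hours EPA must eliminate: 25,250,000

completed = 15_000_000
identified = 8_000_000
still_needed = goal - completed - identified

print(goal)          # 25250000
print(still_needed)  # 2250000 -> about 2 million more hours, as stated
```

The roughly 2 million hour remainder matches the statement’s conclusion that EPA must find reductions beyond those already completed or identified.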
If these figures are accurate, EPA would need to eliminate the 8 million burden hours it had identified and identify and eliminate about 2 million more hours to reach its goal of reducing its 101 million burden-hour baseline by 25 percent. Without the burden-hour reductions EPA says it has accomplished or has in progress, the agency’s paperwork burden could have increased by 25 percent by the end of the year.

Although EPA’s initiative to reduce the burden it imposes is promising, its burden-reduction claims warrant continued scrutiny. As we reported in our March 1996 statement for the record to the House Small Business Committee, some of EPA’s February 1996 burden reduction estimates were overstated. For example, EPA initially claimed that a recently adopted TRI reporting option reduced the burden associated with TRI by about 1.2 million hours. However, EPA did not offset this reduction by the additional paperwork burden it created—about 800,000 hours—that would be incurred by those choosing this option. Therefore, the real burden reduction was about 400,000 hours. EPA estimated that it had reduced the burden associated with its land disposal restrictions program by 1.6 million hours, but its January 1, 1995, baseline indicated that the entire program only accounted for about 800,000 hours. EPA subsequently revised its January 1, 1995, baseline from which the burden-hour reductions are being taken.

Reducing burden on the taxpayer is one of the primary goals in IRS’s Business Master Plan, in which the agency identifies a number of burden-reduction actions that it plans to take. In its ICB submission, IRS said that it plans to reduce its measured paperwork burden by about 50 million hours (0.9 percent) during fiscal year 1996 by simplifying forms and instructions, changing reporting thresholds, and moving eligible taxpayers to “E-Z” versions of required forms.
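The overstated TRI claim discussed above reduces to simple netting: a claimed reduction must be offset by any new burden the same action creates. Using the statement’s figures:

```python
# Net burden reduction for EPA's TRI reporting option (figures from the
# statement): the claimed cut must be offset by the new burden the option
# itself creates for the filers who choose it.

claimed_reduction = 1_200_000   # hours EPA initially claimed
new_burden_created = 800_000    # hours added for those using the option
real_reduction = claimed_reduction - new_burden_created

print(real_reduction)  # 400000 hours, as the statement concludes
```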
IRS officials said they are limited in the extent to which they can reduce the agency’s paperwork burden because most of IRS’ information collections are statutorily mandated in the tax code. They said that unless changes are made to the substantive requirements in the code, IRS will not be able to substantially reduce its paperwork burden. IRS officials also said that significant portions of the agency’s efforts to reduce its burden focus on types of burden that are not covered by the Paperwork Reduction Act. For example, they said that a major part of the real paperwork burden on the taxpayer comes from responding to IRS notices, and IRS has a major initiative under way to determine which notices can be eliminated, combined, or simplified. However, they said that notices are not covered by the act because they focus on information collected from a single individual in the course of an investigation or inquiry.

OSHA officials said that they assumed their agency would be responsible for reducing its burden by 10 percent during fiscal year 1996 as its share of the governmentwide goal. In its 1995 ICB submission to the Department of Labor, OSHA said that it would reduce its fiscal year 1995 paperwork burden by 8.7 million hours (about 4 percent) during fiscal year 1996 by dropping a number of certification requirements. Although OSHA has begun the process of eliminating these certification requirements, in the spring of 1996 OSHA officials told us that the process may not be completed in time to eliminate the requirements by the end of the fiscal year.

After submission of its ICB, OSHA officials discovered that they could claim additional burden reductions. OSHA’s Process Safety Management of Highly Hazardous Chemicals Standard is a third-party information collection that the agency added to its burden-hour total in August 1995. At that time, OSHA officials estimated the paperwork requirements associated with the standard at 135 million burden hours.
In keeping with a schedule established by the standard when it was issued in 1992, the burden imposed on employers declined in May 1996 because they were no longer required to perform certain recordkeeping functions after that date. OSHA officials said that they initially considered the decline in employer responsibilities an adjustment, which could not be counted toward the agency’s 10 percent burden reduction goal in their ICB submission. However, they said the Department of Labor paperwork clearance official told them the change should be considered a program change, and therefore should be counted as part of OSHA’s paperwork reduction effort. Consequently, OSHA reduced its 135 million burden-hour estimate by 17 million hours—8 percent of OSHA’s total fiscal year 1995 burden.

As Congress exercises oversight in this area, it is important that it keep in mind several measurement issues. As noted previously, OIRA does not count any adjustments (because of reestimates or population changes) that agencies submit with their information collection requests in determining whether an agency has met its paperwork burden reduction goals. Therefore, an agency that initially submits a high estimate and later revises it downward does not get credit from OIRA for the reduction. Conversely, if an agency initially submits a low paperwork estimate and later increases the estimate, OIRA never counts the increase against the agency for goal attainment purposes. In fact, the governmentwide increase of about 1 billion burden hours between 1990 and 1995 was primarily driven by adjustments that never counted against agencies’ goals. OIRA staff told us they were not aware of any evidence that agencies were systematically underestimating the burden associated with their information collections and then revising them upward. Nevertheless, large adjustments, such as IRS’ 3.4 billion hour reestimate in 1989, illustrate the caution needed in interpreting the official burden-hour statistics.
Most or all of the burden-hour increase may have actually existed since 1980 when the original Paperwork Reduction Act became effective. If this were the case, the statistics available to policymakers would seriously underestimate the burden actually imposed on the public, and figure 1 would overstate the degree to which paperwork burden actually increased since 1980. The increase in measured burden as a consequence of the inclusion of third-party and public disclosures in September 1995 was similar to the IRS reestimate; the burden already existed but had just not been previously measured. Likewise, the burden felt by the public does not diminish when an agency recalculates a lower estimate of its paperwork burden without eliminating any existing requirements.

Relatedly, it is important that Congress be aware that certain elements of agencies’ information collection burden are not reflected in some burden-hour estimates. As we mentioned earlier, OIRA does not count about 5 million hours of paperwork burden associated with EPA’s TRI reporting form because the form is not submitted for OIRA approval. IRS’s burden-hour estimates do not include such information collections as notices involving errors, nonfilings, and delinquencies because they are exempted from coverage under the act.

Finally, as we have said in previous reports and testimonies, users of paperwork burden-hour estimates should proceed with great caution. The degree to which such estimates reflect real burden and the factors that cause changes to the burden-hour totals are often unclear. Nevertheless, they are the best indicators of paperwork burden available, and we believe that they can be useful as long as their limitations are borne in mind.

Mr. Chairman, this completes our prepared statement. We would be pleased to answer any questions.
GAO discussed governmentwide implementation of the Paperwork Reduction Act of 1995, and three federal agencies' actions to implement the act. GAO noted that: (1) between 1980 and 1995, reported governmentwide paperwork burden hours increased from about 1.5 billion to 6.9 billion; (2) the Internal Revenue Service (IRS) accounts for most of the federal paperwork burden; (3) IRS accounted for a three-fold increase in 1989 because it changed the way it calculated its information collection burden; (4) governmentwide burden hours increased almost 8 percent in the month before the act's effective date because agencies were trying to get proposed information collection activities approved before that date; (5) as of May 1996, the Office of Management and Budget's Office of Information and Regulatory Affairs had not set burden reduction goals or kept Congress informed about implementation progress; (6) agencies' weighted average burden reduction is likely to be 1 percent for fiscal year (FY) 1996, but the act's FY 1996 reduction goal is 10 percent; (7) agencies believe that statutory mission-related requirements limit their ability to
reduce paperwork burdens; and (8) Congress should consider several measurement issues, including counting adjustments toward or against reduction goals, the difference between measured and actual paperwork burdens, and potentially incomplete agency burden estimates.
S. 981 would require agencies to place in the rulemaking record, before publication of any proposed or final rule, a document clearly identifying the changes made to the draft submitted to OMB’s Office of Information and Regulatory Affairs (OIRA), including and separately identifying the changes made at the suggestion or recommendation of OIRA. These requirements are intended to permit the public to understand the source of changes to proposed rules, and are very similar to requirements in section 6 of the executive order. However, whereas the bill requires that the changes be recorded in a single document in the rulemaking record, the order does not specify how agencies must identify the changes for the public.

Mr. Chairman, at your and Senator Glenn’s request, we have been reviewing the implementation of these executive order provisions at four major regulatory agencies—the Departments of Housing and Urban Development (HUD) and Transportation (DOT), the Department of Labor’s Occupational Safety and Health Administration (OSHA), and the Environmental Protection Agency (EPA). Of the 129 regulatory actions that we reviewed in those agencies, fewer than 25 percent had a clear and simple document in the rulemaking docket illustrating the changes made to the rules while at OIRA for review or the changes made at OIRA’s recommendation. Where we found documentation, it was either a “redline/strikeout” version of the rule showing the changes made, a memo to the file listing the changes, or a memo certifying that there were no such changes. While some dockets for the other rules had indications of changes made during OIRA’s review and by OIRA, it was not clear that all changes had been recorded. Most commonly, however, the rulemaking dockets simply had no information on whether changes had been made to the rules. In those cases it was impossible for us to know whether changes had not been made to the rules, or whether documentation of the changes was missing.
In some cases the agencies had clear documents that delineated the changes made to their rules while at or by OIRA, but those documents were not available to the public. For example, the U.S. Coast Guard (USCG) in DOT often prepared detailed summaries of these kinds of changes, but USCG officials said that these summaries were internal communications that were not available to the public. OSHA had comprehensive documentation of its interactions with OIRA, but the information was maintained in files separate from the public docket. OSHA officials said that they would make this information available to the public upon request. However, in order for individuals to request the information, they must first know that the documents exist.

Also, the dockets varied in the degree to which they could be used by the public to find the information required by the executive order. First, it is important to realize that the docket for a single rule can be extremely voluminous. For example, the docket for one of the rules we reviewed at DOT’s Federal Railroad Administration (FRA) contained 17 folders of material, some of which were nearly a foot thick. However, the docket for this rule and all of the others that we examined at FRA, HUD, and some other agencies had no indexes. Therefore, the public would have to review the entire docket to find any documentation of changes made at the suggestion of OIRA or changes in the draft submitted to OIRA. In contrast, the Office of Air and Radiation’s docket at EPA had a consistently structured index for all its rules, with specific sections in which information related to OIRA’s reviews could be found. At the time of our review, the Office of the Secretary of DOT was automating its dockets so that both indexes and eventually the entire rulemaking record could be accessed on-line.
In testimony last September before this Committee, the OIRA Administrator acknowledged that agencies had not “been scrupulously attentive” to the executive order’s requirement that they document the changes made at OIRA’s suggestion or recommendation. She also said, however, that the executive order had “created a more open and accountable review process” and that she had heard “no complaints about accountability and transparency.” We believe that these public disclosure requirements in the executive order, combined with the administration’s assertion of their effectiveness, have resulted in a public perception that changes made to a regulation while at OIRA and by OIRA are readily identifiable. However, our review indicated that this was usually not the case.

Enactment of the public disclosure requirements in S. 981 would provide a statutory foundation for the public’s right to regulatory review information. In particular, we believe that the bill’s requirement that these rule changes be described in a single document would make understanding regulatory changes much easier for the public. Our ongoing work is also examining how agencies can more clearly document changes made at the suggestion of OIRA, and how agencies could organize their dockets to best facilitate public access and disclosure.

We have also done work relevant to Subchapter III of S. 981, which requires agencies to review existing rules identified by an advisory committee representing a balanced cross section of public and private interests. The agencies must then decide whether to retain, amend, or repeal the rules they review. There have been several previous requirements by both Congress and previous presidents that federal agencies review their existing regulations. Most recently, section 5 of Executive Order 12866 required agencies to submit a program to OIRA by December 31, 1993, under which they would periodically review their existing significant regulations to determine whether any should be modified or eliminated.
According to the order, the purpose of the review was to make the agencies’ regulatory programs more effective, less burdensome, or better aligned with the President’s priorities and the principles in the order. On June 12, 1995, the President announced that a page-by-page review of the CFR had resulted in commitments to eliminate 16,000 pages from the 140,000-page CFR and modify another 31,000 pages either through administrative or legislative means. Administration officials indicated that much of the “savings, the reduction of burden,” would come from the CFR pages that were being revised.

Mr. Chairman, at your request we have been further examining the administration’s page elimination and revision effort. We found that the four agencies that we reviewed (HUD, DOT, OSHA, and EPA) were adding pages to the CFR at the same time that pages were being deleted. As a result, although the four agencies reported to OMB that they eliminated 5,500 pages from the CFR during this initiative, as of April 30 of this year the agencies’ net reduction in CFR pages when page additions are taken into consideration was about 900 pages. Two of the four agencies’ CFR parts actually grew during their page elimination effort—DOT by about 300 pages and EPA by nearly 1,000 pages. The four agencies pointed out that pages are often added to the CFR because of statutory requirements or to clarify requirements placed on regulated entities, and that pages are sometimes retained at the request of those entities.

Our review of 422 CFR revision efforts in the 4 agencies indicated that about 40 percent should reduce the burden felt by regulated entities, and another 15 percent should make regulations easier to find or to understand. For example, one EPA action that appeared to reduce regulatory burden involved changing the frequency with which states must submit information related to state water quality standards under section 303(d) of the Clean Water Act from every 2 years to every 5 years.
Lessening the frequency with which this information must be submitted should reduce the paperwork burden imposed on the states. Similarly, one OSHA action that appeared to be a minor burden reduction proposed to “eliminate the complexity, duplicative nature, and obsolescence” of certain standards and “write them in plain language.”

However, about 8 percent of the actions appeared to increase the burden felt by those being regulated, and another 27 percent did not appear to affect regulatory burden at all. For example, OSHA proposed revising its general industry safety standard for training powered industrial truck operators and adding equivalent training requirements for the maritime industries. OSHA estimated that the first year cost of compliance with the proposed standard would be $34.9 million, with annual costs thereafter of $19.4 million. As an example of an action with no apparent effect on burden, one DOT action proposed amending the Transportation Acquisition Regulations to change organizational names and renumber and rename certain sections of the CFR. We could not determine what effect about 9 percent of the actions would have on regulatory burden, either because the information available describing the actions contained elements of both burden reduction and burden increase that could be offsetting or because the information was vague.

We recognize that directly measuring changes in regulatory burden is extremely difficult. However, we also believe that the administration’s chosen metric of pages in the CFR that are eliminated or revised is a poor proxy for changes in regulatory burden. Eliminating or changing hundreds of pages that are obsolete or rarely enforced can have little practical effect on regulatory burden, whereas slight changes in wording of a single sentence can have a tremendous effect. Enactment of the review requirements in S. 981 would provide a statutory basis for periodic examinations of existing rules.
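The CFR page arithmetic reported earlier can be verified the same way: reported eliminations minus the net reduction implies how many pages were added during the initiative.

```python
# Net CFR page change implied by the statement's figures: the four agencies
# reported eliminating 5,500 pages, but the net reduction was only about
# 900 pages once additions are counted.

pages_reported_eliminated = 5_500
net_reduction = 900
implied_additions = pages_reported_eliminated - net_reduction

print(implied_additions)  # 4600 pages added over the same period
```

The roughly 4,600 implied additions illustrate why a count of eliminated pages alone overstates the initiative’s effect.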
We believe that such examinations are a good idea in that they can determine the continued relevance of regulatory requirements and help ensure that the requirements impose as little burden as possible. Identification of rules for review by the advisory committee that would be established by the bill may lead to more substantive changes than have heretofore been made by the agencies on their own. Although both S. 981 and Executive Order 12866 require agencies to conduct cost-benefit analyses for major rules and to make the results available to the public, the bill goes further than the order in requiring disclosure of how those analyses are conducted. For example, one of the bill’s “findings” states that cost-benefit analyses and risk assessments “should be presented with a clear statement of the analytical assumptions and uncertainties including an explanation of what is known and not known and what the implications of alternative assumptions might be.” Section 623 of the bill requires agencies to include an executive summary of the regulatory analyses, including the benefits and costs of reasonable alternatives and “the key assumptions and scientific or economic information upon which the agency relied.” In January 1996, OMB issued guidance to executive agencies on preparing the economic analyses called for in Executive Order 12866. Although the OMB guidance provided agencies with substantial flexibility in how such analyses should be conducted, the guidance sounded some of the same themes as S. 981, stating: “Analysis of the risks, benefits, and costs associated with regulation must be guided by the principles of full disclosure and transparency. Data, models, inferences, and assumptions should be identified and evaluated explicitly, together with adequate justifications of choices made, and assessments of the effects of these choices on the analysis.
The existence of plausible alternative models or assumptions, and their implications, should be identified.” Our previous work examining agencies’ cost-benefit analyses indicated that the studies are often not as transparent as either the bill or the OMB guidance contemplates. For example, in our report earlier this year on EPA’s analyses that support air quality regulations, we found that certain key economic assumptions—such as discount rates and assumed values of human life—were often not identified. Even in those cases in which the assumptions were identified, the reasons for the values used were not always explained. For example, one analysis assumed a value of life that ranged from $1.6 million to $8.5 million while another—prepared in the same year—assumed a value of life that ranged from $3 million to $12 million. In neither case did the analyses clearly explain why the values were used. We also found that about one-quarter of the analyses that we reviewed examined only one alternative—the regulatory action being considered. S. 981’s call for executive summaries in the cost-benefit analyses echoes a recommendation we made 13 years ago. In a 1984 report, we recommended that EPA’s cost-benefit analyses include executive summaries that identify (1) all benefits and costs—even those that cannot be quantified; (2) the range of uncertainties associated with the benefits and costs; and (3) a comparison of feasible alternatives. However, about one-half of the 23 EPA analyses supporting air quality regulations that we reviewed last year did not have executive summaries. Mr. Chairman, at your and Senator Glenn’s request we are currently evaluating executive agencies’ preparation and use of regulatory analyses. Although our work to date has focused primarily on EPA and DOT, we are finding significant variations both between and within the two agencies in how they conduct their analyses and present key analytical components.
For example, the base-case discount rates used in the 11 analyses we have reviewed ranged from 2.1 to 7 percent. The reasons for the rate chosen were frequently not explained, nor were the implications of using alternative rates discussed in the analyses. As a result, agency decisionmakers, Congress, and the public may not be aware that the results of these analyses could have been significantly different if different assumptions had been used. In several of the analyses we reviewed, various key components were either missing altogether, difficult to find, or located in documents other than the analyses themselves. Some of the analyses consisted of several separate documents that were never consolidated in a clear manner. For example, agency officials told us that one of the economic analyses was actually 12 separate memoranda. We are also finding that many of the analyses are actually cost-effectiveness studies rather than cost-benefit analyses. Cost-effectiveness analyses generally look for ways to meet a given goal at the least cost, while cost-benefit analyses usually involve a systematic identification of all costs and benefits associated with the proposed regulation and alternative approaches. Although cost-effectiveness analyses permit comparison of the costs of regulatory options relative to a given objective, these analyses generally do not address the merits of the objective itself. Agency officials explained that they often prepare cost-effectiveness analyses in cases where Congress has mandated the development of specific regulations—such as new emission standards for locomotives. According to the officials, in such cases it makes more sense to look for the most cost-effective approach to achieve that result rather than assessing all of the benefits and costs of alternative approaches. In contrast, EPA prepared more systematic cost-benefit analyses to support its recent ozone and particulate matter standards.
According to the agency, the more systematic cost-benefit analyses will aid EPA and the states when the standards are implemented—at which time costs can be considered. In addition, the more systematic analyses provide important information to the Congress and the public on the likely costs and benefits of mandates where the agencies are limited in their regulatory decisions. Our findings are similar to those of others who have recently examined cost-benefit studies. In its March 1997 report on economic analyses, the Congressional Budget Office concluded that there is no such thing as a typical analysis, and that even determining what constitutes an economic analysis is difficult. In its July 1997 draft report on governmentwide costs and benefits, OMB said that it found “a wide variety in the type, form, and format” of the information generated and used by the agencies, including “enormous data gaps in the information available on regulatory benefits and costs,” problems with establishing baselines, and a lack of consensus on how to value items or qualities not generally traded in the marketplace. OMB concluded that “we need to ensure that the quality of data and analysis used by the agencies improves, that standardized assumptions and methodologies are applied more uniformly across regulatory programs and agencies...” A diverse panel of renowned economists made a similar recommendation in a 1996 paper prepared under the auspices of the American Enterprise Institute, the Annapolis Center, and Resources for the Future. Among other things, the panel recommended that agencies present their results using a standard format that summarizes the key results and highlights major uncertainties. Enactment of the analytical transparency and executive summary requirements in S. 981 would extend and underscore Congress’ previous statutory requirements that agencies identify how regulatory decisions are made. 
We believe that Congress and the public have a right to know what alternatives the agencies considered and what assumptions they made in deciding how to regulate. Although those assumptions may legitimately vary from one analysis to another, the agencies should explain those variations. S. 981 also requires agencies to provide for peer review of all required cost-benefit analyses and risk assessments. Peer review is the critical evaluation of scientific and technical work products by independent experts. The bill states that the peer review panels should be “broadly representative and balanced,” and that the results of the reviews should be made available to the public. We believe that important economic analyses should be peer reviewed. In response to questions raised at a March 1997 hearing on peer review at EPA, we said that, given the uncertainties associated with predicting the future economic impacts of various regulatory alternatives, rigorous, independent review of economic analyses should help enhance the quality, credibility, and acceptability of those products and of the associated agency decisions. However, in our 1996 review of peer review at EPA, whose own policies and procedures call for such reviews, we concluded that implementation of these policies and procedures had been uneven. In some cases important aspects of the agency’s peer review policy were not followed or peer review was not conducted at all. Our current work examining regulatory analyses at executive branch agencies is yielding similar evidence. None of the nine EPA analyses that we have reviewed thus far has been peer reviewed, even though all of the associated rules have an estimated annual impact on the economy of at least $100 million. Peer review could be conducted more often if economic analyses were initiated at the beginning of the rulemaking process. The peer review requirements in S. 981 provide agencies with substantial flexibility.
Agency heads may certify that adequate peer review has already been conducted and thereby avoid the bill’s requirements. However, agencies will need to plan carefully for such reviews given the bill’s requirement that they be done for each cost-benefit analysis and risk assessment at both the proposed and final rulemaking stages. Agencies will also need to ensure that all affected parties are represented on the panels and that panel reports reflect the diversity of opinions that exist. Mr. Chairman, our work has demonstrated that, although there is broad consensus about the value of conducting peer reviews of cost-benefit analyses used in the regulatory process, such reviews are often not done. In our view, systematic peer review as mandated by S. 981 would go a long way toward improving the quality of agencies’ cost-benefit analyses. S. 981 contains a number of provisions to improve regulatory management. Requiring agencies to clearly describe in a single document the changes made at the suggestion of OIRA or while under OIRA review can improve the transparency of the review process. Establishing advisory committees to identify rules for review could result in the elimination or revision of burdensome requirements. Improving the transparency and understandability of cost-benefit analyses by using executive summaries and other devices will help the public comprehend why regulatory decisions are made. Peer reviews of those analyses can help ensure that regulatory proposals are scientifically grounded. Although other provisions in the bill, such as comparative risk assessment and interagency coordination, may have similarly beneficial results, we have not done specific work in those areas. Passage of S. 981 would provide a statutory foundation for such principles as openness, accountability, and sound science in rulemaking.
The key to achieving those principles is successful implementation, which will require strong guidance from OIRA and oversight from this and other Committees in Congress. Enactment of S. 981 would provide a sound basis for that oversight. Mr. Chairman, this completes my prepared statement. I would be pleased to answer any questions. | Pursuant to a congressional request, GAO addressed issues in regulatory management as part of consideration of S. 981, the proposed Regulatory Improvement Act of 1997. GAO noted that: (1) S.
981 represents a continuation of efforts that have been made by both the legislative and executive branches to improve the rulemaking process and, as a result, produce better regulations; (2) during the past 20 years, Congress has enacted a series of statutory requirements intended to, among other things, reduce paperwork, lessen regulatory burden on small entities, and curb mandates imposed on state, local, and tribal governments and the private sector; (3) in the same vein, each of the last six presidents has issued executive orders or taken other actions intended to improve the regulatory process; (4) Executive Order 12866, issued in September 1993, is the Clinton administration's statement of policy on regulatory planning and review; (5) the executive order makes the Office of Management and Budget (OMB) responsible for carrying out regulatory reviews and, to the extent permitted by law, for providing guidance to agencies; (6) S. 981 addresses many of the same issues as Executive Order 12866, including cost-benefit analysis, agency reviews of existing regulations, interagency coordination, and transparency in the regulatory review process; (7) the bill goes beyond the order's requirements on these issues and adds some new elements to the rulemaking process; (8) GAO's work indicates that some of the executive order's requirements have not always been met; and (9) enactment of S. 981 would help ensure that the underlying purposes of the order's requirements are more consistently achieved by OMB and regulatory agencies and provide a sound basis for congressional oversight of regulatory management issues. |
The Coast Guard has updated policies and processes for major acquisition programs to better reflect best practices and respond to our prior recommendations. The Coast Guard also continues to make progress in reducing its acquisition workforce vacancies, and to some extent is leveraging DOD contracts and expertise to support its major acquisition programs. Some examples are below. With regard to updates to policies and processes, we found that the Coast Guard revised its Major Systems Acquisition Manual in November 2010 to include a description of the roles and responsibilities of a flag-level Executive Oversight Council, which was formed in 2009 to review programs and provide oversight; alignment of the roles and responsibilities of independent test authorities with DHS standards, which satisfied one of our prior recommendations; a formal acquisition decision event before a program receives approval for low-rate initial production, which addressed another of our prior recommendations; and a requirement to present an acquisition strategy when DHS is asked to validate the need for a major acquisition program. The Coast Guard has also made progress in reducing its acquisition workforce vacancies. From April through November 2010, the percentage of vacancies for government positions dropped from about 20 percent to 13 percent. Over the past several years, we have reported on the Coast Guard’s efforts to build its in-house acquisition workforce capacity; indeed, the Coast Guard initially turned to a contractor as the Deepwater systems integrator largely because it did not have that in-house capacity. Acquisition workforce vacancies have decreased, but program managers have ongoing concerns about staffing program offices. For example, the HH-65 helicopter program office has funded and filled only 10 of the 33 positions needed. To help make up shortfalls in filling systems engineer and other acquisition workforce positions, the Coast Guard uses support contractors.
As of November 2010, support contractors made up 25 percent of the Coast Guard’s acquisition workforce. While we have cited the risks in using support contractors, we previously reported that the Coast Guard has acknowledged these risks and has taken steps to address them, such as releasing guidance on the appropriate oversight of contractors and the work they perform. According to the Coast Guard, it currently has 81 interagency agreements, memorandums of agreement, and other arrangements in place primarily with DOD to support its major acquisition programs. Support from DOD ranges from acquiring products and services from established DOD contracts to using the Navy’s engineering and testing expertise. For example, the Coast Guard benefited from discounts by coordinating C-130J aircraft contracting efforts through the Air Force acquisition office rather than contracting directly with the aircraft manufacturer. To leverage Navy engineering and testing expertise, most Coast Guard major acquisition programs use the Navy’s Commander, Operational Test and Evaluation Forces, to support test activities. Coast Guard program managers, however, do not have a systematic way to gain insight into the existence and details of such agreements. According to Coast Guard contracting officials, the Coast Guard recently began to develop a database of all interagency agreements with DOD and other agencies, but at this point program staff have access to only 5 of the approximately 81 agreements. Today’s report contains a recommendation that the Commandant of the Coast Guard take steps to ensure that all interagency agreements are captured in a database or other format and to make this information readily accessible to program staff. DHS agreed with the recommendation. We have previously reported that the Coast Guard has gained insights into the risks it faces in managing its major acquisitions.
At the same time, most major programs continue to experience challenges in program execution, resources, and schedule. The Coast Guard assesses program execution using a composite metric that includes the following factors: earned value management, a performance assessment, logistics assessment, testing status, risk assessment, and technical maturity. It also assesses resources using a composite metric that includes several factors, such as budgeting, funding, staffing, and contractor health (that is, contractor personnel and facilities). These challenges are exacerbated by the Coast Guard’s budget planning, which includes developing capital investment plans that project outyear funding levels. The Coast Guard has reported that projected funding levels in the fiscal years 2011-2015 capital investment plan were lower than previously planned for some major acquisition programs. This plan includes Deepwater Program assets as well as other acquisitions. Figure 1 illustrates these risks for each major acquisition program, including programs experiencing instability due to reduced projected funding levels. To support its role as systems integrator, the Coast Guard planned to complete a fleet mix analysis in July 2009 to eliminate uncertainty surrounding future mission performance and to produce a baseline for the Deepwater acquisition. We previously reported that the Coast Guard expected this analysis to serve as one tool, among many, in making future capability requirements determinations, including future fleet mix decisions. The analysis, which began in October 2008 and is now termed fleet mix analysis phase 1, was led by the Coast Guard directorate responsible for identifying and providing capabilities. In July 2010, we reported that while the Coast Guard had not yet released the results, officials told us that the analysis considered the 2007 Deepwater baseline to be the “floor” for asset capabilities and quantities and did not impose financial constraints on the outcome.
The Coast Guard initiated a second phase of the analysis to impose cost constraints. We recommended in our July 2010 report that since the 2007 DHS-approved baseline of $24.2 billion was no longer feasible because of cost growth, the Coast Guard should conduct a comprehensive review of Deepwater cost, schedule, quantities, and mix of assets needed to meet mission needs, identify trade-offs given fiscal constraints, and report the results to Congress. The Coast Guard’s efforts to date have not addressed this recommendation. We recently obtained and analyzed the phase 1 fleet mix analysis. We found that to conduct this analysis, the Coast Guard assessed asset capabilities and mission demands to identify a fleet mix—referred to as the objective fleet mix—that would meet long-term strategic goals. Given the significant increase in the number of assets needed for this objective fleet mix from the approved Deepwater program of record—the $24.2 billion baseline—the Coast Guard developed, based on risk metrics, incremental fleet mixes to bridge the two. Table 1 shows the quantities of assets for each incremental mix, according to the Coast Guard’s analysis. Phase 1 also analyzed the performance of these fleet mixes to gain insight into mission performance gaps. However, the analysis was not cost constrained, as noted above. For instance, the Coast Guard estimated that the costs associated with the objective fleet mix could be as much as $65 billion. This is approximately $40 billion higher than the DHS-approved $24.2 billion baseline. As a result, as we reported last year, Coast Guard officials stated that they do not consider the results to be feasible because of cost and do not plan to use them to provide recommendations on a baseline for fleet mix decisions. In May 2010, the Coast Guard undertook phase 2, a cost-constrained fleet mix analysis. 
Officials responsible for the analysis explained that it will primarily assess the rate at which the Coast Guard could acquire the Deepwater program of record within upper and lower bounds on annual acquisition costs. They told us that the lower- and upper-bound constraints are $1.2 billion and $1.7 billion annually, respectively; however, the basis for selecting these cost constraints is not documented. Based on our review of recent budget data, this upper bound for Deepwater is more than Congress has appropriated for the Coast Guard’s entire acquisition portfolio in recent years. Moreover, Coast Guard officials stated that this analysis will not reassess whether the current program of record is the appropriate mix of assets to pursue and will not assess any mixes smaller than the current program of record. Alternative fleet mixes will be assessed, but these mixes are based on purchasing additional assets after the program of record is acquired, if funding remains within the yearly cost constraints. Coast Guard officials stated that they are only analyzing the program of record or a larger fleet mix because they found that the first phase of the analysis validated pursuing, at the minimum, the program of record. The Coast Guard expects to complete its phase 2 analysis in the summer of 2011. Because fleet mix analysis phase 2 will not assess options lower than the program of record, it will not prepare the Coast Guard to make the trade-offs that will likely be needed in the current fiscal climate. Furthermore, it is our understanding that DHS is conducting a study examining the mix of surface assets, which is expected to be completed later this year. As part of our ongoing work, we will continue to monitor these efforts as they relate to the fleet mix analysis. In conclusion, I would like to emphasize several key points as we continue to review the Coast Guard’s management of acquisitions.
It is important to recognize that the Coast Guard continues to make progress in strengthening its capabilities to manage its acquisition portfolio by updating acquisition policies and practices, reducing vacancies in the acquisition workforce, and leveraging DOD contracts and resources to help support its major acquisitions. Nevertheless, the Coast Guard still faces significant challenges in carrying out these major acquisitions within a fiscally constrained environment, especially given continued cost growth and schedule delays that are exacerbated in part by unrealistic budget plans. Additionally, as costs continue to grow and capabilities are delayed, the Coast Guard has yet to consider the trade-offs in capabilities, quantities, and costs of the Deepwater assets—a significant portion of its major acquisition portfolio—in order to identify an affordable fleet. We expect to continue reviewing and reporting on its progress in this regard. Chairman LoBiondo, Ranking Member Larsen, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. If you have any questions on matters discussed in this statement, please contact John P. Hutton at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this testimony include Michele Mackin, Assistant Director; John Neumann, Assistant Director; Jessica Drucker; Laurier Fish; Carlos Gomez; Kristine Hassinger; Morgan Delaney Ramaker; William Russell; Molly Traci; and Rebecca Wilson. The NSC is intended to be the flagship of the Coast Guard’s fleet, with an extended on- scene presence, long transits, and forward deployment. The cutter and its aircraft and small boat assets are to operate worldwide. 
The OPC is intended to conduct patrols for homeland security functions, law enforcement, and search and rescue operations. It will be designed for long-distance transit, extended on-scene presence, and operations with multiple aircraft and small boats. The FRC, also referred to as the Sentinel class, is conceived as a patrol boat with high readiness, speed, adaptability, and endurance to perform a wide range of missions. The MEC sustainment project is intended to improve the cutters’ operating and cost performance by replacing obsolete, unsupportable, or maintenance-intensive equipment. The PB sustainment project is intended to improve the boats’ operating and cost performance by replacing obsolete, unsupportable, or maintenance-intensive equipment. The MPA is a fixed-wing transport and surveillance aircraft intended to be used to perform search and rescue missions, enforce laws and treaties, and transport cargo and personnel. The HC-130J is a four-engine turbo-prop aircraft that the Coast Guard has deployed with improved interoperability, Command, Control, Communications, Computer, Intelligence, Surveillance, and Reconnaissance (C4ISR), and sensors to enhance surveillance, detection, classification, identification, and prosecution. The HC-130H is the legacy Coast Guard long-range surveillance aircraft, which the Coast Guard intends to update in multiple segments. The HH-65 Dolphin multi-mission cutter helicopter is the Coast Guard’s short-range recovery helicopter. It is being upgraded to improve its engines, sensors, navigation equipment, avionics, ability to land on the NSC, and other capabilities in multiple segments. The HH-60 is a medium-range recovery helicopter designed to perform search and rescue missions offshore in all weather conditions. The Coast Guard has planned upgrades to the helicopter’s avionics, sensors, radars, and C4ISR systems in multiple segments. The land-based and cutter-based UASs are in the Need phase.
The UAS strategy is to acquire land-based UASs and low-altitude cutter-based tactical UASs to fulfill mission requirements while emphasizing (1) commonality with existing Department of Homeland Security and Department of Defense programs, (2) ensuring that projects are mature, and (3) where possible, leveraging other government organizations’ UAS development and nonrecurring engineering costs. The RB-M is intended to replace the aging 41-foot utility boats and other medium nonstandard boats. The Coast Guard is incrementally acquiring C4ISR capabilities, including upgrades to existing cutters and shore installations, acquisitions of new capabilities, and development of a common operating picture to provide operationally relevant information and knowledge across the full range of Coast Guard operations. CG-LIMS will replace or integrate legacy logistics business processes and their supporting information systems. NAIS is a data collection, processing, and distribution system that provides information to enhance safety of navigation and improve Maritime Domain Awareness. IOC is intended to improve operational capabilities, situational awareness, tactical decision making, and joint, coordinated emergency response. Rescue 21 is an advanced command, control, and communications system intended to improve the Coast Guard’s search and rescue mission by leveraging direction-finding technology to more accurately locate the source of distress calls. | The U.S. Coast Guard manages a broad major acquisition portfolio.
GAO has reported extensively on the Coast Guard's significant challenges with its major acquisition programs, including its Deepwater Program. GAO has also recognized steps the Coast Guard has taken to improve acquisition management. Additionally, GAO has recommended that the Coast Guard complete a review of the Deepwater Program to clarify the mix of assets needed to meet mission needs and the trade-offs involved, considering fiscal constraints, because the program had exceeded its $24.2 billion baseline. This testimony updates (1) Coast Guard efforts to manage major acquisitions, (2) challenges programs are facing in the areas of cost and schedule, and (3) the status of the Deepwater fleet mix analysis. This statement is largely based on GAO-11-480, which is being issued today. In that report, GAO recommended that the Coast Guard formalize its database of agreements with the Department of Defense (DOD). The Department of Homeland Security agreed with the recommendation. This statement also draws from prior GAO reports and ongoing work related to Deepwater. GAO reviewed the first phase of the Coast Guard's fleet mix analysis, contract documents, and budget information. GAO also interviewed Coast Guard officials responsible for conducting the fleet mix analysis. For the new information, GAO obtained Coast Guard views and incorporated technical comments where appropriate. The Coast Guard continues to improve its acquisition management capabilities by updating policies, reducing acquisition workforce vacancies, and leveraging DOD contracts. In November 2010, the Coast Guard updated its "Major Systems Acquisition Manual" to further incorporate best practices and respond to prior GAO recommendations, such as aligning the roles and responsibilities of independent test authorities to DHS standards. Additionally, the Coast Guard reduced its acquisition workforce vacancies from about 20 to 13 percent from April through November 2010.
Shortfalls in hiring staff for certain key areas persist, though, and some programs continue to be affected by unfilled positions. The Coast Guard has entered into 81 memorandums of agreement and other arrangements--primarily with DOD--to support its major acquisition programs, but program staff currently have access to only 5 of the 81 agreements. Most of the Coast Guard's 17 major acquisition programs continue to experience challenges in program execution, schedule, and resources. Furthermore, the Coast Guard's unrealistic budget planning exacerbates these challenges. When programs receive funding lower than planned, schedule breaches and other problems are more likely to occur. In fact, 4 of the major acquisition programs have reported a baseline breach caused, at least in part, by reduced projected funding levels. Additionally, projected funding levels in the Coast Guard's fiscal years 2012-2016 capital investment plan are significantly higher than budgets previously appropriated or requested and therefore may be unrealistic. This is particularly true given the rapidly building fiscal pressures facing the nation. For example, the Coast Guard plans to request $2.35 billion for acquisitions in fiscal year 2015--including funding for construction of three major Deepwater surface assets--but the agency has not received more than $1.54 billion in any recent year. The Coast Guard has developed action items to address budget planning challenges. In July 2010, GAO recommended that because of significant cost growth in the Deepwater Program, the Coast Guard should review the cost and mix of assets and identify trade-offs given fiscal constraints. The Department of Homeland Security agreed with the recommendation; however, the Coast Guard has not yet implemented it. 
The Coast Guard began a fleet mix analysis in 2008 that considered the current Deepwater Program to be the "floor" for asset capabilities and quantities and did not impose cost constraints on the various fleet mixes. Consequently, the results will not be used as a basis for trade-off decisions. The Coast Guard has now begun a second analysis, which includes an upper cost constraint of $1.7 billion annually--more than Congress has appropriated for the entire Coast Guard acquisition portfolio in recent years. Further, Coast Guard officials told GAO that this analysis will not assess options lower than the current program of record. It therefore will not prepare the Coast Guard to make the trade-offs that will likely be needed in the current fiscal climate. The Coast Guard expects to complete the analysis this summer.
BSA, enacted by Congress in 1970, authorizes the Secretary of the Treasury to issue regulations requiring financial institutions to retain records and file reports useful in criminal, tax, and regulatory investigations. Following the September 11, 2001, terrorist attacks, Congress passed the USA PATRIOT Act, which, among other things, amended BSA and expanded the number of industries subject to BSA regulation. Title III of the act expanded BSA powers to combat terrorist financing and required financial institutions to establish proactive anti-money laundering programs. In addition, the act expanded reporting requirements and allowed the records and reports collected under BSA to be used in the conduct of intelligence or counterintelligence activities and to protect against international terrorism. The BSA framework focuses on financial institutions’ record keeping and reporting requirements to create a paper trail of financial transactions that federal agencies can trace to deter illegal activity and apprehend criminals. Under the BSA framework, primary responsibility rests with the financial institutions themselves in gathering information and passing it to federal officials. “Financial institutions” include both banking institutions and NBFIs. Banking institutions include commercial banks and trusts, savings and thrifts, branches of foreign chartered banks doing business in the United States, and credit unions. NBFIs include MSBs, casinos, and some credit unions. MSBs include businesses that transmit money, cash checks, and engage in certain financial transactions. MSBs are the largest and most diverse group of entities that qualify as NBFIs. Table 1 describes the different types of entities that qualify as NBFIs not otherwise regulated by a federal functional regulator.
All financial institutions subject to BSA requirements must implement internal controls, policies, and procedures; maintain records of transactions; and file reports of cash transactions over the $10,000 threshold and of suspicious activities. The USA PATRIOT Act required all financial institutions to develop written anti-money laundering compliance programs that detail internal policies, procedures, and internal controls. Each program must designate a compliance officer, provide ongoing training of pertinent personnel, and provide for independent reviews whose scope and frequency are commensurate with the risk of the financial services provided. Registration, record keeping, and reporting are the core elements of anti-money laundering requirements for MSBs. Certain MSBs are required to register with the Secretary of the Treasury and renew those registrations every 2 years. In addition, MSBs that sell money orders, travelers’ checks, or other instruments for cash must verify the identity of each customer and create and maintain a record of each purchase made in cash in amounts from $3,000 to $10,000. Also, financial institutions and certain types of businesses are required to submit reports on cash transactions over the $10,000 threshold and on transactions of a suspicious nature. Millions of these reports are filed each year. For example, in 2005 over 16 million BSA reports were filed by financial institutions. Certain civil and criminal penalties can be levied against financial institutions for violating BSA reporting requirements, with fines ranging from $500 for negligence to $500,000, 10 years in jail, or both for certain willful violations. Appendix III discusses the compliance reporting responsibilities in more detail. FinCEN’s role is to oversee administration of BSA governmentwide. In this role, FinCEN develops policy and provides guidance to other agencies, as shown in figure 1.
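As a simplified illustration of the reporting and recordkeeping thresholds described above, the triggers can be encoded as a small rule function. This is a hedged sketch only: the function name and rule encoding are illustrative, and the actual regulations contain many qualifications and exemptions this sketch omits.

```python
# Illustrative-only encoding of the BSA thresholds described in the text:
# cash transactions over $10,000 trigger a currency transaction report,
# cash purchases of monetary instruments from $3,000 to $10,000 trigger a
# customer identification record, and suspicious activity triggers a SAR.

def bsa_obligations(amount, is_cash, suspicious=False):
    """Return which BSA obligations a single transaction could trigger."""
    obligations = []
    if is_cash and amount > 10_000:
        obligations.append("currency transaction report")
    if is_cash and 3_000 <= amount <= 10_000:
        # Applies to MSBs selling money orders, travelers' checks, etc.
        obligations.append("customer identification record")
    if suspicious:
        obligations.append("suspicious activity report")
    return obligations

print(bsa_obligations(12_500, is_cash=True))  # ['currency transaction report']
print(bsa_obligations(5_000, is_cash=True))   # ['customer identification record']
```

A transaction below every threshold but flagged as suspicious would still produce a suspicious activity report, which is why SAR filings are independent of the dollar cutoffs.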
However, FinCEN also relies on other agencies in implementing the BSA framework, including (1) ensuring compliance with BSA requirements to report certain financial transactions, (2) conducting investigations of criminal financial activity, and (3) collecting and storing the reported information. IRS is involved in all three of these areas. As administrator of BSA, FinCEN’s compliance role is to develop regulatory policies for agencies that examine financial institutions and businesses for compliance with BSA laws and, when appropriate, assess civil penalties against noncompliant institutions. FinCEN develops and issues BSA regulatory requirements and provides guidance to financial institutions that are subject to those requirements. FinCEN is also responsible for overseeing agency compliance examination activities and provides these agencies with assistance in educating institutions on their BSA responsibilities. As highlighted in the compliance examiners section of figure 1, IRS is one of eight agencies that actually conduct the compliance examinations that FinCEN oversees. The Office of Fraud/BSA, within SB/SE, conducts examinations of NBFIs, including MSBs, that are not regulated by another federal agency. Appendix III discusses the compliance responsibilities of MSBs in more detail. FinCEN is responsible for supporting and networking law enforcement at the federal, state, and local levels. FinCEN’s network exceeds 180 law enforcement agencies and includes CI, the Federal Bureau of Investigation, the Drug Enforcement Administration, Immigration and Customs Enforcement, state and local police departments and investigative bureaus, attorney general and district attorney offices, and foreign authorities. FinCEN provides investigative leads to support financial criminal investigations and offers a variety of analytical products on trends and patterns that can be used by law enforcement to more effectively target their investigations.
As the enforcement arm of IRS, CI has the authority to investigate criminal violations of BSA laws. Like other law enforcement agencies, CI uses financial intelligence, including data provided on BSA reports, to build investigations and prepare cases for prosecution. The law enforcement section of figure 1 highlights how FinCEN, IRS CI, and the broader law enforcement community fit into the BSA framework. FinCEN has responsibility for overseeing the management of BSA data, but from an operational standpoint does not collect, store, or maintain the official data that are reported by financial institutions. IRS’s Enterprise Computing Center at Detroit (ECC-DET), under a long-standing cooperative arrangement with FinCEN, has been the central point of collection and storage of these data. ECC-DET maintains the infrastructure needed to collect the reports, convert paper and magnetic tape submissions to electronic media, and correct errors in submitted forms through correspondence with filers. As illustrated in the data management section of figure 1, BSA data are processed and warehoused in IRS’s Currency and Banking Retrieval System and are accessed through a Web-based interface known as WebCBRS. IRS examiners and investigators access WebCBRS directly through IRS’s intranet. Non-IRS law enforcement users access BSA data through FinCEN’s Gateway computer system. Secure Outreach functions as a portal through FinCEN’s information technology infrastructure to BSA data housed at ECC-DET. Despite many improvements, IRS does not yet have an effective BSA compliance program. An effective IRS compliance program would require identifying the population of NBFIs and then periodically testing whether these NBFIs are complying with their reporting and other BSA requirements. Several efforts have been made to estimate the NBFI population, but all of these estimates have weaknesses.
However, IRS and other knowledgeable observers agree that IRS has only identified a portion of the population. No recent studies have been conducted that estimate the total population of NBFIs; however, a number of efforts have been made to estimate the number of MSBs, the largest group of NBFIs subject to BSA requirements. A 1997 study conducted by a FinCEN consultant estimated the existence of approximately 158,000 MSBs. One IRS official within the Office of Fraud/BSA estimates there may be approximately 160,000 MSBs. In 2005, another FinCEN study estimated the population to be as high as 200,000. Officials from FinCEN, IRS, Treasury, TIGTA, and Treasury’s OIG agree that IRS has only identified part of the NBFI population. Several factors contribute to IRS’s difficulty in identifying NBFIs. NBFIs, especially MSBs, are inherently difficult to identify because of the wide range of sizes, structures, and financial activities they conduct. Unlike traditional financial institutions, such as federally insured banks, many MSBs are small, independently owned businesses in which financial services are offered as a secondary business activity. For example, many grocery stores, convenience stores, gas stations, and liquor stores would be considered MSBs because they offer check cashing, money order, or wire transfer services, even though the primary activity of these businesses is the sale of consumer goods. In a 2005 report, the OIG cited language barriers and the limited financial proficiency of some business owners as reasons many MSBs are not registered, and therefore have not been identified. The OIG also found that regulations and guidance for MSBs can be confusing and easily misinterpreted, thus contributing to the challenge of identifying MSBs. The report states that the distinction FinCEN makes between an MSB principal and an agent of that principal is not always understood by the MSB population and is difficult to verify other than through an on-site examination.
Some BSA rules, such as the registration requirement, are applicable to principals—the entities issuing financial instruments—and some are applicable to agents—businesses authorized to sell the issuers’ financial instruments. Another confusing aspect of the MSB requirements is that businesses whose money services transactions are less than $1,000 per day per person are generally not considered MSBs. As with the agent exemption, the dollar threshold is difficult to verify other than through an on-site examination. The OIG found that FinCEN had plans to assess whether agents of MSBs should be required to register; however, FinCEN has not taken action to implement these plans. IRS officials in the Office of Fraud/BSA support a change that would require all MSBs to register, regardless of whether they are principals or agents, because it would make identification easier. FinCEN officials, however, said that their first priority is to ensure that the current list of MSB registrations is accurate. Therefore, FinCEN does not have a time frame for revising MSB regulations and guidance, including registration requirements. Identifying NBFIs, and particularly MSBs, is challenging and resource intensive—both FinCEN and IRS have responsibility in this area. IRS uses CBRS, public and commercial databases, Internet searches, and the yellow pages to identify potential MSBs. FinCEN searches past BSA reports and gets referrals from other law enforcement officials about potential NBFIs and MSBs. However, not all businesses identified from these sources as potential NBFIs are actually subject to BSA requirements. IRS has identified 107,000 potential NBFIs, but has not been able to determine how many of these businesses are subject to BSA. Whenever IRS identifies a new business it believes may be an NBFI, it sends the business a letter. This letter explains that IRS believes the business is engaged in an activity that qualifies it as an NBFI subject to BSA requirements.
IRS officials said they are uncertain about the effectiveness of this letter and that some businesses do not reply. Further, these officials said often the only way to confirm whether a business is subject to BSA requirements is to conduct an on-site examination, a labor-intensive and time-consuming process. IRS officials in the Office of Fraud/BSA told us that accessing IRS’s tax return databases might help identify additional potential NBFIs. The Office of Fraud/BSA is currently unable to use tax return information to identify businesses that may be subject to BSA requirements because IRS is prohibited by law from using tax return information for nontax purposes, with only a few exceptions. The confidentiality of tax information is considered crucial for promoting voluntary compliance by taxpayers, and legislative proposals for exceptions have been strictly scrutinized by Treasury before submission to Congress. IRS currently lacks empirical evidence that would support making a case to grant an exception (for example, evidence on the number of potential NBFIs that could be identified from tax data but not from other sources), and IRS has not decided whether it should pursue obtaining access in an effort to develop this evidence. Appendix IV provides more detail on taxpayer disclosures and the criteria the executive branch considers before submitting a proposal to Congress for granting exceptions. In another effort to identify potential NBFIs, FinCEN and IRS have recently agreed to a number of MOUs with state financial regulators to improve coordination and information sharing. Almost all MOUs are less than 2 years old, and according to IRS, FinCEN, and officials representing the states that have signed MOUs, it is still too early to tell how effectively they will be carried out. Successfully implementing these MOUs and sustaining the partnerships they establish will be an ongoing challenge for IRS, FinCEN, and the states involved. 
For example, states have differing definitions and licensing requirements for MSBs, which can make it difficult to ensure consistency in the reporting of information. Additionally, IRS officials said that meeting the information-sharing requirements in the MOUs is time intensive because it requires manually gathering large amounts of information from different parts of the organization. The benefits to IRS and the states, thus far, have not been determined. IRS, FinCEN, and the states have only recently begun to implement the agreements in the MOUs. Therefore, little has been done to evaluate the usefulness of the information that is being shared. Appendix V provides additional information on the MOUs. IRS does not have a statistically valid risk-based approach for targeting NBFIs for BSA compliance examinations, but it is working on developing such an approach for a segment of MSBs. A risk-based approach is important for selecting NBFIs for compliance examinations because IRS has resources to examine only a small fraction of NBFIs each year. For example, in 2005, IRS completed 3,712 examinations—3.5 percent of the 107,246 potential NBFIs in its database. A risk-based approach uses statistically valid risk factors to select NBFIs for compliance examinations. Statistically valid risk factors can be used to better target examinations on those businesses that pose the greatest risk of noncompliance with BSA requirements. As a result, IRS would devote fewer of its scarce resources to examining compliant NBFIs. One approach to statistically validating the risk factors involves testing them on a sample of NBFIs representative of the population and determining the extent to which the results correlate with businesses’ actual noncompliance with BSA requirements. IRS already uses a risk-based approach when selecting individual tax returns for audit.
Its approach involved statistically validating a set of risk factors using a relatively small but representative sample of individual tax returns. IRS now uses those risk factors to select individual tax returns for audit from the entire population. We, as well as OMB and TIGTA, have recognized the value of risk-based approaches. Earlier this year, we reported that risk management, including risk assessment, is a widely endorsed strategy for helping managers and policymakers make decisions about allocating finite resources and taking actions under conditions of uncertainty. OMB also recommends making decisions based on risk assessments. As far back as 1986, we concluded that BSA regulators would use their resources better by targeting examinations on entities with a high potential for problems. In 2004, TIGTA reported that a risk-based, data-driven process to select the potentially most noncompliant MSBs for compliance checks could be a more effective selection method than IRS’s existing process. IRS’s approach for selecting NBFIs for examination is based mainly on the judgment and experience of IRS managers and examiners. Based on that judgment and experience, IRS’s Office of Fraud/BSA has developed a set of risk factors that assist in prioritizing and selecting NBFIs for examination. However, the judgment and experience of managers and examiners is based on past compliance cases that are not a representative sample of NBFIs. Further, IRS studied the risk factors to help develop rules for case selection and used experienced examiners to score these factors based on their potential for producing cases involving noncompliant businesses. IRS has not conducted a test to statistically validate these risk factors. IRS recognizes that its risk factors have not been tested and validated. It has a research project under way to test whether the current risk factors are more effective than chance at identifying noncompliant MSBs. 
IRS selected a random sample of potential MSBs from CBRS. Then each MSB in the sample was scored for risk of noncompliance using the risk factors. Beginning in January 2007, IRS will examine each MSB in the sample to determine whether actual noncompliance exists. The examination results will be compared to the risk scores to determine the effectiveness of the risk factors at predicting noncompliance. The results could also be used to make improvements to the factors. The research project is slated for completion in December 2007. If the project is completed on time, IRS officials expect any changes made to the risk factors would go into effect in time to guide the selection of cases for examination in calendar year 2008. IRS’s research project is a step in the right direction. For MSBs in CBRS, it will provide empirical validation for IRS’s current risk factors or a basis for improving them. However, this risk-based approach will continue to have limitations, including the following. IRS’s research study was not designed to be representative of all the potential MSBs identified by IRS. IRS is testing the validity of the risk-based selection process by sampling from a subpopulation of potential MSBs, not the entire population. The study samples from a list of 59,701 potential MSBs entered into CBRS in 2004 or 2005 because they either filed BSA-required reports, such as MSB registrations, currency transaction reports (CTRs), and suspicious activity reports (SARs), or were named in such reports by third parties. However, the population of potential MSBs that IRS has identified is larger. IRS has approximately 105,710 potential MSBs in the Title 31 database and is responsible for determining whether all of them are complying with BSA. According to IRS officials, IRS did not draw from the Title 31 database to conduct this study because inconsistency in the quality and completeness of the information it contains on NBFIs limited its usefulness as a reliable source.
IRS’s decision to use CBRS as the source of the study is a valid one. However, because the research study does not address the entire known population, IRS will not know how useful the risk factors are for producing cases within the segment of the population it did not study. IRS does not have plans for validating the risk factors for the entire known population of MSBs. IRS’s risk-based approach to selecting MSBs for compliance examinations necessarily ignores the unknown part of the population. As discussed previously, there is widespread agreement that despite its efforts to date, IRS has not identified all MSBs. As IRS uses new information sources and methods to identify additional MSBs, the risk factors may not take into account the characteristics of these previously unidentified MSBs. The only way to ensure IRS is adapting its risk-based selection process to reflect changes in the identified population of MSBs is to continue updating its risk assessments. IRS does not have plans for reassessing the validity of the risk factors as additional MSBs are identified. IRS’s study and the risk factors applied are only applicable to MSBs and do not take into account the risks of other NBFIs. IRS does not have a statistically validated risk-based approach for selecting casinos, wholesale jewelers, or insurance agents for examination. In addition, as more types of NBFIs are required to comply with BSA requirements, IRS will be required to incorporate those businesses into its compliance examination efforts. From a long-term perspective, a risk-based approach that looks across the different segments of the NBFI population could result in a more effective use of resources for compliance examination. IRS does not have plans for a risk assessment of the full range of NBFIs. Addressing the limitations in IRS’s current risk-based approach for targeting NBFIs for examination will require time and resources. 
Identifying unknown NBFIs is inherently challenging and gradual—no easy solution exists for addressing this problem. Compliance research is costly; IRS estimates the research that is currently under way will cost approximately $1.7 million. Furthermore, IRS’s ability to mount separate efforts to deal with the range of limitations will be constrained by management capacity and research capacity. The benefits of a statistically valid risk-based approach to ensuring compliance are potentially very great. The nation would have data-based assurance that the NBFI compliance examination program is targeting its resources where the risks of NBFI noncompliance, and the resulting lack of reporting about suspicious financial transactions, are known to be greatest. In October 2004, IRS established the Office of Fraud/BSA within SB/SE. This office is responsible for ensuring NBFIs comply with BSA requirements. IRS appointed an executive to oversee the office. This executive reports directly to the SB/SE Commissioner. The establishment of this office came, in part, in response to TIGTA findings that IRS needed to strengthen oversight of the BSA compliance program. For example, prior to reorganizing, IRS did not have examiners dedicated specifically to conducting BSA compliance examinations. Instead, according to IRS officials, examinations were conducted by tax examiners who split their time among tax examinations, BSA examinations, and collections activities. With the establishment of the Office of Fraud/BSA, IRS dedicated over 300 staff in 33 field offices specifically to conducting BSA compliance examinations. The dedication of these staff reflects IRS’s decision to place a greater priority on meeting its BSA examination responsibilities. Since establishing the Office of Fraud/BSA and dedicating staff specifically to BSA issues, IRS has centralized and increased uniformity of BSA compliance examinations. 
However, the program still has management limitations, and the improvements do not address the significant problems that IRS has in identifying NBFIs and targeting compliance examinations. Table 2 shows the improvements IRS management has made and some remaining management limitations. Before establishing the Office of Fraud/BSA, IRS did not have centrally managed, or consistently implemented, BSA examination policies and procedures. IRS lacked formal guidance for documenting BSA compliance examinations and determining whether a case warranted referral for civil or criminal enforcement by FinCEN or CI, respectively. Since establishing the Office of Fraud/BSA, IRS has established uniform instructions that compliance examiners use for requesting records and examining institutions for compliance with BSA requirements. Additionally, IRS has developed better procedures for determining whether a case has enough support to warrant a referral for civil enforcement by FinCEN or criminal enforcement by CI. According to FinCEN officials, the documentation for cases referred for civil penalty assessment has improved significantly as a result of these changes. CI officials have also noticed improvements in case documentation and referrals that they attribute to the establishment of the new organization. However, many of the changes to the processes and guidance have not been incorporated into the Internal Revenue Manual, IRS’s official compilation of internal policies and procedures. Instead, many of IRS’s new or revised policies and procedures are distributed to compliance examiners via memorandums and electronic mail. Distributing guidance in this manner makes it difficult to keep track of the changes and ensure consistent understanding and implementation over the long term. IRS recognizes these challenges and has slowly made progress in generating an update, but this process began in 2004 and was not complete as of November 2006.
IRS could not provide a definitive deadline for when the updated Internal Revenue Manual would be published. IRS’s outreach is conducted by the SB/SE Stakeholder Liaison Office. The liaison office works with FinCEN in coordinating the development and distribution of standardized and consistent information through brochures, newsletters, presentations, and other materials. However, IRS has not provided the NBFI community with a comprehensive source of information that can be used to guide efforts to develop a program that meets BSA requirements. In June 2005, the Federal Financial Institutions Examination Council (FFIEC) addressed this issue for the agencies responsible for conducting BSA examinations of banks and similar financial institutions. FFIEC, with support from FinCEN, developed the Bank Secrecy Act/Anti-Money Laundering Examination Manual. Although this manual is intended to guide examiners when examining financial institutions for compliance with BSA requirements, the banking industry has applauded its development and publication because it makes examination procedures transparent and provides excellent guidance on what is expected of banks. Despite agreement by FinCEN and IRS that a similar manual is needed for the NBFI community, such a manual has not been developed. According to IRS officials, they have recently hired a training coordinator who will be responsible for developing this manual. However, no timeline has been established for when the process for developing this manual will begin. Prior to the establishment of the Office of Fraud/BSA, the management of BSA compliance program information was decentralized. Each of the 16 field offices maintained its own, separate lists of potential NBFIs and information on the examinations it was conducting. Once the new office was established, IRS took steps to combine all of this information into one centralized database, the Title 31 database. 
The Title 31 database, however, was not built using a disciplined systems development process and is not supported by IRS Modernization and Information Technology Services (MITS). As a result, the database potentially contains duplicate, outdated, and sometimes inaccurate information from the 16 merged systems. IRS officials believe many of these issues have been addressed but could not validate that all have been. Further, IRS officials stated that the database has other limitations, including (1) limited capacity to handle the number of fields required to maintain and close cases, (2) issues with connectivity across field locations, (3) limited controls to prevent the entry of invalid information, and (4) system instability. IRS has obtained MITS support in creating a new system to maintain the information in the Title 31 database. However, IRS will continue operating within existing system constraints until the new system is fully operational. IRS has made progress in tracking and measuring program activities, but lacks a measure of the extent to which NBFIs comply with BSA requirements. Prior to the new organization, IRS had only one consistently measured performance goal for the BSA compliance program—delivery of direct examination staff years. In a 2004 review, TIGTA found that IRS needed to establish performance indicators that measure case results and their cumulative impact on compliance. For fiscal year 2005, IRS established a suite of measures that it is using to track and assess program performance. Table 3 lists these measures, the fiscal year 2005 results, and the fiscal year 2006 goals and results. The IRS performance measures in table 3 do not provide information on the rate of NBFI compliance. Although measuring compliance rates can be challenging, IRS has done so for individual taxpayer compliance under Title 26.
IRS’s research to validate the risk factors it uses to target MSB examinations could also be used to estimate a compliance rate for MSBs in CBRS. This compliance rate would not be generalizable to the entire MSB or NBFI population; however, it would allow IRS to get a better understanding of the extent to which the MSB population captured within CBRS complies. Without a measure of the compliance rate, IRS and external parties such as Congress will not know the effect, over time, of IRS’s efforts to ensure compliance. IRS has no plans to measure the NBFI compliance rate. FinCEN and IRS have taken a number of steps to improve efforts to ensure that NBFIs comply with BSA, but they lack a documented and coordinated strategy for moving forward. Our previous discussion shows that many additional steps could be taken to identify the population of NBFIs, ensure compliance of those NBFIs that have been identified, and strengthen management of IRS’s BSA compliance program. Addressing these limitations will be challenging and will take time. The challenges are compounded by the fact that the types of NBFIs that are IRS’s responsibility under the law are growing. Some actions to address these challenges could be taken by the agencies individually, but others will require a coordinated approach to be effective. Further, limited resources and time constraints mean that additional actions will have to be prioritized, alternatives will need to be considered, and trade-offs may need to be made. FinCEN and IRS do have some elements of a strategy to guide future efforts. However, FinCEN and IRS do not have a documented and coordinated strategy that prioritizes actions, lists time frames, and explains resource needs over multiple years. Without a strategy that prioritizes and guides IRS and FinCEN’s collective efforts to improve NBFI compliance, the risk is greater that noncompliance will go undetected and uncorrected. 
Noncompliance by NBFIs means that suspicious financial transactions, such as money laundering and terrorist financing, that occur at these institutions might go undetected. CI investigates individuals and businesses, including financial institutions, for BSA and money laundering violations, usually in conjunction with other tax law violations. BSA investigations constituted roughly 12 percent of CI’s direct investigative time in fiscal year 2006. Full-time equivalents (FTE) dedicated to BSA enforcement from 2002 to 2006 remained relatively unchanged, as shown in table 4. CI highlighted enhancing BSA compliance in its strategy and program plan for fiscal years 2005 through 2006. In the plan, CI outlines its strategies to support IRS’s strategic plan goal to enhance enforcement of tax laws. One of CI’s major compliance strategies involves working with Treasury, the Department of Justice, and other law enforcement partners, among other things, to enhance BSA compliance efforts. CI recently introduced new performance measures based, in part, on a previous TIGTA report and an OMB review. During the OMB review, Treasury, CI, and OMB jointly determined that the old measure of completed investigations was insufficient to measure program effectiveness. As a result, CI introduced three new annual performance measures: the number of convictions (a measure of impact on compliance), the conviction rate (a measure of quality of investigations), and conviction efficiency (a measure of cost efficiency). CI reported 296 convictions for BSA violations during fiscal year 2006. From fiscal years 2002 through 2006, convictions increased about 23 percent. CI investigates individuals and businesses for BSA or money laundering violations, but according to CI officials, agents do not typically investigate many financial institutions for Title 31 violations.
Generally, if an institution is the subject of an investigation, it is for failure to have an anti-money laundering program in place or because an individual within the institution is causing the institution to not file required forms. According to CI officials, structuring is the most common type of BSA violation CI investigates among individuals. Structuring occurs when a person conducts or attempts to conduct currency transactions at financial institutions for the purpose of evading the reporting requirements of BSA. Many BSA investigations involve structuring, failure to file reports on transactions or bulk cash, and smuggling activities, according to CI officials. BSA criminal violations are usually investigated in conjunction with other tax violations, according to CI officials. In one recent case, a sales executive for an international telecommunications company was sentenced to 24 months in prison and fined $20,000 in a money laundering case involving cash deposits. The sales executive structured bank deposits, making 31 cash deposits totaling over $250,000 to accounts in two different banks to avoid currency transaction reports being filed with IRS. The sales executive forfeited $59,400 and filed amended income tax returns to report an additional $250,000 in income that he was attempting to hide with his structuring activity. The case was developed from information reported in SARs. BSA convictions increased from fiscal years 2002 through 2006. Likewise, investigations completed and prosecutions recommended increased during the same period. Table 5 shows CI’s BSA investigations initiated, investigations completed, prosecutions recommended, and convictions. CI is a major user of BSA data and of IRS’s database that stores the data—CBRS. CI’s enforcement mission, coupled with its organizational location within IRS, places it in a unique position to use BSA data. CI queries CBRS more than any other federal, state, or local agency.
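The structuring pattern in the case described above, with many deposits kept just under the $10,000 currency transaction report threshold, lends itself to a simple aggregation screen. The sketch below is purely illustrative and is not IRS's or FinCEN's actual detection logic; the thresholds and sample data are assumptions.

```python
from collections import defaultdict

# Hypothetical sketch of a structuring screen: flag account holders
# whose individual cash deposits stay under the $10,000 currency
# transaction report threshold but whose sub-threshold deposits
# aggregate far above it. Illustrative only; not IRS's or FinCEN's
# actual detection logic.

CTR_THRESHOLD = 10_000

def flag_structuring(deposits, min_count=10, min_total=50_000):
    """deposits: iterable of (holder, bank, amount) tuples."""
    per_holder = defaultdict(list)
    for holder, bank, amount in deposits:
        if amount < CTR_THRESHOLD:          # only sub-threshold deposits
            per_holder[holder].append(amount)
    return {holder: (len(amts), sum(amts))
            for holder, amts in per_holder.items()
            if len(amts) >= min_count and sum(amts) >= min_total}

# A pattern like the case above: 31 deposits totaling over $250,000,
# split across two banks, each deposit kept under $10,000.
deposits = [("exec", "bank A" if i % 2 else "bank B", 8_065)
            for i in range(31)]
print(flag_structuring(deposits))   # {'exec': (31, 250015)}
```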
During fiscal year 2005, CI made about 57 percent of the over 1.5 million queries made of the system. Additionally, CI was responsible for more than 66 percent of the document viewing activity in CBRS. During 2006, CI transitioned to a new Web-based version of CBRS. CI officials reported the system has advantages for improving CI’s ability to develop investigative leads. One advantage is the ability to conduct searches within narratives on BSA reports. Analysts and investigators can now search narratives on SARs, for instance, for specific words; they were unable to do so under the old CBRS system. Another advantage cited is the ability to better use downloads of SAR data. With the Web-based system, an analyst or investigator can put downloads in Access or Excel. Once the data are in a spreadsheet or database application, analysts or investigators can easily look for trends in certain addresses or occupations. With the old CBRS system, the analyst had to print out downloads and manually look at the different fields of information from SARs. In 2003 FinCEN began an effort to reengineer BSA data management activities. However, the cornerstone of FinCEN’s reengineering effort, BSA Direct R&S, was permanently halted because of a multitude of problems. FinCEN made two mistakes in the early stages of its effort to reengineer BSA data management activities: it began reengineering without a comprehensive implementation plan and did not adequately communicate and coordinate with IRS. According to our Business Process Reengineering Assessment Guide, before an agency initiates business process reengineering, a comprehensive implementation plan should be developed that spells out the work that needs to be done. This plan should include time frames, milestones, decision points, and resource allocations.
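The analyst workflow described above for the Web-based CBRS, searching SAR narratives for keywords and then tallying a field such as occupation for trends, might look like the following sketch. The field names and sample records are hypothetical.

```python
import csv
from collections import Counter
from io import StringIO

# Sketch of the analyst workflow described above for the Web-based
# CBRS: search SAR narratives for keywords, then tally a field of
# interest for trends. Field names and sample records are hypothetical.

sample = StringIO("""occupation,address,narrative
used car dealer,12 Elm St,multiple cash deposits just under 10000
jeweler,9 Oak Ave,wire transfers to an offshore shell company
used car dealer,44 Main St,structured cash deposits across branches
""")
records = list(csv.DictReader(sample))

# Keyword search within narratives (not possible in the old CBRS).
hits = [r for r in records if "cash deposits" in r["narrative"]]

# Trend tally on a field such as occupation.
by_occupation = Counter(r["occupation"] for r in hits)

print(len(hits))                      # 2
print(by_occupation.most_common(1))   # [('used car dealer', 2)]
```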
Although FinCEN commissioned a series of studies to examine and recommend an approach to reengineering BSA data management activities, these studies were only recommendations and did not constitute a comprehensive plan for conducting the reengineering effort. Instead, FinCEN made the decision to move forward with one aspect of the broader reengineering effort, BSA Direct R&S, before establishing a comprehensive plan. FinCEN commissioned the MITRE Corporation to develop a comprehensive reengineering plan that would serve as a road map for the reengineering effort after the BSA Direct R&S project was well under way. Further, this plan was developed under the assumption that BSA Direct R&S would be completed successfully. FinCEN expected BSA Direct R&S to be the center of FinCEN’s broader reengineering effort and serve as the catalyst for its execution. FinCEN intended to establish the technology for implementing the reengineering effort before establishing the reengineering plan itself. We have found in examining reengineering and technology acquisition efforts that technology is an enabler of process reengineering, not a substitute for it. We have also found that acquiring technology in the belief that its mere presence will somehow lead to process innovation is a root cause of bad investments in information systems. FinCEN’s decision to implement one aspect of the reengineering effort, BSA Direct R&S, before developing a comprehensive plan for conducting the broader effort exemplifies this problem. FinCEN viewed BSA Direct R&S as a strategic initiative, as it was intended to eventually interface with other systems in order to facilitate all BSA reporting and data-related processes from IRS to FinCEN over time. FinCEN did not adequately communicate and coordinate its BSA data management reengineering efforts with IRS, namely efforts to develop new information systems used to house and disseminate BSA data.
Had better communication and coordination occurred, a more effective technology and business solution might have been achieved. The cornerstone of FinCEN’s effort to take control of all BSA data management responsibilities was the development of BSA Direct R&S, a new information system that was to store and disseminate all BSA data. At the same time, IRS developed its own system, WebCBRS, with many of the same capabilities. FinCEN did not actively engage in discussions with IRS about WebCBRS as it was being developed. FinCEN, IRS, and Treasury all have a role in the reengineering effort. However, FinCEN’s goal is to take over all BSA data management responsibilities currently conducted by IRS. Therefore, FinCEN is driving the reengineering effort and has responsibility for communicating and coordinating its activities to the other agencies. Key moments in the development of these two systems are documented in figure 2. In examining the above timeline, we identified at least three missed opportunities early in the implementation of the two projects where better planning and coordination might have resulted in more effective and efficient systems development efforts: In April 2002, Treasury, with FinCEN’s input, recommended IRS maintain its role in BSA data management; yet over the next 2 years FinCEN decided to pursue alternative approaches while IRS initiated the transfer of BSA data to WebCBRS, a new system. In the fall of 2003, FinCEN decided to launch the BSA Direct project just a month before ECC-DET at IRS secured additional funding and accelerated the development of WebCBRS with an anticipated completion of 2006 instead of 2009. FinCEN, however, justified the need for BSA Direct without fully accounting for (1) the expected capabilities that IRS’s WebCBRS system would provide and (2) IRS’s revised and more aggressive conversion schedule.
For example, part of FinCEN’s justification to OMB for BSA Direct was that it would allow IRS to discontinue the development of WebCBRS, potentially resulting in financial savings for the agency. However, officials at both FinCEN and IRS said no discussion on discontinuing IRS’s effort ever took place before this justification was presented. In December 2004, the Chief Information Officer (CIO) of Treasury issued a memorandum documenting key agreements between the department, IRS, and FinCEN on the future of BSA data management, but it is unclear how some of these agreements were actually implemented. For example, an agreement stated that IRS would be a preferred user of FinCEN’s system, yet IRS officials stated that they remained uninformed throughout the process about their current and future access to BSA data. Additionally, an agreement stated that the Treasury CIO would lead a joint effort to identify, eliminate, and prevent any potential duplication of efforts. However, no information was provided to demonstrate how this agreement was to be carried out. BSA Direct R&S failed, in part, because project management issues continued throughout the project’s life and were not adequately addressed by agency executives. On March 15, 2006, the Director of FinCEN placed the BSA Direct R&S project under a temporary “stop work” order because of significant cost, schedule, and performance issues. Over the following 4 months, FinCEN reassessed the project with the assistance of two outside consultants. Then, on July 12, 2006, the Director decided to permanently halt the project because of a multitude of problems. Among these were inadequate project governance and a lack of demonstrated project management expertise by the project contractor and FinCEN. In a previous review we found that FinCEN did not always apply effective investment management processes to oversee the BSA Direct R&S project.
This, in part, contributed to the problems experienced by the project, because issues that occurred at the project management level continued and were compounded, yet were not addressed at the executive level. For example, the MITRE Corporation—the organization assisting FinCEN with project monitoring—identified multiple occasions where FinCEN did not take action to mitigate project risks or address significant descoping of project functionality. BSA Direct R&S repeatedly missed program milestones and performance objectives and exceeded the project budget. The original cost estimate of $8.9 million for the prime contract increased to $15.1 million. Of that amount, $14.4 million was spent. FinCEN estimates that an additional $8 million would be required for operations and maintenance. Also, FinCEN could not ensure that any additional investment would achieve the desired product. Therefore, FinCEN terminated the project and is currently formalizing a replanning effort for BSA Direct R&S, to include strategic, technical, and resource planning issues, as well as stakeholder analysis; evaluating the discrete elements of BSA Direct R&S for salvageability; and developing a road map to achieve BSA Direct R&S in steps, as a program with multiple projects, both business and technology oriented. In our previous review we noted that the problems with BSA Direct R&S indicate systemic problems with FinCEN’s management and oversight of information technology projects. As a result, the Subcommittee on Transportation, Treasury, Housing and Urban Development, the Judiciary, and Related Agencies, Senate Committee on Appropriations, directed FinCEN to ensure it has an executive-level review process for information technology projects.
We also recommended that FinCEN develop a plan for managing BSA Direct that focuses on establishing policies and procedures for executives to regularly review investments’ progress against commitments and take corrective actions when these commitments are not met. In October 2006, FinCEN developed an interim information technology management improvement plan that acknowledges that these and other actions are needed to build its information technology management capabilities. However, the plan focuses on improving FinCEN’s information technology management capabilities but does not address FinCEN’s broader efforts to reengineer BSA data management activities. Based on past issues, FinCEN will continue to face challenges in building information technology management capability, while at the same time continuing efforts to reengineer and transition BSA data management processes. The MITRE Corporation, prior to the failure of the BSA Direct project, characterized reengineering of BSA data management as a daunting effort, in part, because it involved highly interdependent tasks that must be conducted under short implementation time frames. The decision to discontinue the BSA Direct R&S project provides FinCEN with an opportunity to take a more deliberate and disciplined approach to implementing the effort to reengineer BSA data management activities. FinCEN and IRS play important roles in the national effort to combat money laundering and terrorist financing activity. Both have recently taken significant steps to make their efforts more effective; however, a great deal more could and should be done. FinCEN and IRS have taken action to improve NBFI compliance with BSA requirements, but making significant progress in identifying NBFIs and ensuring that they comply with BSA requirements is a long-term effort with no simple solutions. In some cases, IRS, FinCEN, or both have actions under way but no timetable for finishing. In other cases, action has yet to begin.
Some of these actions include deciding whether to pursue gaining access to taxpayer information, clarifying the definition of an MSB, updating the Internal Revenue Manual, developing an NBFI compliance examiner’s manual, creating a more functional and secure mechanism for storing NBFI data, and developing an NBFI BSA compliance measure. These actions have not been completed, in part, because of competing priorities. However, without a coordinated, documented strategy that guides the agencies’ approach over time, the agencies do not have assurance they are moving in the right direction and are limited in their ability to measure progress in achieving improvements. Furthermore, Congress and the public will have difficulty understanding the overall approach that IRS and FinCEN are taking to ensure that NBFIs are complying with BSA. To date, FinCEN’s effort to reengineer and transition BSA data management activities has not been successful. The failure of BSA Direct R&S was a considerable setback in this effort. However, FinCEN is now in a position to reassess the goals of the reengineering effort and develop a comprehensive long-term strategy. FinCEN and IRS must also find ways to improve communication and coordination as FinCEN proceeds with its effort to reengineer BSA data management activities. Moving forward, FinCEN will need to take a measured and disciplined approach to strengthening its ability to oversee and manage information technology projects. Significant changes, such as FinCEN’s data management reengineering effort, are complex and slow to implement, requiring a long-term, but flexible, strategy and a strong and consistent focus to be successful. To improve BSA compliance, we are making the following eight recommendations.
The Secretary of the Treasury should direct the Director of FinCEN and the Commissioner of Internal Revenue to develop a documented and coordinated strategy that outlines priorities, time frames, and resource needs for better identifying and selecting NBFIs for examination. This strategy should include the full complement of actions that FinCEN and IRS can take to build a more effective BSA compliance program, including the specific compliance program recommendations we make below. The Director of FinCEN should establish a time frame for revising MSB regulations and guidance, including registration requirements. The Commissioner of Internal Revenue should decide whether to pursue gaining access to taxpayer data for better identifying NBFIs. The Commissioner of Internal Revenue should direct the Office of Fraud/BSA to build upon the study to validate compliance risk factors by developing a plan to assess the noncompliance risks posed by all NBFIs; establish time frames for finalizing and publishing the Internal Revenue Manual with updated BSA compliance program policies and procedures; develop an NBFI compliance examiner’s manual that examiners can use to guide examinations and businesses can use to ensure they are in compliance with BSA requirements, and establish time frames for its publication; create a more functional and secure mechanism for storing and accessing the information contained in the Title 31 database; and use the results of the forthcoming risk factor validation study to estimate the compliance rate for the population of MSBs from which the study sample was drawn. To improve BSA data management, we recommend the following: The Director of FinCEN, in cooperation with the Commissioner of Internal Revenue, should develop and implement a comprehensive and long-term plan for reengineering BSA data management activities before moving forward with the BSA Direct R&S project.
This plan, at a minimum, should take a broad and crosscutting approach to the reengineering effort, and not focus solely on one component, such as BSA Direct; include short- and intermediate-term goals for reengineering BSA data management processes, including the transition of IRS’s data management responsibilities to FinCEN; and incorporate collaboration strategies into the plan by clearly defining the role of IRS’s ECC-DET in the transition process and more actively involving it as a key stakeholder in the reengineering effort. The Director of FinCEN and the Commissioner of Internal Revenue jointly provided written comments on a draft of this report in a letter dated December 11, 2006 (which is reprinted with its enclosures in app. VI). FinCEN and IRS agreed with all our recommendations. The Director and Commissioner also stated their appreciation that our report notes the steps that FinCEN and IRS have already taken to improve BSA compliance. They highlighted staff attrition as another challenge faced by the program. The Director and Commissioner also raised some issues about the difficulty in drawing a correlation between IRS’s process for selecting tax returns for audit and selecting NBFIs for BSA compliance examination, but we view IRS’s tax audit case selection process as a potentially useful model for selecting cases—even if the audits are for other purposes. While agreeing with our first recommendation, the Director and Commissioner expressed concern that we did not recognize the efforts that they have already taken to better identify and select NBFIs for examination. However, IRS’s Workload Identification Process, which they cite, has not yet been funded. Further, our report recognizes the use of BSA information in the CBRS system—which includes SARs. Additionally, we acknowledge efforts to improve coordination of BSA activities with the states through MOUs. If you or your staff have any questions, please contact me at (202) 512-5594 or whitej@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To describe the Internal Revenue Service’s (IRS) and the Financial Crimes Enforcement Network’s (FinCEN) Bank Secrecy Act (BSA) related roles and responsibilities, we reviewed and summarized relevant legislative and regulatory authorities. We also reviewed BSA rules and guidance, agency reports, and strategic planning documents. Further, we interviewed officials at FinCEN and IRS Small Business Self-Employed Division (SB/SE) and IRS Criminal Investigations Division (CI), and the IRS Enterprise Computing Center at Detroit (ECC-DET). We examined the information obtained to determine the BSA roles and responsibilities at FinCEN and IRS, changes to these roles over time, and the potential for overlap and duplication of responsibilities. To determine the extent to which IRS has been effective in managing its BSA compliance program and coordinating with FinCEN, we reviewed relevant legislative and regulatory authorities. We analyzed data on program performance and compared estimates of the nonbank financial institutions (NBFI) population. We compared IRS’s approach for selecting NBFIs for compliance examinations to the approach it uses for examining individual tax returns, as well as to guidance from the Office of Management and Budget, GAO, and others. We applied our criteria for internal controls to the Title 31 database IRS used to house and store data for BSA examination cases. We reviewed strategic planning documents related to BSA compliance examination and program management, including the Internal Revenue Manual, FinCEN and IRS strategy and program plans, and expenditure documents. 
We reviewed Treasury Inspector General for Tax Administration (TIGTA) and the Department of the Treasury (Treasury) Office of Inspector General (OIG) reports and Treasury’s response and disposition on recommendations made. We also reviewed the Federal Financial Institutions Examination Council manual established for federal banking supervisors to ensure that the banks have consistent application of BSA requirements. To obtain information on the total population of NBFIs in the United States for which IRS has BSA compliance examination responsibility, we reviewed reports from Coopers & Lybrand, KPMG, and Treasury’s OIG and Federal Register notices of the interim and final reports that contained information on the additional BSA industries IRS will be responsible for regulating. We also reviewed documentation on IRS’s examination and referral processes and IRS’s performance measures, including the number of cases closed, number of referrals, cycle time, hours per case, number of new cases initiated, and cases in inventory. We examined IRS’s BSA case selection criteria and the Title 31 database used to house and store data for BSA examination cases. We examined the memorandums of understanding (MOU) established between FinCEN and IRS, FinCEN and the states, and IRS and the states. We used our report on key collaboration practices as criteria for assessing IRS’s and FinCEN’s efforts to collaborate with each other and the states. We interviewed IRS SB/SE officials involved with BSA examinations; BSA case selection; and the SB/SE Stakeholder Liaison office involved in outreach and education for NBFIs, FinCEN regulatory policy officials, officials from Treasury’s OIG and TIGTA, and officials from the BSA Advisory Group and the Conference of State Banking Supervisors. 
To describe CI’s BSA role, we reviewed legislative and regulatory authorities, agency reports, strategic planning documents, internal policies and processes for conducting investigations and making BSA case referrals, and the 1999 Webster Commission Report. We also reviewed CI’s statistics for BSA-related staffing resources and caseload, including full-time equivalents, closed cases, cases with violations, and referrals to FinCEN. We interviewed officials from CI, SB/SE, FinCEN, the Department of Justice Asset Forfeiture and Money Laundering Section, and the Department of Homeland Security Immigration and Customs Enforcement on use of BSA data and access to BSA data. We assessed the reliability of IRS’s Criminal Investigation Management Information System—a database containing nationwide data on the status of CI investigations: how CI agents use direct investigative time; the number and type of staff on board; and the inventory of equipment. Our assessment included reviewing existing information about the data and the system that produced them and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To assess the effectiveness of FinCEN’s efforts to reengineer BSA data management activities, we reviewed and analyzed BSA Direct planning and implementation documents and interviewed agency officials at IRS and FinCEN and some users of BSA information, such as federal law enforcement agencies. We also reviewed project documents such as the Office of Management and Budget Exhibit 300, the original BSA Direct contract and revisions, progress reports, interim briefings, and project assessments conducted by the MITRE Corporation. We also interviewed FinCEN officials responsible for investment management and the BSA Direct project, the contractor conducting the BSA Direct project, and MITRE Corporation officials involved in the project.
In a previous review, we also examined FinCEN’s application of information technology investment management processes to the retrieval and sharing component of the BSA Direct project using our guide, Information Technology Investment Management: A Framework for Assessing and Improving Process Maturity. We did not conduct a comprehensive review of FinCEN’s investment management practices. We focused on critical processes associated with stage 2 of the five-stage framework because they represent the practices needed for basic project-level control. We performed our review from July 2005 through November 2006 in accordance with generally accepted government auditing standards. Form used by certain MSBs to register with FinCEN. Reports that describe insider abuse of financial transactions of any amount and type that financial institutions suspect may be unusual or irregular, violations of $5,000 or more where a suspect can be identified or involve potential money laundering, violations aggregating $25,000 or more regardless of a potential suspect, and computer intrusion. Financial/depository institutions. Reports that describe financial transactions that are conducted or attempted by, at, or through an MSB, involve or aggregate funds or other assets of at least $2,000, and the MSB knows, suspects, or has reason to suspect that the transaction (or pattern of transactions of which the transactions are a part) involves funds derived from an illegal activity, is designed to evade reporting requirements, has no reasonable purpose or explanation, or involves the use of the MSB to facilitate criminal activity. Money transmitters; issuers, sellers, and redeemers of traveler’s checks and money orders; and the U.S. Postal Service.
Reports that describe financial transactions conducted by, at, or through a casino involving at least $5,000 if they are suspected to derive from illegal activity, are conducted to hide or disguise funds, are designed to evade reporting requirements, have no reasonable purpose or explanation, or involve the use of the casino to facilitate criminal activity. Casinos and card clubs. Reports that describe financial transactions conducted by, at, or through a broker or dealer in securities involving at least $5,000 if they are suspected to derive from illegal activity, are designed to evade reporting requirements, have no reasonable purpose or explanation, or involve the use of the broker or dealer in securities to facilitate criminal activity. Brokers and dealers in securities, futures commission merchants, and futures introducing brokers. Reports that describe each deposit, withdrawal, exchange of currency, or other payment or transfer by, through, or to a financial institution, which involves a transaction in currency of more than $10,000. Transactions reported include those conducted by, or on behalf of, the same person, conducted on the same business day, and either a single or multiple currency transaction. Financial and nonfinancial institutions. Reports that describe transactions greater than $10,000 in currency as well as suspicious transactions. In addition, casinos must report suspicious transactions and activities on FinCEN SAR-C. Nevada casinos must file Form 103N, Currency Transaction Report by Casinos - Nevada (CTRC-N)—reports that describe transactions involving more than $10,000 in cash. Also, smaller transactions occurring within a designated 24-hour period that aggregate to more than $10,000 in cash are reportable if the transactions are the same types of transactions within the same monitoring area or if different types of transactions occur within the same visit at one location. Filers are Nevada casinos with greater than $10,000,000 in annual gross gaming revenue and with over $2,000,000 of table games statistical winnings. Reports of cash payments over $10,000 received in a trade or business. Individuals involved in trades or businesses that are not financial institutions. Annual reports of financial interest in foreign accounts if the aggregated value of a foreign financial account exceeds $10,000 at any time during the calendar year. Individuals or depository institutions having an interest in, and signature or other authority over, one or more bank, securities, or other financial accounts in a foreign country. Reports banks file to exempt eligible customers from currency transaction reporting requirements. Exempt customers include banks, government agencies/authorities, listed companies and subsidiaries, eligible nonlisted businesses with a history of frequent currency transactions, and payroll customers. Depository institutions. Reports the transportation (physically, or by mailing, shipping, or receipt) of currency into or out of the United States and certain other monetary instruments on any one occasion in excess of $10,000. Individuals, corporations, partnerships, trusts or estates, and associations. Exceptions include (1) businesses serving as agents of another MSB; (2) businesses whose only MSB activity is the issuance, sale, or redemption of stored value; (3) the U.S. Postal Service or agencies of the United States, a state, or a political subdivision of any state; and (4) MSB branch offices. Included within the BSA reporting and record-keeping requirements are MSBs.
A business is generally considered to be an MSB if (1) it offers one or more of the following services: money orders, traveler’s checks, check cashing, currency dealing or exchange, and stored value, and (2) the business either conducts more than $1,000 in these activities with the same person in one day or provides money transfer services in any amount. Each business (not including branches) that fits within the definition of an MSB is required to register with FinCEN, except for the U.S. Postal Service and other agents of the federal, state, or local governments, and those businesses that are considered MSBs only because they (1) act as agents for other MSBs or (2) act as issuers, sellers, or redeemers of stored value. Certain MSBs are required to file suspicious activity reports for transactions involving at least $2,000 in which the MSB believes or has reason to believe that the transaction (1) involves funds derived from illegal activity or is intended to hide such activity; (2) is otherwise designed to evade the reporting requirements under BSA; (3) has no business or apparent lawful purpose or is not the type of transaction in which the customer would normally be expected to engage; or (4) involves the use of an MSB to facilitate criminal activity. All MSBs are required to develop and implement risk-based BSA compliance programs. MSBs are also required to file currency transaction reports for cash transactions of over $10,000, and must maintain information pertaining to the sale of, and verify the identity of those purchasing, certain monetary instruments (e.g., money orders and traveler’s checks) valued from $3,000 to $10,000. MSBs must also maintain information on funds transfers of $3,000 or more. One way to improve IRS’s knowledge of the NBFI population subject to BSA requirements would be to access specific identifying information reported on income tax returns.
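The MSB definition described above amounts to a decision rule, sketched below. This is an illustration of the criteria as paraphrased in this report, not a substitute for the regulatory text; in particular, it reads the "money transfer services in any amount" clause as sufficient on its own, and the function and parameter names are hypothetical.

```python
# Sketch of the MSB definition above as a decision rule. Illustrative
# only, not a substitute for the regulatory text; the reading here
# treats "money transfer services in any amount" as sufficient on its own.

MSB_SERVICES = {"money orders", "traveler's checks", "check cashing",
                "currency dealing or exchange", "stored value"}

def is_msb(services, max_daily_with_one_person, offers_money_transfer):
    """services: set of services offered; max_daily_with_one_person:
    largest dollar amount of listed activity with one person in one day."""
    offers_listed = bool(MSB_SERVICES & set(services))
    return ((offers_listed and max_daily_with_one_person > 1_000)
            or offers_money_transfer)

print(is_msb({"check cashing"}, 1_500, False))  # True: over $1,000 in a day
print(is_msb({"check cashing"}, 800, False))    # False: under the threshold
print(is_msb(set(), 0, True))                   # True: money transfer, any amount
```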
However, the IRS Office of Fraud/BSA is unable to use taxpayer information to identify businesses that may be subject to BSA requirements. Section 6103 of the Internal Revenue Code, which prohibits IRS from disclosing returns or return information unless a statutory exception applies, does not currently specifically allow disclosure for Title 31 examinations. Over the years, however, Congress has amended section 6103 to allow access to taxpayer information for specific purposes, including disclosure to federal officials for the administration of certain federal laws not relating to tax administration. According to Treasury, the burden of supporting an exception to the section 6103 prohibition should be on the requesting agency, in this case IRS, to make the case for disclosure and provide assurances that the information will be safeguarded appropriately. To date, IRS has not done so. Table 6 lists the criteria Treasury and IRS have applied when evaluating specific legislative proposals. FinCEN and IRS are forging a more collaborative approach to implementing BSA compliance efforts. FinCEN and IRS recognize that a more collaborative approach to BSA compliance will allow them to better leverage interagency and intergovernmental resources. Since 2005, FinCEN and IRS have begun to formalize more collaborative relationships with each other and a number of state regulatory/banking agencies that examine NBFIs for BSA compliance. The principal vehicle for developing these relationships has been the MOU. These MOUs provide formalized procedures for coordinating BSA activities and sharing information. Separate MOUs between FinCEN and IRS, FinCEN and 42 state regulatory/banking agencies and Puerto Rico, and IRS and 34 state regulatory/banking agencies and Puerto Rico have been signed. The MOU between FinCEN and IRS establishes procedures for the exchange of information between the two agencies with the goal of enforcing BSA compliance.
The MOU dictates that IRS provide a wide range of information to FinCEN through quarterly and annual reports, including new or revised examination policies, procedures, or guidance and quantitative data on examinations conducted, violations discovered, and referrals made. The MOU dictates that FinCEN will provide IRS with information on enforcement actions and analytical products on patterns and trends as well as provide technical and analytical assistance in overseeing industry compliance. MOUs between FinCEN and 42 states and Puerto Rico have been signed in an attempt to advance the sharing of information and enhance uniform application of BSA. FinCEN expects to receive information on businesses examined and enforcement actions taken. In exchange, the states expect to receive analytical tools from FinCEN that will maximize resources and highlight areas and businesses with higher risk for money laundering. Both FinCEN and the states expect the agreements to help them improve the coordination of collective actions and concerns by providing a clearer picture of the various financial industries regulated. IRS has signed MOUs with 34 states and Puerto Rico to establish information sharing to assist in the examination of MSBs and other NBFIs. The IRS/State MOUs involve the coordination of examination activities and the sharing of examination procedures, schedules, and lists of MSBs. These MOUs are different from the MOUs between FinCEN and the states because FinCEN’s agreement involves FinCEN sharing analytical information gathered from various regulators. By collaborating with the states, IRS hopes to improve the quality and coverage of compliance examinations and make better use of examination resources. The agreements established in the MOUs are intended to eliminate duplicative examination efforts and regulatory requirements, and build greater quality and consistency through training. 
IRS, FinCEN, and the states have only recently begun to implement the agreements in the MOUs. In addition to the contact named above, Signora May, Assistant Director; Sean Bell; Brian James; Katrina Taylor; and Shamiah Woods made significant contributions to this report. Danny Burton, Evan Gilman, Timothy Hopkins, Shirley Jones, Barbara Keller, Jeffrey Knott, Donna Miller, and Sabine Paul also made key contributions.

In 2005, over 16 million Bank Secrecy Act (BSA) reports were filed by more than 200,000 U.S. financial institutions. Enacted in 1970, BSA is the centerpiece of the nation's efforts to detect and deter criminal financial activities. Treasury's Financial Crimes Enforcement Network (FinCEN) and the Internal Revenue Service (IRS) play key roles in BSA compliance, enforcement, and data management. GAO was asked to describe FinCEN's and IRS's roles and assess their effectiveness at ensuring BSA compliance and efforts to reengineer BSA data management. FinCEN and IRS have distinct roles, but share some responsibilities in implementing BSA. FinCEN's role is to oversee the administration of BSA by numerous agencies including IRS. IRS's role is to (1) examine nonbank financial institutions (NBFI), such as money transmitters and check cashers, for compliance with BSA; (2) investigate potential criminal BSA violations; and (3) collect and store BSA reported data by all financial institutions. IRS continues to face challenges in identifying NBFIs subject to BSA and then using its limited resources to ensure compliance. First, IRS has identified approximately 107,000 potential NBFIs, yet FinCEN, IRS, and others agree there is a portion of the NBFI population IRS has not identified. Identifying NBFIs is inherently challenging and made even more difficult because FinCEN regulations about who is covered are confusing, especially for smaller businesses.
Second, IRS currently lacks, but is working to develop, a statistically valid risk-based approach for selecting NBFIs for compliance examinations. IRS only examines a small fraction of NBFIs, less than 3.5 percent in 2005, highlighting the need for building risk into the selection process. IRS is statistically validating a risk-based approach for targeting compliance examinations on certain NBFIs suspected of noncompliance. IRS's validation study is a step in the right direction, but IRS's approach will continue to have limitations because the study was not designed to be representative of all potential NBFIs. And lastly, IRS established a new office accountable for BSA compliance, and is working to improve examination guidance. However, IRS's management of BSA compliance has limitations, such as a lack of a compliance rate measure and a comprehensive manual that NBFIs can use to develop anti-money laundering programs compliant with BSA. Addressing program challenges, such as identifying NBFIs and examining those of greatest risk of noncompliance will take time and require prioritizing actions and identifying resource needs. However, FinCEN and IRS lack a documented and coordinated strategy with time frames, priorities, and resource needs for improving NBFI compliance with BSA requirements. FinCEN has undertaken a broad and long-term effort to reengineer, and transition from the IRS, all BSA data management activities. FinCEN, however, missed opportunities to effectively plan this effort and to coordinate its implementation with IRS. For example, FinCEN began making significant investments in information technology projects before a comprehensive plan to guide the reengineering effort was in place. When a key project--BSA Direct Retrieval and Sharing--failed, it jeopardized the future of the broader reengineering effort. 
After investing over $14 million (nearly $6 million over the original budget) in a failed project, FinCEN is now reassessing BSA Direct but does not yet have a plan for moving forward with the broader effort to reengineer BSA data management activities.
In 1991, Congress enacted TCPA to address a growing number of telephone marketing calls and certain telemarketing practices thought to be an invasion of consumer privacy and, in some cases, costly to consumers. Provisions of this law generally prohibit anyone from faxing unsolicited advertisements, or “junk faxes,” to consumers or businesses. An unsolicited advertisement under TCPA was defined as “any material advertising the commercial availability or quality of any property, goods, or services which is transmitted to any person without that person’s prior express invitation or permission.” In addition, there are three distinct enforcement mechanisms for violations of the junk fax provisions. First, persons or entities that believe they have been sent a fax in violation of the act have a private right of action—that is, they can sue the fax advertiser in an appropriate court for damages and/or injunctive relief. Second, a state attorney general (or another official or agency designated by the state) may bring a civil lawsuit for damages and/or injunctive relief when a case involves a pattern or practice of violations. Third, FCC is authorized to assess and enforce a “forfeiture” against those who violate the junk fax provisions—that is, a monetary penalty against the faxer for violating the junk fax rules. Appendix I provides a brief overview of how unsolicited advertisements sent via telephone, the Internet, and cellular telephones are regulated and enforced. FCC had previously concluded that an established business relationship (EBR) with a fax recipient constituted this express permission, defining an EBR as “…a prior or existing relationship formed by a voluntary two-way communication between a person or entity and a residential subscriber with or without an exchange of consideration, on the basis of an inquiry, application, purchase or transaction by the residential subscriber regarding products or services offered by such person or entity, which relationship has not been previously terminated by either party.” In July 2003, FCC revised many of its telemarketing and fax advertising rules under TCPA.
In part, the commission reversed its prior conclusion about an EBR, stating that its existence alone does not constitute the express permission required by TCPA. Instead, the commission concluded that a fax advertiser must first obtain written permission, including the recipient’s signature, before a fax can be sent. This requirement for written permission was stayed by FCC pending reconsideration and, to date, has not taken effect. With the Junk Fax Prevention Act of 2005, Congress settled the question of whether prior written consent is explicitly required. The act (1) amends TCPA and codifies the EBR by expressly permitting businesses or entities to fax unsolicited advertisements to those with whom they have an EBR and (2) provides that prior permission may be in writing or otherwise. The act does, however, impose new disclosure and opt-out requirements on advertisers. Businesses or entities sending fax advertisements must now include on the first page of the ad an opt-out notice, the date and time the fax was sent, the registered name of the company sending the fax, and the telephone number of the company sending the fax or the sending fax machine’s telephone number. The opt-out notice should clearly state that the recipient may opt out of any future faxes and provide clear instructions for doing so. The opt-out telephone number must be domestic and free of charge to the recipient. Some businesses and individuals contract with fax broadcasters (also known as “fax blasters”), companies that transmit mass fax advertisements for others. This practice is legitimate if the fax broadcaster complies with the junk fax rules. In some instances, however, fax broadcasters fax unsolicited advertisements to parties that have no EBR with the advertising company. According to Verizon officials in an FCC filing, fax broadcasters often use automatic dialers on outbound fax servers to send large volumes of faxes in a short time, often in the middle of the night.
Furthermore, their dialing lists may include primary residential telephone numbers as well as fax numbers. For example, according to these officials, one fax broadcaster transmitted 10,600 calls over Verizon’s network within 10 minutes. Two FCC bureaus—CGB and EB—are primarily responsible for developing and implementing rules and procedures to collect and analyze junk fax complaints and for conducting investigations and enforcement, among their other responsibilities. CGB develops and implements FCC’s consumer policies. CGB also addresses consumers’ informal inquiries and works to mediate and resolve complaints under FCC’s jurisdiction. These include complaints about the commission’s regulated entities, including common carrier, broadcast, wireless, satellite, and cable companies; complaints about unauthorized changes in telecommunications providers (slamming); complaints about unwanted e-mail messages on wireless devices such as mobile telephones (spamming); and six types of TCPA-related complaints, including junk faxes, violations of the do-not-call list, and time-of-day violations (marketing between 9 p.m. and 8 a.m.). EB is responsible for enforcing TCPA’s provisions and the commission’s rules and orders. EB handles three major areas of enforcement: local competition, public safety and homeland security, and consumer protection. Enforcement officials said that they follow FCC’s guidance on how to prioritize these responsibilities, and that these priorities can change as required by circumstances. EB’s Telecommunications Consumers Division is responsible for considering junk fax complaints for investigation and enforcement. EB uses several procedures to select complaints for investigation and possible enforcement. EB’s formal enforcement actions consist of several sequential steps. First, EB issues a citation, which notifies the faxer of the complaint(s) against it and informs the faxer that its alleged activity is illegal.
The citation also states that further such activity could make the faxer subject to a forfeiture action. If FCC receives additional complaints against the faxer for violations of the junk fax rules and substantiates the complaints, EB may pursue the forfeiture action. This could lead to the involvement of the Department of Justice (DOJ), which is responsible for collection. Figure 1 depicts FCC’s process for responding to junk fax complaints. In 2000, FCC recorded about 2,200 junk fax complaints; in 2005, that number had grown to more than 46,000. Despite this growth in junk fax complaints, the numbers of investigations and enforcement actions have generally remained the same. In 2000, CGB began using a new database to record the various types of consumer complaints under FCC’s jurisdiction, including complaints about TCPA violations. For junk fax complaints, CGB staff accept the complaints; enter information into the database; and scan the materials submitted with the complaints, including copies of the alleged junk faxes. CGB staff mail a letter to the majority of complainants acknowledging FCC’s receipt of their complaint. The letter states that FCC does not resolve individual complaints and cannot award monetary or other damages directly to the complainant. The letter also states that the complainant has the right to take private legal action against any junk fax violator. Since 2002, FCC has reported quarterly on the number of consumer complaints received, consolidating all six types of TCPA complaints into one category. As a result, the number of junk fax complaints has never been separately reported. Using CGB data, we found that within the specific category of TCPA complaints, junk faxes represented over 85 percent of the complaints logged in 2005.
In fact, when looking at all types of reported consumer complaints, junk fax complaints have ranked as the second most frequently reported since 2003—second only to complaints about indecency and obscenity in radio and television broadcasting. Appendix II lists the number of complaints reported publicly by FCC, by type, from 2003 through 2005 and details the percentage of the TCPA complaints that are junk fax complaints. Both individual consumers and businesses can report junk fax complaints to the commission by e-mail, postal mail, fax, telephone, or the Internet (using an on-line complaint form—Form 475—that appears on FCC’s Web site). FCC documents both the type of complainant (individual consumer or business) and the method of reporting (e-mail, postal mail, fax, telephone, or the Internet). Figure 2 shows the number of junk fax complaints that businesses and individuals reported through various methods from 2003 through 2005. As the figures indicate, the number of junk fax complaints reported by businesses dropped in 2005, but the number of complaints reported by individuals increased, bringing the total for both groups significantly higher in 2005 than in prior years. Additionally, the number of complaints reported using the on-line complaint form has increased, especially for individual consumers. In 2005, about half of all junk fax complaints were reported via the Internet. In their junk fax complaints to FCC, individuals and businesses often described the adverse effects of junk faxes. We looked at hundreds of complainant comments received from September through December 2005 and found complaints that cited the costs of toner and paper, the disruption of business activities during junk fax transmissions, and interruptions to personal lives. For example, the complainants expressed frustration about calls coming in the middle of the night and waking them up or causing panic. FCC has recently addressed this issue. 
Some complainants noted problems with the opt-out number—that is, the telephone number that they should be able to call to stop receiving the faxes. For example, the opt-out number did not work, was always busy, or was connected to a prerecorded message. According to some complainants, calling the opt-out number seemed to increase the number of junk faxes they received. Additionally, some complainants expressed frustration with the commission’s response to their prior complaints. Some complainants described junk faxes they had received as unbelievable or potentially fraudulent. Among the frequently cited topics were hot stocks, cheap vacations, low-interest mortgages, and low-cost health care. We asked FCC officials whether they believed fraud was an issue with junk faxes. They said that, although enforcement related to fraud falls outside of FCC’s jurisdiction, some of the faxes advertising stock tips could be fraudulent and come under the jurisdiction of the Securities and Exchange Commission (SEC). Federal Trade Commission (FTC) staff, whom we also asked about fraud in connection with junk faxes, said they believed it was a concern and they cited travel and mortgage offers. FTC staff also mentioned pump-and-dump marketing schemes, which they also noted would come under SEC’s jurisdiction. FCC’s EB, established in November 1999, is responsible for investigating and determining the appropriate enforcement action for all types of TCPA complaints, including junk fax complaints. Currently, EB dedicates 11 staff (9 full-time analysts and 2 part-time attorneys) to work on junk fax enforcement. According to EB officials, the bureau’s overall staffing levels have remained relatively stable over the years. As a result, the staffing level for junk fax enforcement has remained about the same over the past 5 years, even though the number of junk fax complaints has rapidly increased. 
Because of the large number of complaints and limited resources, EB does not investigate each junk fax complaint. Instead, EB officials said, they try to identify and take enforcement action against the major alleged violators and repeat offenders who, they believe, have had the greatest impact on consumers. EB defines a major alleged violator as a company, carrier, or individual that has sent a large number of junk faxes to complainants over a given period of time; it defines a repeat offender as a company, carrier, or individual that continues to violate the junk fax rules after receiving a citation from the commission. To identify major alleged violators, the EB analysts responsible for responding to junk fax complaints first review CGB’s complaint database to identify those complaints with an attached fax. EB officials said they use only complaints with attached faxes because they contain the best evidence for starting an investigation. The analysts then transfer information from the complaint and the fax into an enforcement spreadsheet. Periodically, the EB analysts sort the information in the enforcement spreadsheet to align matching telephone numbers and identify those that are repeated most often. According to enforcement officials, the most frequently repeated telephone numbers are indicative of the major alleged violators that are creating the most widespread problems for consumers. In addition to using EB’s spreadsheet to prioritize which complaints receive enforcement action, EB will also initiate enforcement action on the basis of complaints received from other sources, such as congressional offices, FCC commissioners, or state attorneys general. In the past year, about half of the citations issued by EB were based, at least in part, on referrals from outside sources—the majority of these outside sources were Members of Congress. 
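The sorting step described above, in which matching telephone numbers in the enforcement spreadsheet are aligned to surface the most frequently repeated ones, might be sketched roughly as follows (the record layout and the `sender_number` field name are assumptions for illustration, not EB's actual system):

```python
# Illustrative sketch only: rank alleged junk fax senders by complaint count.
# The 'sender_number' field name is an assumption, not EB's actual schema.
from collections import Counter

def rank_by_frequency(complaints, top_n=3):
    """Return the top_n originating fax numbers by complaint count,
    most frequently reported first."""
    counts = Counter(c["sender_number"] for c in complaints)
    return counts.most_common(top_n)
```

Sorting the full spreadsheet by number achieves the same effect; a frequency count simply makes the "most repeated" criterion explicit.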
The next step in the investigation is for the EB analysts to identify the major alleged violators associated with the most frequently repeated telephone numbers. Finding their names and addresses involves contacting carriers to learn who was paying for the telephone numbers from which the alleged junk faxes were sent on the dates the faxes were sent. Waiting for this information from the carriers can take several days. According to enforcement officials, identifying and locating major alleged violators is the most challenging aspect of junk fax enforcement. They said that obtaining this information is becoming increasingly challenging because violators have become more adept at hiding their identity. As a result, the officials said, the analysts have to spend more time on each investigation. Once a major alleged violator is identified, the analysts can decide whether to begin the formal, two-step enforcement process of citation and possible forfeiture action. EB officials said they consider the citation to be their most efficient enforcement action because many, perhaps the majority, of the violators are unaware that their activities are illegal and could lead to monetary forfeitures. As a result, the officials said, most violators that receive a citation do cease their junk fax activities. However, EB officials could not provide data to support this assertion. EB officials have issued a limited number of citations over the past 6 years, and the annual number did not change substantially, except in 2002. As table 1 indicates, FCC issued 261 citations covering 1,456 junk fax complaints from 2000 through 2005. EB officials cited competing demands, personnel reductions, and the increasing skill of violators in concealing their identity as reasons for the limited number of citations issued. EB officials also noted that in 2005 the average number of complaints that each citation covered increased.
They believe this demonstrates EB has successfully targeted the major alleged violators. However, as shown in table 1, the percentage of the total annual number of complaints resulting in a citation has been less than 1 percent since 2003. To identify repeat violators, EB analysts enter citation information into their enforcement spreadsheet, including the telephone numbers of the citation recipients, and search the information in the spreadsheet to identify any complaints sent after the citation date against these recipients. If any such complaints are found, the analysts decide whether to take the second formal enforcement step—a forfeiture action—which begins with the issuance of a notice of apparent liability. This notice informs the alleged repeat violator that its actions make it liable for forfeiture of a specific dollar amount. The notice must be issued within 1 year of the alleged violation(s) that forms the basis for the notice; identify each specific statute, rule, order, term, or condition that allegedly has been violated; explain how the alleged repeat offender’s activities have violated the junk fax rules and the dates of the violations; and specify the amount of the proposed monetary forfeiture. According to EB officials, their enforcement efforts are hampered by the requirement that a notice of apparent liability be issued within 1 year of an alleged repeat violation. For example, FCC’s notice of apparent liability against Fax.Com, Inc., stated that although FCC received some consumers’ correspondences and related declarations detailing additional unsolicited advertisements received from Fax.Com, FCC was unable to include these violations in the forfeiture action because they were beyond the 1-year statute of limitations. 
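The two date checks described above, whether a complaint postdates the citation and whether it still falls within the 1-year window for a notice of apparent liability, can be sketched as follows (a sketch under an assumed record layout, not FCC's actual process):

```python
# Illustrative sketch only (assumed record layout): split post-citation
# complaints into those within the 1-year notice window and those time-barred.
from datetime import date, timedelta

ONE_YEAR = timedelta(days=365)

def screen_repeat_violations(complaints, citation_date, notice_date):
    """Each complaint is a dict with a 'violation_date' (datetime.date).
    Returns (actionable, time_barred) lists of post-citation complaints."""
    actionable, time_barred = [], []
    for c in complaints:
        if c["violation_date"] <= citation_date:
            continue  # predates the citation, so not a repeat violation
        if notice_date - c["violation_date"] <= ONE_YEAR:
            actionable.append(c)
        else:
            time_barred.append(c)
    return actionable, time_barred
```

The Fax.Com example above corresponds to the `time_barred` branch: documented post-citation violations that fell outside the window could not support the forfeiture action.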
This statute of limitations is problematic, they said, because it takes time, after identifying a repeat violation, to prepare the notice and obtain a sworn statement from the complainant verifying that there was no EBR with the sender of the fax. FTC staff said that they have a statute of limitations of at least 5 years to enforce various telemarketing rules by seeking civil penalties, and they agreed with FCC that a 1-year statute of limitations was restrictive. Within a reasonable period of time, usually within 30 days of receiving the notice, the alleged repeat offender must either pay the proposed forfeiture in full or file a written response requesting that the proposed forfeiture be rescinded or be reduced. If the proposed forfeiture penalty is not paid in full in response to the notice, the commission, upon considering all relevant information available to it, will issue an order (1) canceling or reducing the proposed forfeiture or requiring that it be paid in full and (2) stating the date by which the forfeiture must be paid. If the recipient of the order fails to pay the fine within 30 days from the date it is due, EB staff will refer the case to the commission’s Office of General Counsel (OGC). If the recipient ignores OGC’s request for payment, the commission refers the forfeiture order to DOJ for collection. EB officials stated that they have identified eight repeat offenders from the 261 citations issued from 2000 through 2005, and that they have pursued forfeiture actions against all of the repeat offenders they have identified. Six of the eight repeat offenders have received forfeiture orders, as detailed in table 2. The amounts of the monetary forfeitures have increased, but no forfeitures have been collected to date. For various reasons, five of the six forfeitures will never be collected. The sixth forfeiture order accounts for about 78 percent of the fines FCC has levied. 
Two additional enforcement actions were taken in early 2006, outside the scope of our review. The remaining two forfeiture actions that EB began are against Elf Painting and Wallpaper (Elf) and First Choice Healthcare, Inc. (First Choice). FCC issued a notice of apparent liability to Elf in December 2004 for continuing to send junk faxes after receiving a citation in February 2003. The notice proposed a penalty of $22,500 for five specific violations of the junk fax rules. A final forfeiture order issuing a fine of $22,500 was released by FCC against Elf on March 10, 2006. In February 2006, FCC issued a notice of apparent liability against First Choice, proposing a fine of $776,500 against the company for sending at least 98 unsolicited fax advertisements after receiving a citation in July 2004. FCC’s junk fax procedures have strengths: CGB has emphasized both customer service and documentation of consumers’ complaints. However, these processes are resource-intensive and susceptible to error. Additionally, CGB’s database contains detailed information about complaints, but does not present the information in a way that meets EB’s enforcement needs. While EB’s approach to making investigation and enforcement decisions is designed to make efficient use of limited enforcement resources, it does not factor in the majority of complaints. CGB has emphasized customer service by establishing multiple methods for consumers to report junk fax complaints to FCC, providing multiple sources of information about junk fax issues, and sending a letter in response to the majority of the junk fax complaints. As previously discussed, consumers can report junk fax complaints by postal mail, telephone, fax, e-mail, and the Internet. FCC also staffs two consumer centers to handle consumer inquiries and provide junk fax guidance.
This guidance is located in several places, including FCC’s Web page, a consumer fact sheet, and the Internet consumer complaint form (Form 475). The letter that FCC sends in response to complaints further advises consumers of their legal options for addressing their complaints. CGB consolidates and maintains information about complaints in its database, together with any attachments. According to CGB officials, the database has improved CGB’s coding and counting of TCPA complaints. However, entering complaint information into the database is time-consuming. Data from complaints reported by postal mail, e-mail, fax, and telephone must be entered manually, while data reported on FCC’s Internet complaint form (Form 475) can be electronically transferred from the form to the database. However, CGB staff still have to review the complaint summary from the consumer’s complaint (by opening a text box from the Form 475) to determine what type of TCPA violation is being reported. As figure 3 shows, the form includes a text box that asks the complainant, among other things, to describe the type of violation. CGB staff then have to analyze the consumer’s comment and manually code the type of TCPA violation into the database. Besides being time-consuming, CGB’s data entry processes may cause errors in the database, despite the periodic supervisory review that CGB officials told us takes place. For example, errors can occur in coding complaints, matching complaints with associated attachments, and dating complaints. These problems may, to an unknown extent, affect the reliability of CGB’s complaint counts. They also may impact the quality of the report that FCC is now required to provide to Congress on the number of junk fax complaints received each year. Given the large numbers of complaints, we do believe that overall trends can be reported, but the specific numbers may not be accurate. 
Errors in coding complaints can occur if the complainant’s comments on the Form 475 do not provide CGB staff with sufficient information to determine what type of violation should be coded in the database, or if the CGB staff simply miscode a comment. In a cursory review of 2005 complaint data, we found several instances in which an Internet complaint was miscoded. For example, CGB’s database incorrectly identified one Internet complaint as a junk fax complaint, even though the complainant was asking for assistance in having charges removed that resulted from unsolicited advertisements sent as text messages to the complainant’s cellular telephone. Errors may also occur in matching complaints submitted by telephone, e-mail, or the Internet with the associated faxes sent to FCC separately by postal mail. Unless the consumer writes on the fax the unique identifier that CGB assigns to every complaint on the fax and CGB staff scan the fax into the database with the original complaint, the fax may be entered into the database as a new complaint. CGB officials acknowledged that these types of errors could be occurring, but they could not estimate the extent of the problem. Although CGB’s database contains detailed information about complaints, the database does not present the information in a way that meets EB’s enforcement needs. According to EB officials, CGB’s database does not meet EB’s enforcement needs because it does not contain separate fields for all of the information EB requires, and not all fields of the database can be easily searched. For example, the database does not contain separate fields for the names of the businesses or individuals that may have sent the junk faxes or for their telephone numbers. Most of this information, if included in the complaint, has been entered into a comment field manually by CGB staff or transferred electronically from a text box on the Form 475. 
To find the most frequently reported businesses or individuals (major alleged violators), EB staff would have to use the “Find” feature to search the comment fields for one name or telephone number at a time. Because CGB’s database does not contain the data fields that EB needs for enforcement, EB has developed a separate spreadsheet that contains the requisite data fields and allows the data to be searched and sorted to support EB’s enforcement activities. This spreadsheet is not linked in any way to CGB’s database. Consequently, EB analysts manually enter the data they need from CGB’s database and from the faxes scanned in as attachments to CGB’s database. Furthermore, since the type of attachment is not identified in the database, EB analysts have to open each attachment to determine whether it is a fax. According to EB officials, the 9 EB analysts who work on junk fax complaints spend about half their time on data entry and the remainder of their time on enforcement activities. This duplication of data management activities demonstrates that limited coordination has taken place between CGB and EB in determining how best to manage junk fax complaint data. For example, CGB staff currently have no follow-up procedures to obtain any additional information from junk fax complainants that may assist in investigations and enforcement. In addition, EB staff acknowledged that maintaining a separate spreadsheet takes resources away from investigation and enforcement. EB’s practice is to investigate and consider taking enforcement action only when a fax is provided with a complaint. As previously noted, according to EB officials, a fax is not needed to issue a citation but may be needed for other formal enforcement actions. EB staff enter data into their spreadsheet only for those complaints from CGB’s database that have an attached fax. 
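The searchability gap described above, a free-text "Find" over comment fields versus a dedicated, sortable field, can be illustrated with a minimal sketch (the `comment` and `sender_number` field names are assumptions, not CGB's or EB's actual schemas):

```python
# Minimal illustration of the searchability gap described in the text.
# Field names ('comment', 'sender_number') are assumptions, not actual schemas.

def scan_comments(records, number):
    """CGB-style lookup: substring search over free-text comments,
    one number at a time."""
    return [r for r in records if number in r["comment"]]

def build_number_index(records):
    """EB-style lookup: with a dedicated field, one pass yields an index
    that supports direct lookups and frequency sorting."""
    index = {}
    for r in records:
        index.setdefault(r["sender_number"], []).append(r)
    return index
```

The first function must be rerun for every name or number of interest; the second supports the frequency sorting and repeat-offender searches EB needs, which is why EB maintains its separate spreadsheet.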
As figure 4 indicates, the majority of the junk fax complaints in CGB’s database for every year from 2003 through 2005 did not have an attachment. The remaining complaints had an attachment that may or may not have been a fax. For 2005, about 60 percent of the complaints—including almost all of the complaints reported via the Internet—did not have an attachment and, therefore, under EB’s practice, would not have been included in EB’s enforcement spreadsheet. As a result, EB would not have included these complaints in its searches for major alleged violators or repeat offenders or considered them in its decisions about investigation or enforcement. With the majority of reported complaints excluded from EB’s review, the chances of identifying repeat offenders—those who have already received a citation or a notice of apparent liability from FCC but have continued to send junk faxes—are more limited. We searched CGB’s 2005 complaint data for selected company names and telephone numbers from issued citations, using the “Find” feature, and found several complaints alleging violations by citation recipients dated after the citations were issued. However, none of these complaints had an attachment, and we did not find these repeat offenders when we searched EB’s spreadsheet. In addition, we found six complaints of violations by Elf Painting and Wallpaper that postdated the notice of apparent liability issued to this firm in December 2004. The most recent complaint was dated November 2005. However, these complaints were all reported via the Internet and lacked an attachment; therefore, like the 2005 complaints we found against the other citation recipients, they may not have been found in a search of EB’s spreadsheet. Compounding this problem is FCC’s consumer guidance on submitting junk fax complaints. Some of this guidance encourages consumers to send in the junk faxes they have received.
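A repeat-offender check of the kind described above amounts to matching new complaints against the list of citation recipients and keeping only complaints dated after the citation. The records below are hypothetical; only the Elf Painting and Wallpaper notice date (December 2004, day assumed) echoes the report.

```python
from datetime import date

# Hypothetical enforcement history: citation or notice recipients
# and the date the action was issued (day of month is assumed).
citations = {"Elf Painting and Wallpaper": date(2004, 12, 1)}

# Hypothetical complaints, some postdating the enforcement action.
complaints = [
    {"company": "Elf Painting and Wallpaper", "received": date(2005, 11, 15)},
    {"company": "Acme Travel", "received": date(2005, 3, 2)},
]

# A repeat offender is any citation recipient named in a complaint
# dated after its citation was issued.
repeat_offenders = {
    c["company"]
    for c in complaints
    if c["company"] in citations and c["received"] > citations[c["company"]]
}
print(repeat_offenders)
```

The point of the sketch is that this match requires searchable company-name and date fields; it cannot be run against faxes scanned in as image attachments.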
However, none of the guidance states that without a fax, EB analysts do not review a complaint, include it in their investigations, consider it for enforcement action, or include it in their searches for repeat offenders. For example, one piece of FCC consumer guidance states: “If you have received unsolicited faxes, you are encouraged to contact the FCC regarding the incident(s). You may need to provide documentation in support of your complaint, such as copies of the fax(es) you received.…Your complaint should include:...a copy of the fax advertisement, if possible, or confirmation that you have retained a copy of the fax.…” By contrast, the form for reporting complaints via the Internet says nothing about sending in a copy of the fax to FCC and does not tell complainants how to do so. As shown in figure 2, the Form 475 is designed for consumers to report a wide variety of telephone complaints. As a result, much of the information the form provides, as well as the information it seeks from consumers, does not apply to junk fax complaints. Only the last section of the form applies to junk fax complaints. Our review of a portion of CGB’s 2005 complaint data revealed that several consumers who reported junk fax complaints via the Internet were frustrated because they could not attach the faxes they had received to the form and could not find any guidance on how to send the faxes to FCC. For example, some consumers said they had kept copies of the faxes but did not know how to send them to FCC. Other consumers asked FCC to contact them to let them know how to send the faxes. Both CGB and EB officials said they do not explicitly state that a fax is needed for enforcement because they do not want to discourage consumers who no longer have the fax from sending in a complaint. In some instances, consumers who received a junk fax may not have kept the fax.
In addition, CGB officials said the Form 475 asks for all of the information from the fax that is useful for EB to consider for possible investigatory action or to issue a citation, such as the telephone number of the company or individual that sent the fax and the “opt-out” numbers provided on the fax. However, enforcement officials will not see this information because, under current practice, they are only looking for complaints that have an attached fax to transfer to the EB spreadsheet, regardless of how complete the information is on the Form 475. Congress passed the Government Performance and Results Act of 1993 (GPRA) to require federal agencies to take specific steps to improve their performance. In general, GPRA sets forth recognized performance management practices that agencies can apply in carrying out their governmental responsibilities. These practices include establishing long-term strategic goals and annual goals, measuring performance in meeting these goals, and reporting publicly on the agency’s progress. These performance management practices are critical in helping an agency determine how well it is achieving intended outcomes. FCC does not appear to be applying this model to its junk fax monitoring and enforcement activities and, therefore, lacks an important tool for assessing and reporting its progress. The agency has not indicated, for example, whether its focus is to decrease the number of junk fax complaints received, increase the number of formal enforcement actions, or improve consumer guidance on how to stop junk faxes. FCC’s strategic goal includes a target for reducing the amount of time it takes to respond to consumer complaints; however, this goal may be encouraging FCC to shift its focus from monitoring and enforcement to customer service. CGB officials maintained, for example, that they generally send a letter to complainants within 2 to 3 days acknowledging that FCC has received their complaint. 
While this letter provides complainants with information on alternative enforcement mechanisms under the law—that is, their private right of action and a civil action brought by their state attorney general—it does not seek additional information from them, if needed, to pursue an FCC enforcement action. Furthermore, once CGB has responded to a complaint with the acknowledgment letter, it codes the complaint as a closed case for CGB purposes, meaning that these data can be purged from the database after 2 years. As a result, these data are no longer available for use in identifying major alleged violators and repeat offenders or for identifying and monitoring trends in complaints and assessing the effects of enforcement actions. FCC is not using the information on junk fax complaints that it collects to measure its performance in carrying out its junk fax responsibilities. Although CGB allocates considerable staff and other resources to entering complaint data into its database, FCC is not analyzing these data and using the results of its analyses to set priorities and allocate resources. For example, it is not monitoring the number of junk fax complaints recorded each year. Thus, FCC’s quarterly reports identify the total number of TCPA complaints, but do not break out the total for each of the six types of TCPA complaints. As a result, the quarterly reports mask the magnitude of the junk fax problem, which, as our analysis indicates, accounts for about 85 percent of all TCPA complaints received in 2005. In addition, the reports do not indicate that junk fax complaints are the second most frequently recorded type of consumer complaint overall. Without analyzing the data it collects to determine the relative frequency of junk fax and other types of complaints, FCC is limited in its ability to determine whether its staff and other resources are appropriately aligned to address the problems consumers are experiencing. 
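The breakout that the quarterly reports omit is a simple aggregation over complaint subtypes. The sketch below uses invented counts, chosen so that junk faxes are 85 percent of the TCPA total as in our 2005 analysis, rather than actual FCC data.

```python
from collections import Counter

# Hypothetical TCPA complaint records tagged by subtype; the real
# database would supply one record per complaint.
tcpa_complaints = (
    ["junk fax"] * 85 + ["do-not-call"] * 10 + ["prerecorded message"] * 5
)

by_type = Counter(tcpa_complaints)
total = sum(by_type.values())

# The breakout a quarterly report could publish: each subtype's count
# and its share of all TCPA complaints.
for subtype, n in by_type.most_common():
    print(f"{subtype}: {n} ({n / total:.0%} of TCPA complaints)")
```

Reporting only `total` masks the distribution; the per-subtype loop is what reveals that junk faxes dominate the category.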
Additionally, FCC is not analyzing the nature of the principal types of junk fax problems complainants are reporting. This information appears in a comment field, where CGB staff enter comments provided by complainants, but the information cannot be analyzed electronically. As a result, FCC may not be able to fully address concerns such as the percentages of complainants who reported that they were continuing to receive junk faxes after calling the opt-out number or who were receiving junk faxes in the middle of the night. Furthermore, FCC cannot identify and monitor trends in complaints and enforcement and, therefore, cannot target its resources to complainants’ greatest concerns or evaluate its own performance in addressing those concerns. Having information on the nature and frequency of problems with opt-out numbers and FCC’s success in addressing these problems is particularly important because Congress, in the 2005 Act, required the opt-out number to protect consumers from repeated unwanted faxes. FCC officials stated that these issues will be addressed once the new junk fax rules are implemented. Without analysis, FCC cannot explore the need for, or implement, changes to its rules, procedures, or consumer guidance that might help deter junk fax violations or give consumers a better understanding of the junk fax rules. We found, for example, from our review of comments in CGB’s database from 2005, that many complainants seemed to believe the National Do-Not-Call Registry applies to fax numbers as well as their home telephone numbers. Repeatedly, complainants reported that they had asked to have their fax numbers placed on this list, and they did not understand why they were still receiving junk faxes. FTC, together with FCC, implemented this list in 2003 to protect consumers from unwanted telemarketing calls. 
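Comment-field text can be made analyzable with even crude keyword tagging, as sketched below. The comments and keyword rules are invented for illustration; a production system would need more careful categorization than substring matching.

```python
# Hypothetical free-text complaints and keyword rules for tagging the
# problems the report mentions (failed opt-outs, late-night faxes).
comments = [
    "Called the opt-out number twice but the faxes keep coming.",
    "Fax arrived at 3 a.m. and woke the whole house.",
    "Unsolicited fax advertising a vacation package.",
]

rules = {
    "opt-out failure": ("opt-out",),
    "late-night fax": ("a.m.", "middle of the night"),
}

def tag(comment: str) -> list[str]:
    """Return the labels whose keywords appear in the comment."""
    text = comment.lower()
    return [label for label, keywords in rules.items()
            if any(k in text for k in keywords)]

for c in comments:
    print(c, "->", tag(c))
```

Once comments carry tags like these, the percentages the report asks about, such as the share of complainants still receiving faxes after calling the opt-out number, become countable.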
FTC staff explained that the list does not apply to fax numbers—that is, telemarketers must consult this list before placing covered calls to consumers, but senders of fax advertisements are not required to consult the list before faxing. FTC has provided guidance to consumers that fax numbers are not covered under the National Do-Not-Call Registry. Despite the many comments in CGB’s database indicative of complainants’ misunderstanding, FCC has not considered this issue in a rulemaking context or revised all of its guidance to clarify whether the National Do-Not-Call Registry is applicable to fax advertising. Most important, without establishing performance goals and measures and without analyzing complaint and enforcement data, it is not possible to explore the effectiveness of current enforcement measures. Without first gaining an understanding of the effectiveness of current enforcement measures, it is similarly not possible to determine whether additional enforcement measures are necessary to protect consumers. Consumer frustration with junk faxes is evident in the rapidly increasing number of complaints and in the time that consumers are willing to take to seek relief from this type of unsolicited advertising. FCC has provided consumers with several methods to submit their complaints about junk faxes and several sources of consumer information about junk faxes, and it promptly acknowledges receipt of most of the complaints. However, despite collecting thousands of junk fax complaints, including the information submitted with them, FCC has taken formal enforcement actions against relatively few junk faxers. More important, FCC is simply not considering the majority of complaints or any of the information contained in those complaints when making decisions about investigations and enforcement. We acknowledge that FCC cannot be expected to take enforcement action against every junk fax complaint received. 
The growth in complaints, together with limited resources, would make such an effort both impossible and impractical. However, FCC has put in place data collection and management processes that contain clear inefficiencies and limit its ability to target major alleged violators and repeat offenders. Overall, there has been limited collaboration between CGB and EB to ensure that FCC’s data processes are efficient, make the fullest use of the data collected, and fully support the needs of EB. FCC is not making use of performance management tools to improve its junk fax enforcement. There are no goals or measures of success for handling complaints or for investigating them and taking enforcement action. More fundamentally, FCC has not done the analysis that would help it to establish such goals and measures. Without analyzing the complaint data, FCC does not know if it could be doing more to better target its limited resources to address the concerns of consumers, such as seeking out faxers that may be providing fake opt-out numbers or providing clearer guidance to consumers on the impact of time restrictions and the National Do-Not-Call Registry on junk fax concerns. FCC also has not established what it needs to do to be able to completely and accurately report the number of complaints it has received in carrying out its junk fax responsibilities as required under the 2005 Act. Because FCC’s junk fax enforcement efforts have data management issues, lack data analysis, and lack performance goals and measures, it is not possible to determine whether any additional enforcement measures would better protect consumers and businesses from receiving junk faxes. FCC simply cannot say whether its junk fax enforcement efforts are successful in combating junk fax advertising. However, the steady number of citations issued from year to year should be cause for concern in the face of the rising number of junk fax complaints. 
FCC’s current consumer guidance does not alert consumers to the necessity, under FCC’s current practice, of submitting a copy of the junk fax(es) along with the complaint. Because this affects the number of complaints that FCC takes into consideration when searching for major alleged violators and repeat offenders, we recommend that the Chairman of the Federal Communications Commission direct staff to take the following two actions: Revise consumer complaint guidance to make it clear to consumers that they need to include a copy of the fax in order to make it possible for any investigation or enforcement action to take place. This includes revising the wording of the Consumer Fact Sheet, the Internet complaint form (Form 475), the consumer center script, and any other junk fax guidance provided to consumers. Revise the Form 475 so that it includes clear instructions for complainants on how to submit a copy of the fax. This may include developing procedures and instructions to let consumers know how to electronically attach a scanned copy of the fax so that it accompanies their complaint form. FCC’s current data collection and management processes contain inefficiencies and adversely affect FCC’s procedures for targeting major alleged violators and repeat offenders. To begin to address these problems, we recommend that FCC take the following action: Direct consumer and enforcement staff to develop data management strategies to (1) make the consumer complaint database more usable for FCC’s staff and (2) reduce the time spent on, and the errors resulting from, manual data entry. For example, these efforts could include, but not be limited to, revising the Form 475 so that consumers identify through checked boxes, or another similar method, the type of complaint they are filing.
This could enhance accuracy and improve staff efficiency by eliminating the need for FCC staff to read a text box to identify the type of complaint and then enter that information into the database. In addition, staff should develop strategies that would enable enforcement staff to search all consumer complaint information contained in the database to identify major and repeat violators. Finally, FCC should introduce recognized performance management practices into its operations in order to improve the performance of its junk fax enforcement efforts. Toward this end, FCC should take the following three actions: Establish goals and performance measures for receiving, acknowledging, investigating, and taking enforcement actions on junk fax complaints. Use the information in the complaint database to analyze the nature and scope of the complaints. FCC can then begin to determine whether its current enforcement efforts are sufficient in combating junk faxers, and whether any additional enforcement mechanisms might be needed to protect consumers. Evaluate whether its staff and other resources are appropriately aligned to carry out its junk fax responsibilities. This could include, but not be limited to, evaluating the benefits of targeting staff resources to issue more citations that could prompt more violators to cease their offending behavior. We provided a draft of this report to FCC for comment. Senior officials from the commission’s Enforcement and Consumer & Governmental Affairs Bureaus provided oral comments. FCC generally concurred with our recommendations and noted that it has already begun taking steps to address them. For example, FCC officials stated that staff have been working to implement a new data management system that will in part consolidate all inquiry and complaint data into a new database by May 2006.
FCC officials said this new database will identify possible duplicate complaint records and increase the efficiency of processing junk fax inquiries and complaints. They also said discussions on developing additional modifications to the new database are now under way, including modifications that would eliminate the need for EB to have its own enforcement spreadsheet. In the interim, FCC officials said CGB and EB staff are planning to link the EB spreadsheet to the new database, but the officials could not provide a workplan describing how and when this linkage would be accomplished. FCC officials said they take issue with our conclusion that FCC’s current process for prioritizing junk fax complaints for enforcement may not identify the major alleged violators and repeat offenders. FCC believes that the number of complaints transferred to EB’s spreadsheet for review, although only a portion of the total number of complaints received, is large enough to identify the major alleged violators and repeat offenders. We reiterate that EB’s spreadsheet contains less than half of the total number of junk fax complaints received and contains almost none of the Internet complaints. FCC has done no analysis to determine whether the complaints that have been excluded from enforcement consideration involve the same entities they have identified as major alleged violators. Moreover, searching for repeat offenders (junk fax violators that have already been warned by FCC to cease their activities) using a subset of the complaints received is not as effective since even one additional violation makes the entity subject to further enforcement action, including monetary forfeiture. 
Since FCC is beginning to explore changes to its database to eliminate the need for a separate EB spreadsheet, as previously noted, it is possible for FCC to also explore changes to the database that would improve EB’s ability to analyze all complaint data to better identify the major alleged violators, as we have recommended. Improved search functions within the database would also aid in identifying the repeat offenders. FCC officials also said the agency had included a consumer protection goal that covered junk fax issues in the agency’s 2004 performance summary. FCC officials also provided us with 2004 and 2005 CGB goals. However, after reviewing these documents, we maintain that FCC does not have goals or measures specifically related to junk fax enforcement. We reiterate that the introduction of recognized performance management practices into FCC’s operations could improve the performance of its junk fax enforcement efforts. FCC also provided technical comments that were incorporated throughout this report as appropriate. We are sending copies of this report to interested congressional committees and the Chairman, FCC. We will make copies available to others upon request. The report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me on (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Faye Morrison, Assistant Director; Kimberly Berry; Elizabeth Eisenstadt; Edda Emmanuelli-Perez; Chad Factor; Michele Fejfar; Mike Mgebroff; Josh Ormond; Terri Russell; and Mindi Weisenbloom. This appendix provides a brief description of how unsolicited advertisements provided through commercial telephone calls and e-mails are regulated and how the regulations are enforced. 
In response to consumer frustration and dissatisfaction with advertising via unsolicited telephone calls and e-mails, Congress has passed several statutes directing the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) to regulate unsolicited advertisements delivered by telephone or e-mail. The Telephone Consumer Protection Act of 1991 is FCC’s basic statutory mandate with respect to telemarketers and applies to unwanted telemarketing calls and facsimile (fax) solicitations. The Telemarketing and Consumer Fraud and Abuse Prevention Act of 1994 is FTC’s specific statutory mandate regarding telemarketing. The Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003 (CAN-SPAM Act) provides FTC with the authority to regulate commercial e-mails whose “primary purpose” is the “commercial advertising or promoting of a commercial product or service.” FCC has authority under the CAN-SPAM Act to regulate unsolicited commercial messages on wireless devices. Thus, FCC’s and FTC’s enforcement activities are based upon different statutory authorities. FCC’s enforcement efforts are generally accomplished through an administrative process. FTC’s enforcement actions are usually filed in federal district court and seek injunctive relief; consumer redress; and, in some circumstances, civil penalties. The latter actions are filed by the Department of Justice (DOJ) on behalf of FTC. Both commissions can obtain civil penalties up to $11,000 per violation. The Telephone Consumer Protection Act of 1991 (TCPA) was created in response to consumer concerns about the growing number of unsolicited telemarketing calls to their homes and the increasing use of automated and prerecorded messages. FCC’s rules under that act prohibit telephone solicitation calls to homes between the hours of 9 p.m. and 8 a.m.
Also, under the rules, anyone making a call to a home must provide his or her name, the name of the person or entity on whose behalf the call is being made, and a telephone number or address at which the person or entity may be contacted. These telemarketing rules do not apply to calls or messages placed with a consumer’s prior express permission, by or on behalf of a tax-exempt nonprofit organization, or from a person or organization with whom the consumer has an established business relationship (EBR). TCPA telephone solicitation violations are enforced in the same manner as TCPA junk fax violations. The purpose of the Telemarketing and Consumer Fraud and Abuse Prevention Act of 1994 was to combat telemarketing fraud by providing law enforcement agencies with new tools and to give consumers new protections. The act directed FTC to issue a rule prohibiting deceptive and abusive telemarketing acts or practices, and specified, among other things, certain acts or practices FTC’s rule must address, including “…unsolicited telephone calls which the reasonable consumer would consider coercive or abusive of such consumer’s right to privacy.” FTC issued its original Telemarketing Sales Rule (TSR) in 1995. TSR requires certain disclosures and prohibits misrepresentations. The rule’s provisions include the following: (1) it restricts calls to the hours between 8:00 a.m. and 9:00 p.m.; (2) it forbids telemarketers from calling consumers who have asked not to be called; and (3) it requires certain prompt disclosures, prohibits certain misrepresentations and lying to get consumers to pay, and makes it illegal for a telemarketer to withdraw money directly from a checking account without the account holder’s specific, verifiable authorization. TSR was amended in 2003. The amended TSR established the National Do-Not-Call Registry.
In addition, the amended TSR places restrictions on unauthorized billing, reduces abandoned calls, and requires caller identification transmissions. Several types of calls are expressly exempted from TSR coverage, including calls initiated by consumers in response to direct mail (provided certain disclosures are made), calls initiated by consumers in response to advertisements in the general media (such as newspapers), and business-to-business calls. Catalog sales calls also are exempt. Under the statute, violations of TSR are treated as “unfair or deceptive acts or practices in violation of the FTC Act.” FTC’s enforcement actions generally are accomplished by seeking injunctive relief and consumer redress. Under some circumstances (e.g., do-not-call violations), injunctions and sometimes civil penalties (up to $11,000 per violation) are sought. Actions seeking civil penalties are filed by DOJ on behalf of FTC and are less common. FTC itself files and litigates its actions seeking injunctive relief and consumer redress. States, through their attorneys general, may bring civil actions on behalf of their residents to enjoin the violation; enforce compliance with TSR; obtain damages, restitution, or other compensation on behalf of residents; and obtain such other relief as the court may deem appropriate. Private parties may also bring a civil action within 3 years after discovery of the violation, if the amount in controversy exceeds the sum or value of $50,000 in actual damages for each person adversely affected by such telemarketing. Such an action may be brought to enjoin such telemarketing, enforce compliance with any rule, obtain damages, or obtain such additional and other relief as the court may deem appropriate. In January 2002, FTC proposed a National Do-Not-Call registry. One year later, FTC amended its TSR to create the national registry and prohibit covered telemarketing calls to consumers who registered their telephone numbers.
FCC revised its regulations pursuant to TCPA in June 2003, requiring telemarketers under its jurisdiction to comply with the requirements of the national registry. In March 2003, Congress passed the Do-Not-Call Implementation Act, which authorized FTC to establish fees “sufficient to implement and enforce” the national registry. In September 2003, in response to legal challenges to the national registry and requirements, Congress passed additional legislation (1) expressly authorizing FTC to implement and enforce a National Do-Not-Call Registry under the Telemarketing and Consumer Fraud and Abuse Prevention Act and (2) ratifying the National Do-Not-Call Registry regulation as promulgated by FTC in 2002. Under FTC’s and FCC’s rules, the registry covers both traditional (wired) and mobile (wireless) telephones. The registry is national in scope, applies to all telemarketers (with the exception of certain nonprofit organizations), and covers both interstate and intrastate telemarketing calls. Commercial telemarketers are not allowed to call a consumer if his or her telephone number is on the registry, unless there is an EBR between the seller and the consumer or the consumer has given prior written consent to be called. Nontelemarketing calls, such as political fundraising, market research surveys, or debt collection, are not prohibited by the registry’s provisions. The national registry started accepting consumer telephone number registrations in late June 2003, and telemarketers began accessing the national registry to obtain registered consumer telephone numbers in September 2003. FTC and FCC began enforcing the provisions of the national registry in October 2003. FTC and FCC have different but overlapping jurisdiction over the activities of entities that make telemarketing calls. FTC’s authority under its telemarketing law is limited to entities engaged in interstate telemarketing, while FCC’s authority covers both intrastate and interstate entities. 
In addition, by statute, certain entities are wholly or partially exempt from FTC jurisdiction but remain subject to FCC jurisdiction. These entities include common carriers, banks, credit unions, savings and loan institutions, airlines, nonprofit organizations, and insurance companies. FTC and FCC do not take action on every complaint alleging a violation of the national registry provision; rather, they consider a number of factors—such as the number and persistence or duration of complaints filed against a telemarketer, the nature of the claims made by the telemarketer, and any past history of complaints or law violations—to determine whether to take action against a telemarketer for violations of the national registry provision. The CAN-SPAM Act of 2003 establishes requirements for those who send commercial e-mail, spells out penalties for spammers and companies whose products are advertised in spam if they violate the law, and gives consumers the right to ask e-mailers to stop spamming them. The law covers e-mail whose primary purpose is advertising or promoting a commercial product or service. A “transactional or relationship message” (e.g., an e-mail that facilitates an agreed-upon transaction or updates a customer in an EBR) may not contain false or misleading routing information, but otherwise is exempt from most provisions of the CAN-SPAM Act. State laws specifically related to commercial e-mail are preempted. However, state laws that are not specifically applicable to e-mail, such as trespass, contract, tort law, or state laws that relate to fraud or computer crimes, are not preempted. Under the CAN-SPAM Act’s major provisions, false or misleading header information is prohibited. An e-mail’s “From,” “To,” and routing information (including the originating domain name and e-mail address) must be accurate and identify the person who initiated the e-mail.
The law prohibits deceptive subject lines and requires that the e-mail give recipients an opt-out method. Specifically, the sender must provide a return e-mail or another Internet-based response mechanism that allows a recipient to request that the sender not send future e-mails to the e-mail address. Senders must honor opt-out requests. Additionally, the act requires that the commercial e-mail be identified as an advertisement and include the sender’s valid physical postal address. FTC (and various other agencies) is authorized to enforce the CAN-SPAM Act. Each violation is subject to fines of up to $11,000. FTC also responds to deceptive commercial e-mail as a violation of the FTC Act. State attorneys general, state law enforcement agencies, and Internet service providers (ISPs) may also bring suit under CAN-SPAM for statutorily set damages. In a December 2005 report to Congress, FTC stated that the commission had brought 20 cases alleging violation of the act. The report also noted that at the state level, three attorneys general have filed a total of three actions—one with FTC as a coplaintiff—in federal court, naming 15 defendants under the CAN-SPAM Act. In addition, the report stated that ISPs have also filed CAN-SPAM Act suits initially against more than 100 known defendants and more than 580 unknown (John Doe) defendants. DOJ has the authority to enforce the criminal penalties established under the act. Criminal penalties may include fines or imprisonment. According to the legislative history of the act, aggressive civil and criminal enforcement actions were needed to curb the growth of spam on all fronts. The criminal provisions were targeted to those who use fraudulent and deceptive means to send unwanted e-mail messages.
The need for these criminal provisions was based, in part, on a study by FTC that found that 66 percent of spam contained some kind of false, fraudulent, or misleading information, and one-third of all spam contained a fraudulent return e-mail address that was included in the routing information, or header, of the e-mail message. Section 4 of the CAN-SPAM Act criminalized five types of activities in connection with e-mail, set forth the maximum penalties for each type, and called for the U.S. Sentencing Commission to consider new sentencing guidelines. Specifically, the five types of activities are as follows:

- accessing a protected computer without authorization to send multiple commercial e-mail messages,
- using open relays with intent to deceive in sending multiple commercial e-mail messages,
- using materially false header information in sending commercial e-mail messages,
- falsely registering e-mail accounts or domain names in connection with sending multiple commercial e-mail messages, and
- falsely claiming to be the registrant of Internet protocol addresses for sending spam.

The criminal penalties fall into three tiers. First, a 5-year statutory maximum applies when the CAN-SPAM violation is in furtherance of any felony under state or federal law, or when the defendant has previously been convicted of an offense under 18 U.S.C. § 1037. Second, a 3-year maximum applies for convictions of hacking into a computer, or of using a computer system that the owner has made available for other purposes, as a conduit for bulk commercial e-mail, or for other violations of 18 U.S.C. § 1037(a) when one of several additional conditions applies. The conditions relate to the measure of the economic gain or loss, the volume of e-mail sent, the number of false registrations used, or whether the defendant had a leadership role in the offense. Finally, a 1-year statutory maximum applies for any other violation of 18 U.S.C. § 1037. In addition, 18 U.S.C. 
§ 1037(c) allows DOJ to seek the criminal forfeiture of both property obtained from spamming profits and the computers used to send the spam. In December 2005, FTC reported to Congress that DOJ had brought four criminal prosecutions under the CAN-SPAM Act, and that numerous other nonpublic investigations were ongoing. Lastly, the CAN-SPAM Act supplements some consumer protections that were already established by TCPA for regulating unwanted text messages and e-mail on mobile devices. Together, the two laws impose limitations on both unsolicited telephone marketing calls and any other calls to a paging service, cellular telephone service, other radio common carrier service, or any service for which the person being called would be charged for the call. Under TCPA rules, a “call” includes text messaging if the message is sent to a telephone number rather than an e-mail account. Electronic messages can be sent to mobile devices using a variety of methods, and the type of technology used to send an electronic message determines how it is regulated. The CAN-SPAM Act required that FCC adopt rules to protect consumers from receiving unsolicited mobile service commercial messages. Under the act, a mobile service commercial message is a commercial e-mail message that is transmitted directly to a wireless device that is utilized by a subscriber of commercial mobile service in connection with that service. The act defines an e-mail message as a message having a unique e-mail address that includes a reference to an Internet domain. FCC issued rules in August 2004, adopting a general prohibition on sending commercial messages to any address referencing an Internet domain name associated with wireless subscriber message services. To assist the senders of such messages in identifying those subscribers, FCC requires commercial radio service providers to submit those domain names to the commission for inclusion on a public list. 
FCC pursues violations of both CAN-SPAM and TCPA, as they relate to wireless devices, under its general enforcement authority. As part of our study, we considered whether additional enforcement measures might be necessary to protect consumers from junk faxes, and whether establishing junk fax penalties and enforcement actions for repeat violators or abusive violations similar to the criminal penalties under CAN-SPAM would have a greater deterrent effect. As explained in the letter of this report, without FCC establishing performance goals and measures and analyzing complaint and enforcement data, it is not possible to explore the effectiveness of current enforcement measures. Without first gaining an understanding of the effectiveness of current enforcement measures, it is similarly not possible to determine whether additional enforcement measures are necessary to protect consumers. We did, however, ask federal government officials, representatives of the state attorneys general, consumer advocates, and business associations for their opinions regarding whether additional enforcement measures are currently necessary to enforce junk fax violations. Those with whom we spoke generally did not believe that additional measures were necessary at this time and did not support imposing criminal sanctions on junk fax violators similar to those imposed on spammers under CAN-SPAM. A few of those with whom we spoke thought that the role of the telephone companies might be expanded, similar to the role of ISPs under the CAN-SPAM Act, so that telephone companies could bring suit against junk faxers using their networks. The Junk Fax Prevention Act of 2005 required GAO to report to Congress on FCC’s enforcement of the junk fax laws. Accordingly, we answered the following questions: (1) What procedures has FCC established for taking action on junk fax complaints—including receipt, acknowledgment, investigation, and enforcement—and to what extent has it taken such action? 
(2) What are the strengths and weaknesses of FCC’s junk fax procedures? and (3) What challenges does FCC face in carrying out its junk fax responsibilities? To determine FCC’s procedures for taking action on junk fax complaints, we reviewed provisions of TCPA as well as FCC’s rules and procedures for implementing the provisions of the act. We interviewed officials from FCC’s Consumer & Governmental Affairs Bureau (CGB)—whose responsibilities include developing FCC rules and accepting and acknowledging complaints—and FCC’s Enforcement Bureau—whose responsibilities include junk fax enforcement. Additionally, we reviewed FCC’s guidance to complainants for submitting junk fax complaints as well as FCC’s procedures for receiving and documenting these complaints. Finally, we obtained and reviewed FCC’s procedures for determining which complaints would receive further investigative and enforcement actions. To determine the extent to which FCC has taken action on junk fax complaints, we obtained and analyzed FCC’s database for documenting junk fax complaints and the spreadsheet used for determining investigatory and enforcement actions. We obtained summary data on the number of complaints received from 2000 through 2005, by source and method. We also obtained detailed information on the number of formal enforcement actions taken against junk faxers since the formation of FCC’s Enforcement Bureau. Further, to determine the type of concerns expressed by consumers and businesses, we reviewed some individual consumer and business comments submitted to FCC as part of the junk fax complaints and contained in FCC’s database. To assess the reliability of FCC’s complaint data, we interviewed FCC officials responsible for the database regarding data entry and control procedures and reviewed existing documentation about the system. 
We conducted limited electronic tests on 2005 data to determine missing data and duplicative complaint identification numbers; these tests revealed only minor problems. We also conducted manual reviews to identify any discrepancies in the database. For example, we reviewed a portion of the comment fields in the database and found that some complaints that were coded as junk fax complaints should not have been. Since this type of review requires reading the comments for each complaint entered, which is resource-intensive, we did not review all of the comments to determine the extent of this problem. CGB officials acknowledged limitations of the data, including reliability problems in previous years of tracking complaint information, possible inaccuracies in coding, and continual changes to more recent data as additional complaints are added. We determined that the data were sufficiently reliable to present overall trends and approximate figures. Specifically, we report only overall complaint numbers for 2000 through 2002, and approximate numbers at a more detailed level for complaints from 2003 through 2005. To determine the strengths and weaknesses of FCC’s junk fax procedures, we analyzed these procedures, including those used to determine which junk fax complaints would be considered for further investigatory and enforcement actions. In addition, we reviewed business and consumer comments submitted to FCC during junk fax rulemaking and reconsideration of existing rules. We also analyzed all junk fax consumer complaint guidance provided by FCC to determine if the guidance was consistent with the enforcement procedures. To determine the challenges FCC faces in carrying out its junk fax responsibilities, we reviewed provisions of the Government Performance and Results Act of 1993, as well as documents and records used by FCC to establish goals and performance measures—that is, budget justifications, performance summaries, and strategic plans. 
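The electronic tests described above (checks for missing data and duplicative complaint identification numbers) are straightforward to sketch. The snippet below is illustrative only; the field names and sample records are hypothetical and do not reflect FCC's actual database schema.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical complaint records; the columns are invented for illustration.
SAMPLE = """complaint_id,date_received,fax_number
1001,2005-03-02,202-555-0101
1002,2005-03-04,
1003,2005-03-05,202-555-0134
1002,2005-03-09,202-555-0177
"""

def check_complaints(text):
    rows = list(csv.DictReader(StringIO(text)))
    # Duplicative identification numbers: IDs that appear more than once.
    counts = Counter(r["complaint_id"] for r in rows)
    duplicates = sorted(cid for cid, n in counts.items() if n > 1)
    # Missing data: records with any empty field.
    missing = [r["complaint_id"] for r in rows
               if any(v.strip() == "" for v in r.values())]
    return duplicates, missing

dups, miss = check_complaints(SAMPLE)
print(dups)   # ['1002']
print(miss)   # ['1002']
```

A review of coding accuracy (e.g., complaints miscoded as junk fax complaints), by contrast, requires reading free-text comment fields and is harder to automate, which is consistent with the resource constraint noted above.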
We also reviewed FCC’s quarterly complaint reports to determine the level of analysis being conducted on junk fax complaints. Finally, we used existing statutes and regulations to provide information on additional enforcement measures and penalties that have been established to protect consumers from other types of unsolicited advertising. We interviewed FTC staff, representatives from the National Association of Attorneys General, and representatives from industry groups to obtain more information on different enforcement rules and actions. We conducted our work from November 2005 through March 2006 in accordance with generally accepted government auditing standards.

The Telephone Consumer Protection Act of 1991 prohibited invasive telemarketing practices, including the faxing of unsolicited advertisements, known as "junk faxes," to individual consumers and businesses. Junk faxes create costs for consumers (paper and toner) and disrupt their fax operations. The Junk Fax Prevention Act of 2005 clarified an established business relationship exemption, specified opt-out procedures for consumers, and requires the Federal Communications Commission (FCC)—the federal agency responsible for junk fax enforcement—to report annually to Congress on junk fax complaints and enforcement. The law also required GAO to report to Congress on FCC's enforcement of the junk fax laws. This report addresses (1) FCC's junk fax procedures and outcomes, (2) the strengths and weaknesses of FCC's procedures, and (3) FCC's junk fax management challenges. FCC has procedures for receiving and acknowledging the rapidly increasing number of junk fax complaints, but the numbers of investigations and enforcement actions have generally remained the same. In 2000, FCC recorded about 2,200 junk fax complaints; in 2005, it recorded over 46,000. Using its procedures to review the complaints, FCC's Enforcement Bureau (EB) issued 261 citations (i.e., warnings) from 2000 through 2005. 
EB has ordered six companies to pay forfeitures for continuing to violate the junk fax rules after receiving a citation. The six forfeitures totaled over $6.9 million, none of which has been collected by the Department of Justice for various reasons. EB officials cited competing demands, resource constraints, and the rising sophistication of junk faxers in hiding their identities as hindrances to enforcement. An emphasis on customer service, an effort to document consumers' complaints, and an attempt to target enforcement resources efficiently are the strengths of FCC's procedures; however, inefficient data management—resulting in time-consuming manual data entry, data errors, and, most important, the exclusion of the majority of complaints from decisions about investigations and enforcement—is a weakness. FCC's guidance to consumers does not provide them with all of the information they need to support FCC's enforcement efforts. FCC faces management challenges in carrying out its junk fax responsibilities. The commission has no clearly articulated long-term or annual goals for junk fax monitoring and enforcement, and it is not analyzing the junk fax data. Without analysis, FCC cannot explore the need for, or implement, changes to its rules, procedures, or consumer guidance that might help deter junk fax violations or give consumers a better understanding of the junk fax rules. Most important, without performance goals and measures and without analysis of complaint and enforcement data, it is not possible to explore the effectiveness of current enforcement measures.
Drinking water systems vary by size and other factors, but as illustrated in figure 1, they most typically include a supply source, treatment facility, and distribution system. A water system’s supply source may be a reservoir, aquifer, or well, or a combination of these sources. Some systems may also include a dam to help maintain a stable water level, and aqueducts and transmission pipelines to deliver the water to a distant treatment plant. The treatment process generally uses filtration, sedimentation, and other processes to remove impurities and harmful agents, and disinfection processes such as chlorination to eliminate biological contaminants. Chemicals used in these processes, most notably chlorine, are often stored on site at the treatment plant. Distribution systems comprise water towers, piping grids, pumps, and other components to deliver treated water from treatment systems to consumers. Particularly among larger utilities, distribution systems may contain thousands of miles of pipes and numerous access points. Nationwide, there are more than 160,000 public water systems that individually serve from as few as 25 people to 1 million people or more. As figure 2 illustrates, nearly 133,000 of these water systems serve 500 or fewer people. Only 466 systems serve more than 100,000 people each, but these systems, located primarily in urban areas, account for nearly half of the total population served. Until the 1990s, emergency planning at drinking water utilities generally focused on responding to natural disasters and, in some cases, domestic threats such as vandalism. In the 1990s, however, both government and industry officials broadened the process to account for terrorist threats. Among the most significant actions taken was the issuance in 1998 of Presidential Decision Directive 63 to protect the nation’s critical infrastructure against criminal and terrorist attacks. 
The directive designated the Environmental Protection Agency (EPA) as the lead federal agency to address the water infrastructure and to work with both public and private organizations to develop emergency preparedness strategies. EPA, in turn, appointed the Association of Metropolitan Water Agencies to coordinate the water industry’s role in emergency preparedness. During this time, this public-private partnership focused primarily on cyber security threats for the several hundred community water systems that each served over 100,000 persons. The partnership was broadened in 2001 to include both the drinking water and wastewater sectors, and focused on systems serving more than 3,300 people. Efforts to better protect drinking water infrastructure were accelerated dramatically after the September 11 attacks. EPA and the drinking water industry launched efforts to share information on terrorist threats and response strategies. They also undertook initiatives to develop guidance and training programs to assist utilities in identifying their systems’ vulnerabilities. As a major step in this regard, EPA supported the development, by American Water Works Association Research Foundation and Sandia National Laboratories, of a vulnerability assessment methodology for larger drinking water utilities. The push for vulnerability assessments was then augmented by the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (Bioterrorism Act). Among other things, the act required each community water system serving more than 3,300 individuals to conduct a detailed vulnerability assessment by specified dates in 2003 or 2004, depending on their size. Since we issued our report in October, several Homeland Security Presidential Directives (HSPDs) were issued that denote new responsibilities for EPA and the water sector. 
HSPD 7 designates EPA as the water sector’s agency specifically responsible for infrastructure protection activities, including developing a specific water sector plan for the National Infrastructure Protection Plan that the Department of Homeland Security must produce. HSPD 9 directs EPA to develop a surveillance and monitoring program to provide early warning in the event of a terrorist attack using diseases, pests, or poisonous agents. EPA is also charged, under HSPD 9, with developing a nationwide laboratory network to support the routine monitoring and response requirements of the surveillance program. HSPD 10 assigns additional responsibilities to EPA for decontamination efforts. To obtain information for our analysis, we conducted a three-phase, Web-based survey of 43 experts on drinking water security. In identifying these experts, we sought to achieve balance in terms of area of expertise (i.e., state and local emergency response, engineering, epidemiology, public policy, security and defense, drinking water treatment, risk assessment and modeling, law enforcement, water infrastructure, resource economics, bioterrorism, public health, and emergency and crisis management). In addition, we attempted to achieve participation by experts from key federal organizations, state and local agencies, industry and nonprofit organizations, and water utilities serving populations of varying sizes. To obtain information from the expert panel, we employed a modified version of the Delphi method. The Delphi method is a systematic process for obtaining individuals’ views and seeking consensus among them, if possible, on a question or problem of interest. Since first developed by the RAND Corporation in the 1950s, the Delphi method has generally been implemented using face-to-face group discussions. For this study, however, we administered the method through the Internet. 
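As a rough illustration of the consensus-seeking idea behind the Delphi method (not a description of GAO's actual survey instrument), each round of numeric ratings can be summarized by the group median and the interquartile range, with the panel treated as having reached consensus once the spread falls below a chosen threshold:

```python
from statistics import median, quantiles

def round_summary(ratings, consensus_iqr=1.0):
    """Summarize one Delphi round of numeric ratings (e.g., 1-5 priority scores).

    Returns the group median, the interquartile range (IQR), and whether
    the spread is tight enough to call consensus. The 1.0 threshold is an
    illustrative choice, not a standard value.
    """
    q1, _, q3 = quantiles(ratings, n=4)
    iqr = q3 - q1
    return median(ratings), iqr, iqr <= consensus_iqr

# Round 1: wide disagreement; round 2: ratings tightened after feedback.
print(round_summary([1, 2, 3, 5, 5]))
print(round_summary([3, 3, 4, 4, 4]))
```

In a real Delphi exercise, panelists would see such summaries between rounds, along with anonymized comments, and could revise their ratings before the next round.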
We conducted our work in accordance with generally accepted government auditing standards between July 2002 and August 2003. Our panel of experts identified several key physical assets of drinking water systems as the most vulnerable to intentional attack. In general, their observations were similar to those of public and private organizations that have assessed the vulnerability of these systems to terrorist attacks, including the National Academy of Sciences, Sandia National Laboratories, and key industry associations. In particular, as shown in figure 3, nearly 75 percent of the experts (32 of 43) identified the distribution system or its components as among the top vulnerabilities of drinking water systems. Experts also identified overarching issues compromising how well these assets are protected. Chief among these issues are (1) a lack of redundancy in vital systems, which increases the likelihood that an attack could render a system inoperable; and (2) the difficulty many systems face in understanding the nature of the threats to which they are exposed. I would first like to discuss the distribution system, since it was cited most frequently as a key vulnerability by our panelists. The distribution system delivers drinking water primarily through a network of underground pipes to homes, businesses, and other customers. While the distribution systems of small drinking water utilities may be relatively simple, larger systems serving major metropolitan areas can be extremely complex. One such system, for example, measures water use through 670,000 metered service connections, and distributes treated water through nearly 7,100 miles of water mains that range from 2 inches to 10 feet in diameter. In addition to these pipelines and connections, other key distribution system components typically include numerous pumping stations, treated water storage tanks, and fire hydrants. 
In highlighting the vulnerability of distribution systems, our panelists most often cited their accessibility at so many points. One expert, for example, cited the difficulty in preventing the introduction of a contaminant into the distribution system from inside a building “regardless of how much time, money, or effort we spend protecting public facilities.” Experts also noted that since the water in the distribution system has already been treated and is on the way to the consumer, the distribution of a chemical, biological, or radiological agent in such a manner would be virtually undetectable until it was too late to prevent harm. While research on the fate and transport of contaminants within water treatment plants and distribution systems is under way, according to one expert, few readily available technologies can detect a wide range of contaminants once treated water is released through the distribution system for public use. Several other components, though not considered as critical as the distribution system, were still the subject of concern. Nearly half the experts (20 of 43) identified source water as among drinking water systems’ top vulnerabilities. One expert noted, for example, that “because of the vast areas covered by watersheds and reservoirs, it is difficult to maintain security and prevent intentional or accidental releases of materials that could have an adverse impact on water quality.” Yet some experts cited factors that mitigate the risks associated with source water, including (1) the large volume of water typically involved, which in many cases could dilute the potency of contaminants; (2) the length of time (days or even weeks) that it typically takes for source water to reach consumers; and (3) the treatment process that source water goes through, in which many contaminants are removed. 
Also cited as vulnerabilities were the sophisticated computer systems that drinking water utilities have come to rely upon to manage key functions. These Supervisory Control and Data Acquisition (SCADA) systems allow operators to monitor and control processes throughout their drinking water systems. Although SCADA systems have improved water utilities’ efficiency and reduced costs, almost half of the experts on our panel (19 of 43) identified them as among these utilities’ top vulnerabilities. Thirteen of the 43 experts identified treatment chemicals, particularly chlorine used for disinfection, as among utilities’ top vulnerabilities. Experts cited the inherent danger of storing large cylinders of a chemical on site, noting that their destruction could release toxic gases in densely populated areas. Some noted, however, that this risk has been alleviated by utilities that have chosen to use the more stable liquid form of chlorine instead of the more vulnerable compressed gas canisters that have traditionally been used. Finally, experts identified overarching issues that compromise the integrity of multiple physical assets, or even the entire drinking water system. Among these is the lack of redundancy among vital systems. Many drinking water systems are “linear”—that is, they have single transmission lines leading into the treatment facility and single pumping stations along the system, and often use a single computer operating system. They also depend on the electric grid, transportation systems, and single sources of raw materials (e.g., treatment chemicals). Many experts expressed concern that problems at any of these “single points of failure” could render a system inoperable unless redundant systems are in place. Experts also cited the lack of sufficient information to understand the most significant threats confronting individual utilities. 
According to the American Water Works Association, assessments of the most credible threats facing a utility should be based on knowledge of the “threat profile” in its specific area, including information about past events that could shed light on future risks. Experts noted, however, that such information has been difficult for utilities to obtain. One expert suggested that the intelligence community needs to develop better threat information and share it with the water sector. Many drinking water utilities have been financing at least some of their security upgrades by passing along the costs to their customers through rate increases. Given the cost of these upgrades, however, the utility industry is also asking that the taxpayer shoulder some of the burden through the appropriations process. Should Congress and the administration agree to this request, they will need to address key issues concerning who should receive the funds and how they should be distributed. With this in mind, we asked our panel of experts to focus on the following key questions: (1) To what extent should utilities’ vulnerability and risk assessment information be considered in making allocation decisions? (2) What types of utilities should receive funding priority? and (3) What are the most effective mechanisms for directing these funds to recipients? Regarding the first of these questions, about 90 percent of the experts (39 of 43) agreed “strongly” or “somewhat” that funds should be allocated on the basis of vulnerability assessment information, with some citing the vulnerability assessments (VAs) required by the Bioterrorism Act as the best available source of this information. Several experts, however, pointed to a number of complicating factors. Perhaps the most significant constraint is the Bioterrorism Act’s provision precluding the disclosure of any information that is “derived” from vulnerability assessments submitted to EPA. 
The provision protects sensitive information about each utility’s vulnerabilities from individuals who may then use the information to harm the utility. Hence, the law specifies that only individuals designated by the EPA Administrator may have access to the assessments and related information. Yet, according to many of the experts, even those individuals may face constraints in using the information. They may have difficulty, for example, in citing vulnerability assessments to support decisions on allocating security-related funds among utilities, as well as decisions concerning research priorities and guidance documents. Others cited an inherent dilemma affecting any effort to set priorities for funding decisions based on the greatest risk—whatever does not receive attention becomes a more likely target. Regarding the second question, concerning the types of utilities that should receive funding priority, 93 percent of the experts (40 of 43) indicated that utilities serving high-density population areas should receive a high or the highest priority in funding. (See figure 4.) Fifty-five percent deemed this criterion the highest priority. Most shared the view of one expert who noted that directing limited resources to protect the greatest number of people is a common factor when setting funding priorities. Experts also assigned high priority to utilities serving critical assets, such as national icons representing the American image, military bases, and key government, academic, and cultural institutions. At the other end of the spectrum, only about 5 percent of the experts (2 of 43) stated that utilities serving rural or isolated populations should receive a high or highest priority for federal funding. These two panelists commented that such facilities are least able to afford security enhancements and are therefore in greatest need of federal support. 
Importantly, the relatively small percentage of experts advocating priority for smaller systems may not fully reflect the concern among many of the experts for the safety of these utilities. For example, several who supported higher priority for utilities serving high-density populations cautioned that while problems at a large utility will put more people at risk, utilities serving small population areas may be more vulnerable because of weaker treatment capabilities, fewer highly trained operators, and more limited resources. Regarding the mechanisms for distributing federal funds, 86 percent of the experts (37 of 43) indicated that direct grants would be “somewhat” or “very” effective in allocating federal funds. (See figure 5.) One expert cited EPA’s distribution of direct security-related grant funds in 2002 to larger systems to perform their VAs as a successful initiative. Importantly, 74 percent also supported a matching requirement for such grants as somewhat or very effective. One expert pointed out that such a requirement would effectively leverage limited federal dollars, thereby providing greater incentive to participate. The Drinking Water State Revolving Fund (DWSRF) received somewhat less support as a mechanism for funding security enhancements. About half of the experts (22 of 43) indicated that the fund would be somewhat or very effective in distributing federal funds, but less than 10 percent indicated that it would be very effective. One expert cautioned that the DWSRF should be used only if a process were established that separated funding for security-related needs from other infrastructure needs. Others stated that as a funding mechanism, the DWSRF would not be as practical as other mechanisms for funding improvements requiring immediate attention, but would instead be better suited for longer-term improvements. 
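The prioritization criteria the experts weighed above, population served and service to critical assets, can be combined into a toy scoring function. The weights and system names below are invented purely for illustration and do not come from the panel:

```python
# Purely illustrative weighting of the criteria the panel emphasized;
# the weights are invented for this example, not drawn from the panel.
def funding_priority(population_served, serves_critical_assets):
    score = population_served / 100_000   # favor high-density systems
    if serves_critical_assets:            # icons, military bases, key institutions
        score += 2.0
    return score

# Hypothetical systems: (population served, serves critical assets)
systems = {
    "large metro system": (1_200_000, False),
    "capital-area system": (500_000, True),
    "small rural system": (2_500, False),
}
ranked = sorted(systems, key=lambda s: funding_priority(*systems[s]), reverse=True)
print(ranked)  # highest-priority system first
```

As the panelists' caveats suggest, any such formula would need additional terms, for example a vulnerability factor for small systems with weaker treatment capabilities, before it resembled a defensible allocation rule.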
When experts were asked to identify specific security-enhancing activities most deserving of federal support, their responses generally fell into three categories: (1) physical and technological upgrades to improve security, and research to develop technologies to prevent, detect, or respond to an attack; (2) education and training to support, among other things, simulation exercises to provide responders with experience in carrying out emergency response plans, and specialized training of utility security staff; and (3) strengthening key relationships between water utilities and other agencies that may have key roles in an emergency response, such as public health agencies, law enforcement agencies, and neighboring drinking water systems. As illustrated in figure 6, specific activities to enhance physical security and support technological improvements generally fell into nine subcategories. Of these, the development of “near real-time monitoring technologies,” capable of providing near real-time data for a wide array of potentially harmful water constituents, received far more support for federal funding than any other subcategory—over 93 percent of the experts (40 of 43) rated this subcategory as deserving at least a high priority for federal funding. More significantly, almost 70 percent (30 of 43) rated it the highest priority—far surpassing the rating of any other category. These technologies were cited as critical in efforts to quickly detect contamination events, minimize their impact, and restore systems after an event has passed. The experts’ views were consistent with those of the National Academy of Sciences, which in a 2002 report highlighted the need for improved monitoring technologies as one of four highest-priority areas for drinking water research and development. 
The report noted that such technologies differ significantly from those currently used for conventional water quality monitoring, stating further that sensors are needed for “better, cheaper, and faster sensing of chemical and biological contaminants.” In addition to real-time monitoring technologies, the experts voiced strong support for (1) increasing laboratories’ capacity to deal with spikes in demand caused by chemical, biological, or radiological contamination of water supplies, and (2) “hardening” the physical assets of drinking water facilities through improvements such as adding or repairing fences, locks, lighting systems, and cameras and other surveillance equipment. Regarding the latter, however, some experts cited inherent limitations in attempting to comprehensively harden a drinking water facility’s assets. They noted, in particular, that, unlike nuclear power or chemical plants, a drinking water system’s assets are spread over large geographic areas, particularly the source water and distribution systems. Regarding efforts to improve education and training, over 90 percent of the experts (39 of 43) indicated that improved technical training for security-related personnel warrants at least a high priority for federal funding. (See figure 7.) Over 55 percent (24 of 43) indicated that it deserved the highest priority. To a lesser extent, experts supported general training for other utility personnel to increase their awareness of security issues. The panelists also underscored the importance of conducting regional simulation exercises to test emergency response plans, with more than 88 percent (38 of 43) rating this as a high or highest priority for federal funding. Such exercises are intended to provide utility and other personnel with the training and experience needed both to perform their individual roles in an emergency and to coordinate these roles with other responders. 
Finally, about half the experts assigned at least a high priority to supporting multidisciplinary consulting teams (“Red Teams”), comprising individuals with a wide array of backgrounds, to provide independent analyses of utilities’ vulnerabilities. As illustrated in figure 8, experts also cited the need to improve cooperation and coordination between drinking water utilities and certain other organizations as key to improving utilities’ security. Among the organizations most often identified as critical to this effort are public health and law enforcement agencies, which have data that can help utilities better understand their vulnerabilities and respond to emergencies. In addition, the experts cited the value of utilities’ developing mutual aid arrangements with neighboring utilities. Such arrangements sometimes include, for example, sharing back-up power systems or other critical equipment. One expert described an arrangement in the San Francisco Bay Area—the Bay Area Security Information Collaborative (BASIC)—in which eight utilities meet regularly to address security-related topics. Finally, over 90 percent of the experts (39 of 43) rated the development of common protocols among drinking water utilities to monitor drinking water threats as warranting a high or highest priority for federal funding. Drinking water utilities vary widely in how they perceive threats and detect contamination, in large part because few common protocols exist that would help promote a more consistent approach toward these critical functions. Some experts noted, in particular, the need for protocols to guide the identification, sampling, and analysis of contaminants. 
In 2002, EPA’s Strategic Plan on Homeland Security set forth the goal of significantly reducing unacceptable security risks at water utilities across the country by completing appropriate vulnerability assessments; designing security enhancement plans; developing emergency response plans; and implementing security enhancements. The plan further committed to providing federal resources to help accomplish these goals as funds are appropriated. Key judgments about which recipients should get funding priority, and how those funds should be spent, will have to be made in the face of great uncertainty about the likely targets of attacks, the nature of attacks (whether physical, cyber, chemical, biological, or radiological), and the timing of attacks. The experts on our panel have had to consider these uncertainties in developing their own judgments about these issues. These judgments, while not unanimous on all matters, suggested a high degree of consensus on a number of key issues. We recognize that such sensitive decisions must ultimately take into account political, equity, and other considerations. But we believe they should also consider the judgments of the nation’s most experienced individuals regarding these matters, such as those included on our panel. It is in this context that we offer the results presented in this testimony as information for Congress and the administration to consider as they seek the best way to use limited financial resources to reduce threats to the nation’s drinking water supply. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of this Subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | After the events of September 11, 2001, Congress appropriated over $140 million to help drinking water systems assess their vulnerabilities to terrorist threats and to develop response plans. Utilities are asking for additional funding, however, not only to plan security upgrades but also to support their implementation. This testimony is based on GAO's report, Drinking Water: Experts' Views on How Future Federal Funding Can Best Be Spent to Improve Security ( GAO-04-29 , October 31, 2003). Specifically, GAO sought experts' views on (1) the key security-related vulnerabilities affecting drinking water systems, (2) the criteria for determining how federal funds are allocated among drinking water systems to improve their security, and the methods by which those funds should be distributed, and (3) specific activities the federal government should support to improve drinking water security. GAO's expert panel cited distribution systems as among the most vulnerable physical components of a drinking water utility, a conclusion also reached by key research organizations. Also cited were the computer systems that manage critical utility functions; treatment chemicals stored on-site; and source water supplies. Experts further identified two key factors that constitute overarching vulnerabilities: (1) a lack of the information individual utilities need to identify their most serious threats and (2) a lack of redundancy in vital system components, which increases the likelihood an attack could render an entire utility inoperable. According to over 90 percent of the experts, utilities serving high-density areas deserve at least a high priority for federal funding. 
Also warranting priority are utilities serving critical assets, such as military bases, national icons, and key academic institutions. Direct federal grants were clearly the most preferred funding mechanism, with over half the experts indicating that such grants would be "very effective" in distributing funds to recipients. Substantially fewer recommended using the Drinking Water State Revolving Fund for security upgrades. When asked to identify specific security-enhancing activities most deserving of federal support, experts' responses generally fell into three categories: (1) physical and technological upgrades to improve security and research to develop technologies to prevent, detect, or respond to an attack (experts most strongly supported developing near real-time monitoring technologies to quickly detect contaminants in treated drinking water on its way to consumers); (2) education and training to support, among other things, simulation exercises to provide responders with experience in carrying out emergency response plans; specialized training of utility security staff; and multidisciplinary consulting teams to independently analyze systems' security preparedness and recommend improvements; and (3) strengthening key relationships between water utilities and other agencies that may have key roles in an emergency response, such as public health agencies, law enforcement agencies, and neighboring drinking water systems; this category also includes developing protocols to encourage consistent approaches to detecting and diagnosing threats. |
The key objectives of U.S. public diplomacy are to engage, inform, and influence overseas audiences. Public diplomacy is carried out through a wide range of programs that employ person-to-person contacts; print, broadcast, and electronic media; and other means. Traditionally, U.S. public diplomacy focused on foreign elites—current and future overseas opinion leaders, agenda setters, and decision makers. However, the dramatic growth in global mass communications and other trends have forced a rethinking of this approach, and State has begun to consider techniques for communicating with broader foreign audiences. The BBG, as the overseer of U.S. international broadcasting efforts, supports U.S. public diplomacy’s key objectives by broadcasting fair and accurate information about the United States, while maintaining its journalistic independence as a news organization. The BBG manages and oversees the Voice of America (VOA), WorldNet Television, Radio/TV Marti, Radio Free Europe/Radio Liberty, Radio Farda, the Middle East Television Network (which consists of Radio Sawa and Alhurra, the Board’s new Arabic language television station), the Afghanistan Radio Network, and Radio Free Asia. Radio Sawa, Alhurra, and Radio Farda (Iran) provide regional and local news to countries in the Middle East. Together, State and the BBG spend in excess of $1 billion on public diplomacy programs each year. State’s public diplomacy budget totaled an estimated $628 million in fiscal year 2004. About 51 percent, or $320 million, is slated for the Fulbright and other educational and cultural exchange programs. The remainder covers mostly salaries and expenses incurred by State and embassy officers engaged in information dissemination, media relations, cultural affairs, speaker programs, publications, and other activities. BBG’s budget for fiscal year 2004 is $546 million. This includes more than $42 million for radio and television broadcasting to the Middle East. 
Since initiating the language service review process in 1999, the Board has reduced the scope of operations of more than 25 language services and reallocated about $19.7 million in funds, with the majority redirected toward Central Asia and the Middle East, including $8 million for Radio Farda service to Iran. Since September 11, 2001, State has expanded its efforts in Muslim- majority countries that are considered strategically important in the war on terrorism. State significantly increased the program funding and number of Foreign Service officers in its bureaus of South Asian and Near Eastern Affairs. State has also launched a number of new initiatives targeting broader, younger audiences—particularly in predominantly Muslim countries—that include expanding exchange programs targeting citizens of Muslim countries, informing foreign publics about U.S. policies in the war on terrorism, and demonstrating that Americans and Muslims share certain values. The BBG has also targeted recent initiatives to support the war on terrorism, including Radio Sawa in the Middle East; the Afghanistan Radio Network; and the new Radio Farda service to Iran. In addition, the Board expanded its presence in the Middle East through the launch of the Alhurra satellite television network in mid-February 2004. The 9/11 Commission recommended that the United States rely on such programs and activities to vigorously defend our ideals abroad, just as the United States did during the Cold War. Since September 11, 2001, the State Department has increased its resources and launched various new initiatives in predominantly Muslim countries. For example, while State’s bureau of Europe and Eurasia still receives the largest overall share of overseas public diplomacy resources, the largest percentage increases in such resources since September 11 occurred in State’s bureaus of South Asian and Near Eastern Affairs, where many countries have significant Muslim populations. 
Public diplomacy funding increased in South Asia from $24 million to $39 million and in the Near East from $39 million to $62 million, or by 63 and 58 percent, respectively, from fiscal year 2001 through 2003. During the same period, authorized American Foreign Service officers in South Asia increased from 27 to 31 and in the Near East from 45 to 57, or by 15 percent and 27 percent, respectively. Furthermore, in 2002, State redirected 5 percent of its exchange resources to better support the war on terrorism and to strengthen U.S. engagement with Muslim countries. In 2003, State continued to emphasize exchanges with Muslim countries through its Partnership for Learning Program—designed to target young and diverse audiences through academic and professional exchanges such as the Fulbright, International Visitor, and Citizen Exchange programs. According to State, under this program, 170 high school students from predominantly Islamic countries have already arrived and are living with American families and studying at local high schools. State has also carried out increased exchanges through its Middle East Partnership Initiative, which includes computer and English language training for women newly employed by the Afghan government and a program to assist women from Arab countries and elsewhere in observing and discussing the U.S. electoral process. In addition, State is expanding its American Corners program, as recommended by the Advisory Group on Public Diplomacy in October 2003. This program uses space in public libraries and other public buildings abroad to provide information about the United States. In fiscal year 2004, State is planning to establish 58 American Corners in the Near East and South Asia. In fiscal year 2005, State plans to open 10 in Afghanistan and 15 in Iraq. 
State’s Office of International Information Programs has also developed new initiatives to support the war on terrorism, including a print and electronic pamphlet titled The Network of Terrorism, distributed in 36 languages via hard copy, the Web, and media throughout the world, which documented the direct link between the September 11 perpetrators and al Qaeda; and a publication titled Iraq: From Fear to Freedom to inform foreign audiences of the administration’s policies toward Iraq. Several of the BBG’s new initiatives focus on reaching large audiences in priority markets and supporting the war on terrorism. The first of these programs, Radio Sawa in the Middle East, was launched in March 2002 using modern, market-tested broadcasting techniques and practices, such as the extensive use of music formats. Radio Sawa replaced the poorly performing VOA Arabic service, which had listening rates at around 2 percent of the population. According to BBG survey research, Radio Sawa is reaching 51 percent of its target audience and is ranked highest for news and news trustworthiness in Amman, Jordan. Despite such results, it remains unclear how many people Radio Sawa reaches throughout the entire Middle East because audience research has been performed only in selected markets. Further, the State Inspector General and the Advisory Group on Public Diplomacy for the Arab and Muslim World have raised questions about whether Radio Sawa has focused more on audience size and composition than on potential impact on attitudes in the region. The BBG has also launched the Afghanistan Radio Network and a language service to Iran called Radio Farda. Estimated costs for these three initiatives through fiscal year 2003 are about $116 million. In addition, the Board started Alhurra, an Arabic language television network in the Middle East, in mid-February 2004. 
While the growth in programs to the Muslim world marks the recognition of the need to increase diplomatic channels to this population, there still is no interagency strategy to guide State’s and all federal agencies’ communication efforts and ensure consistent messages to overseas audiences. In addition, as of June 2004, State still lacked a comprehensive and commonly understood public diplomacy strategy to guide its programs. We agree with the 9/11 Commission recommendation that the U.S. government must define its message. State also is not systematically or comprehensively measuring progress toward its public diplomacy goals. In addition, we found that, although the BBG has a strategic plan, the plan lacks a long-term strategic goal or related program objective to gauge the Board’s success in increasing audience size. Further, the BBG’s plan contains no measurable program objectives to support the plan’s strategic goals or to provide a basis for assessing the Board’s performance. Since our report, however, the Board has revised its strategic plan and improved its ability to gauge program effectiveness by adding broadcast credibility and audience awareness measures. The Board also plans to add other performance measures, such as whether broadcast entities are achieving their mandated missions. No interagency public diplomacy strategy has been implemented that lays out the messages and means for governmentwide communication efforts to overseas audiences. The absence of an interagency strategy complicates the task of conveying consistent messages and thus achieving mutually reinforcing benefits. State officials told us that, without such a strategy, the risk of making communication mistakes that are damaging to U.S. public diplomacy efforts is high. They also said that the lack of a strategy diminishes the efficiency and effectiveness of governmentwide public diplomacy efforts. 
Our fieldwork in Egypt and Morocco underlined the importance of interagency coordination. Embassy officers there told us that only a very small percentage of the population was aware of the magnitude of U.S. assistance provided to their countries. Egypt is the second largest recipient of U.S. assistance in the world, with assistance totaling more than an estimated $1.9 billion in 2003. Assistance to Morocco totaled more than $13 million in 2003. Most interagency communication coordination efforts have been ad hoc in recent years. Immediately after September 11, 2001, the White House, State Department, Department of Defense, and other agencies coordinated various public diplomacy efforts on a day-to-day basis, and the White House established a number of interim coordination mechanisms. One such mechanism was the joint operation of the Coalition Information Centers in Washington, London, and Islamabad, set up during the early stages of U.S. military operations in Afghanistan in 2001. The centers were designed to provide a rapid response capability for correcting inaccurate news stories, proactively dealing with news items likely to generate negative responses overseas, and optimizing reporting of news favorable to U.S. efforts. In January 2003, the President established a more permanent coordination mechanism, the White House Office of Global Communications, which is intended to coordinate strategic communications from the U.S. government to overseas audiences. The President also established the Strategic Communication Policy Coordinating Committee, co-chaired by the State Department and the National Security Council and charged with working closely with the Office of Global Communications, to ensure interagency coordination in disseminating the U.S. message across the globe. Although the committee’s long-term objective is to develop a National Communications Strategy, according to recent conversations with U.S. officials, the committee has not met since March 2003. 
After September 11, State acknowledged the need for a strategy that integrates all of its diverse public diplomacy activities and directs them toward common objectives, but to date, that strategy is still in the development stage. State officials told us that such a strategy is particularly important because State’s public diplomacy operation is fragmented among the various organizational entities within the agency. Public affairs officers who responded to our survey indicated that the lack of a strategy has hindered their ability to effectively execute public diplomacy efforts overseas. More than 66 percent of public affairs officers in one region reported that the quality of strategic guidance from the Office of the Under Secretary at the time of our review (October 2001 through March 2003) was generally insufficient or very insufficient. More than 40 percent in another region reported the same. We encountered similar complaints during our overseas fieldwork. For example, in Morocco, the former public affairs officer stated that so little information had been provided from Washington on State’s post-September 11 public diplomacy strategy that he had to rely on newspaper articles and guesswork to formulate his in-country public diplomacy plans. During our audit work, we learned that private sector public relations efforts and political campaigns use sophisticated strategies to integrate complex communication efforts involving multiple players. Although State’s public diplomacy efforts extend beyond the activities of public relations firms, many of the strategic tools that such firms employ are relevant to State’s situation. While it is difficult to establish direct links between public diplomacy programs and results, other U.S. government agencies and the private sector have best practices for assessing information dissemination campaigns, including the need to define success and how it should be measured. 
Executives from some of the largest public relations firms in the United States told us that initial strategic decisions involve establishing the scope and nature of the problem, identifying the target audience, determining the core messages, and defining both success and failure. Subsequent steps include conducting research to validate the initial decisions, testing the core messages, carrying out pre-launch activities, and developing information materials. Each of these elements contains numerous other steps that must be completed before implementing a tactical program. Further, progress must be measured continuously and tactics adjusted accordingly. We also found that State is not systematically and comprehensively measuring progress toward its public diplomacy goals. Its overseas performance measurement efforts focus on anecdotal evidence and program outputs, rather than gauging progress toward changing foreign publics’ understanding and attitudes about the United States. Some posts judge the effectiveness of their public diplomacy efforts by simply counting the number of public diplomacy activities that occur in their host country—for example, the number of speeches given by the ambassador or the number of news articles placed in the host-country media. While such measures shed light on the level of public diplomacy activity, they reveal little in the way of overall program effectiveness. State currently has no reporting requirements in place to determine whether posts’ performance targets are actually met. At one overseas post we visited, the post had identified polling data showing that only 22 percent of the host country’s citizens had a favorable view of the United States—a figure the post used as a baseline with yearly percentage increases set as targets. 
However, a former public affairs officer at the post told us that he did not attempt to determine or report on whether the post had actually achieved these targets because there was no requirement to do so. Officials at the other two overseas posts we visited also cited the lack of any formal reporting requirement for following up on whether they met their annual performance targets. An official in State’s Office of Strategic and Performance Planning said that they have now begun to require posts to report on whether they have met performance targets. Furthermore, public affairs officers at U.S. embassies generally do not conduct systematic program evaluations. About 79 percent of the respondents to our survey reported that staffing at their missions was insufficient to conduct systematic program evaluations. Many officers also reported that staffing at posts was insufficient to carry out the long-range monitoring required to adequately measure program effectiveness. Even if sufficient staffing were available, State would still have difficulty conducting long-range tracking of exchange participants because it lacks a database with comprehensive information on its various exchange program alumni. State had planned to begin building a new worldwide alumni database with comprehensive data linking all of its various exchange programs. However, Bureau of Educational and Cultural Affairs officials told us they had received insufficient funds to do so, and thus are seeking to improve existing information systems for individual exchange programs. In contrast to State’s lack of strategy, BBG has introduced a market-based approach to international broadcasting that aims to generate large listening audiences in priority markets that the Board believes it must reach to effectively meet its mission. Early implementation of this strategy has focused on markets relevant to the war on terrorism, in particular the Middle East. 
The Board’s vision is to create a flexible, multimedia, research-driven U.S. international broadcasting system that addresses the many challenges we noted in our report, including an organizational structure that consists of several broadcast entities with differing missions, broadcast approaches, and constituencies. In conducting our work on the BBG strategic plan, we found that the plan did not include a single goal or related program objective designed to gauge progress toward increasing audience size, even though its strategy focuses on the need to reach large audiences in priority markets. We also found that the plan lacked measurable program objectives to support its strategic goals, including a broadcaster credibility measure. The Board has taken several steps to address the recommendations we made in our report. First, the Board created a single strategic goal to focus on the key objective of maximizing impact in priority areas of interest to the United States and made audience size a key performance measure. Second, the Board has added broadcast credibility and plans to add the additional performance measures we recommended, including audience awareness and whether broadcast entities are achieving their mandated missions. Mr. Chairman, I have discussed the expansion of U.S. public diplomacy resources to areas of the world thought to breed terrorist activities and the need for a more cohesive, integrated U.S. public diplomacy strategy with measurable indicators of progress. There are other challenges our government faces in executing successful public diplomacy activities. According to public affairs officers, these challenges include insufficient time and staffing resources to conduct public diplomacy tasks. In addition, many public affairs officers reported that the time available to attend public diplomacy training is inadequate. 
Furthermore, a significant number of Foreign Service officers involved in public diplomacy efforts overseas lack sufficient foreign language skills. The Board’s key challenge in executing its strategy is how to generate large audiences while dealing with a number of media market, organizational, and resources issues. More than 40 percent of the public affairs officers we surveyed reported that the amount of time they had to devote exclusively to executing public diplomacy tasks was insufficient. During our overseas fieldwork, officers told us that, while they manage to attend U.S. and other foreign embassy receptions and functions within their host country capitals, it was particularly difficult to find time to travel outside the capitals to interact with ordinary citizens. More than 50 percent of those responding to our survey reported that the number of Foreign Service officers available to perform public diplomacy duties was inadequate. Although State increased the actual number of Americans in public diplomacy positions overseas from 414 in fiscal year 2000 to 448 in fiscal year 2002, State still had a shortfall of public diplomacy staff in 2002, based on the projected needs identified in State’s 2002 overseas staffing model. In 2002, State’s overseas staffing model projected the need for 512 staff in these positions; however, 64 of these positions, or 13 percent, were not filled. In addition, about 58 percent of the heads of embassy public affairs sections reported that Foreign Service officers do not have adequate time for training in the skills required to effectively conduct public diplomacy. We reported in 2002 that as part of its Diplomatic Readiness Initiative, State has launched an aggressive recruiting program to rebuild the department’s total workforce. 
Under this initiative, State requested 1,158 new employees above attrition over the 3-year period for fiscal years 2002 through 2004, and according to State officials, the department has met its hiring goals under this initiative for fiscal years 2002 and 2003. However, it does not have numerical targets for specific skill requirements such as language proficiency or regional expertise. Although State officials are optimistic that enough new hires are being brought in to address the overall staffing shortage, there are no assurances that the recruiting efforts will result in the right people with the right skills needed to meet specific critical shortfalls. Insufficient foreign language skills pose another problem for many officers. As of December 31, 2002, 21 percent of the 332 Foreign Service officers filling “language-designated” public diplomacy positions overseas did not meet the foreign language speaking requirements of their positions. The highest percentage of officers not meeting the requirements was in the Near East, where 30 percent did not meet the requirement. Although State had no language-designated positions for South Asia, it had eight language-preferred positions, none of which was filled by officers who had reading or speaking capability in those languages. It is important to note that most of the foreign languages required in these two regions, such as Arabic and Urdu, are considered difficult to master. In contrast, 85 percent of the officers filling French language-designated positions and 97 percent of those filling Spanish language-designated ones met the requirements. Officers’ opinions on the quality of the foreign language training they received also varied greatly by region. The Advisory Group on Public Diplomacy noted this challenge and recommended an increase in public diplomacy staff dedicated to issues of the Arab and Muslim world, with specific emphasis on enhancing fluency in local languages. 
Foreign Service officers posted at the overseas embassies we visited and other State officials told us that having fluency in a host country’s language is important for effectively conducting public diplomacy. The foreign government officials with whom we met in Egypt, Morocco, and the United Kingdom agreed. They noted that, even in countries where English is widely understood, speaking the host country’s language demonstrates respect for its people and its culture. In Morocco, officers in the public affairs and other sections of the embassy told us that, because their ability to speak Arabic was poor, they conducted most embassy business in French. French is widely used in that country, especially in business and government. However, embassy officers told us that speaking Arabic would provide superior entrée to the Moroccan public. The ability to speak country-specific forms of Arabic and other more obscure dialects would generate even more goodwill, especially outside the major cities. According to the department, the largest and most significant factor limiting its ability to fill language-designated positions is its long-standing staffing shortfall, which State’s Diplomatic Readiness Initiative is designed to fill. Other planned actions include bolstering efforts to recruit job candidates with target language skills, sending language training supervisors to posts to determine ways to improve training offerings, and developing a new “language continuum” plan to guide efforts to meet the need for higher levels of competency in all languages, especially those critical to national security concerns. 
The Broadcasting Board of Governors has its own set of public diplomacy challenges, key among them how to gain large audiences in priority markets while dealing with (1) a collection of outdated and noncompetitive language services, (2) a disparate organizational structure consisting of seven separate broadcast entities and a mix of federal agency and grantee organizations that are managed by a part-time Board of Governors, and (3) the resource challenge of broadcasting in 97 language services to more than 125 broadcast markets worldwide. Although its strategic plan identifies a number of solutions to the competitive challenges the Board faces and provides a new organizational model for U.S. international broadcasting, we found that the Board’s plan did not include specifics on implementation strategies, resource requirements, project time frames, or a clear vision of the Board’s intended scope of operations. The Board recently completed a review of the issue of overlapping language services and identified six approaches to addressing the problem while still meeting the discrete missions of the Voice of America and other broadcast entities. All of the Board’s overlapping services were assessed against this analytical framework, and more than $9.7 million in potential savings for priority initiatives were identified. However, the Board has yet to revise its strategic plan to include details on implementation strategies, resource requirements, and project time frames for the various initiatives supporting its overarching strategic goal of increasing program impact. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For future contacts regarding this testimony, please call Jess Ford or Diana Glod at (202) 512-4128. Individuals making key contributions to this testimony included Robert Ball, Lynn Cothern, and Michael ten Kate. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Polls taken in Islamic countries after 9/11 suggested that many or most people had a favorable view of the United States and its fight against terrorism. By 2003, opinion research indicated that foreign publics, especially in countries with large Muslim populations, viewed the United States unfavorably. GAO issued two studies in 2003 that examined (1) changes in U.S. public diplomacy resources and programs since September 11, 2001, within the State Department (State) and the Broadcasting Board of Governors (BBG); (2) the U.S. government's strategies for its public diplomacy programs and measures of effectiveness; and (3) the challenges that remain in executing U.S. public diplomacy efforts. GAO made several recommendations to State and the BBG to address planning and performance issues. Both agencies agreed with these recommendations and have made some progress in implementing them. On July 22, 2004, the 9/11 Commission released its report and recommendations. Two of the Commission's recommendations relate to the management of U.S. public diplomacy. For this testimony, GAO was asked to discuss its prior work as it relates to these recommendations. Since September 11, 2001, State has expanded its public diplomacy efforts in Muslim-majority countries considered to be of strategic importance in the war on terrorism. It significantly increased resources in South Asia and the Near East and launched new initiatives targeting broader, younger audiences--particularly in predominantly Muslim countries.
These initiatives are consistent with the 9/11 Commission's recommendation that the United States rebuild its scholarship, library, and exchange programs overseas. Since 9/11, the BBG has initiated several new programs focused on attracting larger audiences in priority markets, including Radio Sawa and Arabic language television in the Middle East, the Afghanistan Radio Network, and Radio Farda in Iran. The 9/11 Commission report highlights these broadcast efforts and recommends that funding for such efforts be expanded. While State and BBG have increased their efforts to support the war on terrorism, we found that there is no interagency strategy to guide State's, BBG's, and other federal agencies' communication efforts. The absence of such a strategy complicates the task of conveying consistent messages to overseas audiences. Likewise, the 9/11 Commission recommended that the United States do a better job defining its public diplomacy message. In addition, we found that State does not have a strategy that integrates and aligns all its diverse public diplomacy activities. State, noting the need to fix the problem, recently established a new office of strategic planning for public diplomacy. The BBG did have a strategic plan, but the plan lacked a long-term strategic goal or related program objective to gauge the Board's success in increasing audience size, the key focus of its plan. We also found that State and the BBG were not systematically and comprehensively measuring progress toward the goals of reaching broader audiences and increasing publics' understanding about the United States. The BBG subsequently made audience size a key performance goal, added a measure of broadcaster credibility, and plans to add the other performance measures that GAO recommended. In addition, State and BBG face several internal challenges in carrying out their programs. Challenges at State include insufficient public diplomacy resources and a lack of officers with foreign language proficiency.
State officials are trying to address staffing gaps through increased recruitment. The BBG also faces a number of media market, organizational, and resource challenges that may hamper its efforts to generate large audiences in priority markets. It has developed a number of solutions to address these challenges.
Scientists have discovered that changes in the earth’s climate are induced by the increasing concentrations of certain gases in the earth’s atmosphere—some naturally occurring, others human-induced—that have the potential to significantly alter the planet’s heat and radiation balance. These so-called “greenhouse gases” trap some of the sun’s energy and prevent it from returning to space. The trapped energy warms the earth’s climate, much like glass in a greenhouse. Over the past century, humans have contributed to the greenhouse effect, particularly by burning fossil fuels, which increased atmospheric carbon dioxide and other greenhouse gases. The effects of a warmer climate could have important consequences for human health and welfare by, among other things, altering weather patterns, changing crop yields, and leading to the flooding of coastal areas. According to the Department of Energy’s Energy Information Administration (EIA), in 2001, the most recent year for which data are available, the United States and other developed nations accounted for just under half (47 percent) of the world’s emissions of carbon dioxide—the most prevalent greenhouse gas. The other emissions came from economically developing nations, including China, India, and Mexico (40 percent), and from nations with economies in transition (13 percent) in Europe and the Former Soviet Union. EIA projects that, over the next 2 decades, carbon dioxide emissions from each of the three nation groups will increase; however, carbon dioxide emissions from developing nations will increase most dramatically, surpassing those of developed nations by 2015, as shown in figure 1. More specifically, figure 2 shows actual and projected carbon dioxide emissions for the seven nations in our study. Growth in emissions between 2001 and 2025 is projected to range from 29 million metric tons for the United Kingdom to 1,012 million metric tons for China.
The seven nations in our study also differ greatly in terms of their population and per capita income (an indicator of economic development). For example, population ranged from about 60 million in the United Kingdom to nearly 1.3 billion in China, and per capita income ranged from $2,540 in India to $36,300 in the United States. (See table 1.) Under the Framework Convention, the United States and the other parties generally agreed to implement policies and measures aimed at returning “individually or jointly to their 1990 levels these anthropogenic [human-caused] emissions” of greenhouse gases not covered by another treaty, the Montreal Protocol. The six primary gases covered by the Framework Convention are carbon dioxide, nitrous oxide, methane, and three synthetic gases—sulfur hexafluoride, hydrofluorocarbons, and perfluorocarbons. Emissions of these gases are generally not measured because doing so would be too costly; consequently, they must be estimated. In this regard, the IPCC, at the parties’ request, developed detailed guidance on methodologies for nations to use when estimating their emissions and revised that guidance twice, most recently in 1999. Both developed and developing nations are required to follow this guidance—Revised 1996 IPCC Guidelines for National Greenhouse Gas Inventories—when preparing their inventories. In addition, in 2000, the IPCC published—also at the parties’ request—its Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories, which contains information on prioritizing tasks to arrive at the best possible estimates using finite resources as well as advice on establishing quality assurance programs, among other things. The nations have been encouraged, but not required, to follow the good practice guidance.
Annually, each Annex I nation is required to submit inventory data—in a common reporting format the parties themselves agreed to—as well as a national inventory report that explains how the data in the common reporting format were derived. The common reporting format calls for data for each of the six emissions sectors—energy, industrial processes, solvent and other product use, agriculture, land-use change and forestry, and waste—as well as for the data on the major sources that contribute to emissions from each sector. The inventory data are to reflect a nation’s most recent reporting year as well as all previous years back to the base year, which is 1990. For each year, the common reporting format calls for 42 tables containing over 8,100 items that are sector-specific numbers; data summarized across sectors; and other information, such as trends from the base year to the current reporting year, recalculations of prior years’ data, and reasons certain emissions were not estimated. The parties require that data be submitted in the common reporting format to facilitate comparison across nations and to make it easier to review the data. Because an inventory contains data from the base year to the most recent reporting year, each year’s submission is larger than the last. The 2003 reporting format called for approximately 98,000 items of inventory data and other information from 1990 through 2001. The national inventory report, the second component of the submission, should be detailed and complete enough to enable reviewers to understand and evaluate the inventory. 
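The growth in submission size described above follows directly from the cumulative-year design. A rough arithmetic sketch, assuming each reporting year contributes roughly the stated per-year item count:

```python
# Rough arithmetic sketch of how cumulative reporting drives submission size.
# Figures from the text: one year's common reporting format calls for 42
# tables with over 8,100 items; the 2003 submission spans 1990-2001.
items_per_year = 8_100           # "over 8,100 items" per reporting year
years = 2001 - 1990 + 1          # base year 1990 through reporting year 2001

total_items = items_per_year * years
print(total_items)               # 97,200 -- consistent with "approximately 98,000"
```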
The report should include, among other things, descriptions of the methods used to estimate the data, the rationale for selecting the methods used, and information about the complexity of methods and the resulting precision of the estimates; information on quality assurance procedures used; discussion of any recalculations affecting previously submitted inventory data; and information on improvements planned for future inventories. Each year, when Secretariat staff receive Annex I nations’ submissions, they perform an initial check to determine whether the submissions are complete and then synthesize the information to facilitate comparison across nations. Teams of expert reviewers—comprising members chosen by the parties for their sector expertise as well as to achieve broad geographic representation—also use this synthesized information to identify issues requiring clarification during their reviews of individual submissions. From 2000 through 2002, the parties tested the usefulness of three methods of conducting expert reviews on selected submissions from Annex I nations. The first type of review, called a desk review, consists of about 10 experts spending about 4 weeks in their respective nations reviewing information on the same three nations’ inventories. For this type of review, the experts communicate with each other and the nation being reviewed via the Internet and telephone. The second type of review, called a centralized review, involves about 10 experts spending about a week at the Secretariat’s headquarters in Bonn, Germany, jointly reviewing between four and six nations’ inventories. The third review type, called an in-country review, consists of a team of about 5 experts spending a week in the nation whose inventory is being reviewed, jointly examining the nation’s inventory and supporting information.
The Secretariat chose inventories of different levels of completeness to undergo desk and centralized reviews; only nations that volunteered for an in-country review received one. During the 3-year test period, the experts examined the data and supporting information the nations used to prepare the inventories via all three types of reviews. For example, the experts determined whether a nation calculated its emissions estimates using formulas from published data sources or formulas specified by the parties. The experts also verified the information provided in response to questions raised in previous reviews. Finally, the experts summarized the inventories' strengths and weaknesses; made recommendations for improvement, if warranted; and presented their findings in reports that were both published and posted on the Internet. For Annex I nations’ submissions to be reviewed by the experts, the submissions must meet two criteria. Since 2000, the experts have reviewed only submissions that presented their data in the common reporting format, and, beginning with the 2003 submissions, the experts will review only submissions that include the national inventory report. According to the parties to the Framework Convention, the goal of the expert reviews is to identify areas in the inventories needing improvement; for this reason, the experts’ reports do not rate the overall quality of the submissions, and the reports do not identify some findings as being more important than others. According to the Secretariat, since 1998, Annex I nations’ submissions have steadily and substantially improved in their timeliness and completeness, and the expert review process has contributed to the improved quality of recent submissions. Non-Annex I nations’ requirements for format and frequency of reporting differ from those for Annex I nations. 
Although all parties to the Framework Convention are to develop their inventories using the revised 1996 IPCC guidelines and submit the inventories to the Secretariat, non-Annex I nations’ inventories are not stand-alone documents. Rather, a non-Annex I nation’s inventory is a component of its national communication, which is a report it must submit to the Secretariat that discusses all of the steps the nation is taking or plans to take to implement the Framework Convention. In addition, non-Annex I nations are not required to use the common reporting format or to submit a national inventory report. Moreover, non-Annex I nations are not required to submit an inventory each year but may instead negotiate the frequency of their submissions. To date, most non-Annex I nations negotiated a deadline for only one inventory. To help the non-Annex I nations develop and report their inventories, the developed nations of Annex I provide financial assistance that is disbursed through the convention’s financial mechanism, the Global Environment Facility. The facility, which funds various types of environmental projects in developing nations, disburses the funds, including those to assist non-Annex I nations with their emissions reporting, through implementing agencies, such as the United Nations Development Program. The implementing agencies, in turn, disburse the funds to the nations on a schedule and according to terms negotiated by the agency and each nation. The inventory reviews and the extent to which the results are reported also differ for Annex I and non-Annex I nations. Reviews of Annex I nations’ submissions focus on compliance with reporting standards, and the results are made publicly available in considerable detail.
In contrast, because non-Annex I nations are generally in the early stages of developing their inventories and have limited resources to do so, assessments of their submissions, and the resulting reports, focus largely on providing a forum for the non-Annex I nations to exchange information on common reporting problems and best practices. Consequently, while the Secretariat makes reports on the results of non-Annex I assessments publicly available, it does so in summary format and provides only a few nation-specific details in tables that accompany the aggregated reports. The most recent expert reviews of inventories submitted by the four developed nations found that the U.K. and U.S. inventories contained most of the required elements, but the German and Japanese inventories were missing certain critical elements. Experts reviewed inventories variously submitted from 2000 through 2002 by each of the four developed nations in our study. The inventories submitted by Japan and Germany in 2000 and 2001, respectively, each received a centralized review. Two U.K. inventories were reviewed: the one submitted in 2000 received an in-country review, and the one submitted in 2002 received a desk review. The inventory that the United States submitted in 2000 received both an in-country review and a desk review. Although the experts planned to conduct reviews of all Annex I nations’ inventories submitted in 2003, no results were available at the time of our study. The reviews of the submissions of the United Kingdom and the United States found they were largely complete and noted only relatively minor problems. For example, the reviews of the two nations’ 2000 submissions noted that neither submission included information on quality assurance procedures. Although the good practice guidance calls for including such information in the national inventory report, the nations were encouraged, but not required, to follow the good practice guidance for the 2000 submissions. 
Nonetheless, the experts included the lack of quality assurance documentation as a finding of the reviews. Because the problems noted were relatively minor, the suggestions for improving future submissions constituted refinements rather than recommendations for large-scale changes. For example, the experts’ report on the 2000 U.K. submission suggested archiving the documentation supporting the national inventory report in one location or on the Web. Similarly, the report on the desk review of the 2000 U.S. submission suggested that more details on the methods and factors used to estimate emissions for the land-use change and forestry sector would allow more complete assessment of that sector’s data. In contrast, the reviews of the German and Japanese submissions found them to be missing some critical components, and the experts’ reports made suggestions for improvement that were fundamental in nature. For example, the review of Germany’s 2001 submission found it contained only summary-level and trend data; it did not include any of the sector-specific data tables or recalculations of prior years’ data called for by the common reporting format. Furthermore, the national inventory report was missing, so the reviewers could not determine whether problems noted in previous inventories had been addressed. Although the review of the Japanese 2000 submission found most of the data required by the common reporting format was included, like the German submission, this one lacked the national inventory report. As a result of these shortcomings, the experts suggested that Germany submit a complete set of data for all of the required years and sectors and that both nations submit the national inventory report. Additional details on the findings of the six expert reviews are contained in appendix I. 
Although none of the four Annex I nations’ latest submissions—for 2003—had undergone an expert review as of November 2003, Secretariat staff had performed initial completeness checks on each of them. They found that all four nations’ submissions contained most of the required data as well as the required national inventory reports. The Secretariat has not assessed any inventories from China and India because, as of November 2003, neither nation had submitted one. The Secretariat assessed Mexico’s 2001 submission, but the Secretariat’s practice is to issue one report on the findings of its assessments of all the inventories submitted during the year, with few nation-specific details. Therefore, the Secretariat made public little information about the results of its assessments that could be directly tied to Mexico. According to the Secretariat, China and India are preparing their initial inventories, to be submitted as part of their first national communications. Under article 12, paragraph 5, of the Framework Convention, non-Annex I nations’ first inventories are due to the Secretariat “within three years of the entry into force of the Framework Convention or of the availability of financial resources” from the developed nations in Annex I. According to the Secretariat, funding was approved for China in May 2000 and for India in December 1999, and the first disbursements of funds took place in November 2001 for China and in July 2001 for India. According to the Secretariat, the due dates for their first greenhouse gas inventories are no later than November 2004 for China and July 2004 for India. Mexico submitted inventories in 1997 and 2001. Although 106 developing nations had submitted their initial inventories as of November 2003, Mexico is the only nation to have submitted more than one. Secretariat staff assessed Mexico’s 2001 inventory, along with those of 51 other non-Annex I nations that submitted inventories that year.
In keeping with its practice of reporting on its assessments of non-Annex I nations’ inventories as a group, the report for 2001 contained only limited details that could be linked specifically to Mexico’s inventory. In particular, the Secretariat reported that Mexico had improved its estimates of emissions from the energy, agriculture, and land-use change and forestry sectors. It also reported that Mexico could further improve its inventory by establishing systematic procedures for preparing the inventory annually and by including estimates for the solvent-use sector. Otherwise, the Secretariat reported only generally on the results of the assessments of submissions of the 52 non-Annex I nations’ inventories. Mexico’s 2001 submission contained estimates for 1994, 1996, and 1998. According to an EPA official who is knowledgeable about Mexico’s inventory, the 2001 Mexico inventory is of reasonably high quality, especially considering the limited resources Mexico has dedicated to developing it. According to its submission, Mexico followed the IPCC estimating guidelines and good practice guidance in preparing the inventory. The EPA official further commented that Mexico’s 2001 submission is among the best of those of the developing nations, and in some cases—for example, in presentation of its carbon dioxide emissions data—is equal to those of some developed nations. On the other hand, according to that official, Mexico did not (1) comply with the IPCC estimating guidelines in developing the land-use change and forestry sector data, (2) adequately estimate data for the three synthetic gases, or (3) provide adequate documentation explaining the inventory. Furthermore, Mexico developed its two inventories independent of each other, without establishing a process that would systematically make documentation and data additions and revisions as needed. 
Consequently, in the opinion of the EPA official, it was difficult for Mexico to build upon its previous efforts when preparing its second inventory. As required for the 2003 submissions, the four developed nations categorized their confidence in their emissions data as either high, medium, or low. All four nations reported their confidence in the data as generally high. To improve the usefulness of nations’ assessments of data confidence, however, beginning with the 2004 submissions, developed nations must quantify their confidence assessments. As previously explained, the parties to the Framework Convention have constructed an extensive system of estimating and reporting requirements, buttressed by periodic reviews, to help nations produce inventory data that are of high quality. The parties do not attempt, on the basis of the reviews or any other means, to assign a grade or otherwise rate any nation’s success in producing high-quality data. However, as one means of helping developed nations identify areas where their data can be strengthened, the parties require each nation to assess its confidence in the accuracy of its own data. Specifically, the nations are required annually to analyze the quality of the data they report (called an uncertainty analysis) for each gas and for each major source of emissions and removals in each of the six sectors. To do this, the nations have been encouraged, but not required, to use the quantitative methods of uncertainty analysis included in the IPCC good practice guidance. Alternatively, they could rely on qualitative means to determine their confidence in these data. In either case, they have been required to report whether they had high, medium, or low confidence in each estimate of emissions of each of the six gases by each major source of those emissions. The nations have not been required to report on their confidence in the accuracy of the inventory data as a whole. 
The parties did not provide further criteria for nations to use when determining which of the three categories was most appropriate. As required, all four developed nations reported high, medium, or low ratings of confidence in their estimates for their 2001 emissions by source. To determine the confidence each nation had in its inventory data as a whole, we calculated the proportion of each nation’s data that corresponded to each of the three rating categories. According to our calculations, all four nations rated their confidence in their inventory data as a whole as generally high, with the high-confidence ratings ranging from about 75 percent for the United States to about 96 percent for Japan. The high-confidence ratings occurred largely because the lion’s share of each nation’s total emissions is carbon dioxide from fuel combustion, which can be estimated with a relatively high level of confidence. Table 2 shows each nation’s ratings for total emissions by gigagrams of carbon dioxide equivalent, which is the unit of measurement used by the parties to the Framework Convention to allow comparisons among greenhouse gases, which differ in their effects on the atmosphere and expected lifetimes. Although the national inventory reports contained some information about the nations’ confidence in their data, none of the nations explained the criteria they used to determine the high-, medium-, and low-confidence ratings they reported. In November 2002, the parties decided to require developed nations to use the quantitative methods in the IPCC good practice guidance to develop estimates of data uncertainty beginning with the 2004 submissions. Instead of designating high, medium, or low ratings of confidence, under the new requirements, developed nations must quantify their uncertainty in their emissions estimates for each gas by each major source using 95 percent confidence levels. 
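The emissions-weighted proportion calculation described above can be sketched as follows. The per-source figures are invented for illustration; the global warming potentials used to convert methane and nitrous oxide to carbon dioxide equivalent (21 and 310) are the IPCC Second Assessment Report values in use for Convention reporting at the time.

```python
# Sketch of the proportion calculation: weight each source's confidence
# rating by its emissions in CO2 equivalent, then compute the share of
# total emissions falling in each rating category. All source figures
# below are hypothetical.
GWP = {"CO2": 1, "CH4": 21, "N2O": 310}   # IPCC SAR 100-year values

# (gas, emissions in gigagrams of the gas itself, reported confidence)
sources = [
    ("CO2", 500_000, "high"),    # e.g. fuel combustion
    ("CH4",   1_200, "medium"),  # e.g. landfills
    ("N2O",     150, "low"),     # e.g. agricultural soils
]

totals = {"high": 0.0, "medium": 0.0, "low": 0.0}
for gas, gg, rating in sources:
    totals[rating] += gg * GWP[gas]        # convert to Gg CO2 equivalent

grand_total = sum(totals.values())
shares = {r: 100 * t / grand_total for r, t in totals.items()}
for rating, pct in shares.items():
    print(f"{rating}: {pct:.1f}% of total CO2-equivalent emissions")
```

Because fuel-combustion carbon dioxide dominates the hypothetical totals, the high-confidence share dominates the result, mirroring why the four nations’ overall ratings came out high.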
In addition, they must combine the source uncertainty estimates into a quantified uncertainty estimate for the inventory as a whole and estimate the uncertainty in the trend between the base year and the most recent year. The IPCC good practice guidance provides detailed instructions for nations to follow to produce the quantitative estimates of data uncertainty. The guidance also describes two methods for combining quantitative uncertainty estimates—one consisting of relatively simple statistical calculations that result in a numerical uncertainty estimate, and the other using computer simulation to calculate the estimates. The computer simulation is a more sophisticated method and should result in more accurate estimates; however, according to the EPA official responsible for compiling the U.S. inventory, the computer simulation also is more costly than the simpler method. Because of this, the good practice guidance states that the nations must use the simpler of the two methods to produce their combined uncertainty estimates; in addition, they are encouraged to use the more sophisticated method when sufficient resources and expertise are available. For example, in its 2003 inventory submission, the United Kingdom used both methods from the good practice guidance to quantitatively estimate its confidence in its 2001 emissions data as a whole. Using the simpler method, the United Kingdom reported an uncertainty value of 17 percent for its inventory data as a whole; that is, the United Kingdom was 95 percent confident that total emissions were between 17 percent less and 17 percent more than the total of about 660,452 gigagrams of carbon dioxide equivalent it estimated for the year. In contrast, using the more sophisticated method, the United Kingdom reported an uncertainty value of 13 percent, indicating it was 95 percent confident that total emissions were between 13 percent less and 13 percent more than the year’s total estimate. 
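The two combination methods can be illustrated with a toy inventory; the figures below are invented, not the United Kingdom’s. The simpler method combines source uncertainties by error propagation (the square root of the sum of squared absolute uncertainties), while the simulation method samples each source from an assumed distribution, here a normal distribution, and reads the combined uncertainty off the spread of the simulated totals.

```python
import math
import random

# Toy inventory: (emissions in Gg CO2 equivalent, 95%-confidence
# uncertainty as a fraction of the estimate). All figures hypothetical.
sources = [
    (550_000, 0.05),   # fuel combustion CO2: large, well characterized
    ( 60_000, 0.30),   # agricultural N2O: smaller, poorly characterized
    ( 40_000, 0.40),   # waste CH4
]

total = sum(e for e, _ in sources)

# Simpler method (error propagation): root of the sum of squared
# absolute uncertainties, expressed as a fraction of the total.
propagated = math.sqrt(sum((e * u) ** 2 for e, u in sources)) / total

# Simulation method (Monte Carlo): sample each source, treating its 95%
# interval as +/- 1.96 standard deviations of a normal distribution.
random.seed(0)
draws = sorted(
    sum(random.gauss(e, e * u / 1.96) for e, u in sources)
    for _ in range(50_000)
)
lo = draws[int(0.025 * len(draws))]
hi = draws[int(0.975 * len(draws))]
simulated = (hi - lo) / 2 / total

print(f"error propagation: +/-{propagated:.1%} of total")
print(f"Monte Carlo:       +/-{simulated:.1%} of total")
```

With normal distributions the two answers nearly coincide; the simulation method earns its extra cost when source uncertainties are large or asymmetric, which is one reason nations with sufficient resources are encouraged to use it.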
According to the EPA official responsible for compiling the 2003 U.S. inventory, the high, medium, and low categorizations reflect the early days of developing inventories, before the IPCC had developed its good practice guidance on quantitative methods. Prior to the guidance, the parties recognized that nations would vary in their ability to perform quantitative uncertainty analysis. The parties instituted the three-part categorization in an effort to obtain information that was comparable across nations that were using different methods for assessing data uncertainty. The parties have moved to the quantitative methods because the three-part categorization approach yielded limited information about data uncertainty. For example, a nation could have uncertainty estimates of 35 percent and 60 percent but could have categorized both estimates as medium. The quantitative estimates provide information about the uncertainty of the various components of the inventory, thereby helping nations identify areas in which improvements would have the greatest effect on the accuracy of the inventory as a whole. In addition, the quantified estimates make the uncertainty analyses more consistent and understandable across nations. According to the Secretariat, the quantified uncertainty analysis also better enables expert reviewers to determine if nations are targeting their improvements in the appropriate areas. To improve the quality of data on greenhouse gas emissions, the parties to the Framework Convention are refining their requirements for both Annex I and non-Annex I nations. In addition, they are bolstering the review processes for Annex I nations. The changes are to begin to take effect over the next few years. The parties currently have no plans to change the way that non-Annex I nations’ inventories are assessed. The parties have revised their requirements for both Annex I and non-Annex I nations, with the changes taking effect over the next few years.
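The information loss in the three-part categorization described above can be seen in a short sketch. The 20 and 75 percent cut points are purely hypothetical, since the parties defined no category boundaries; the point is that a 35 percent and a 60 percent uncertainty estimate can land in the same bin.

```python
# Hypothetical cut points -- the parties defined no category boundaries.
def rate(uncertainty_pct: float) -> str:
    """Map a quantified uncertainty to a high/medium/low confidence rating."""
    if uncertainty_pct < 20:
        return "high"      # low uncertainty -> high confidence
    if uncertainty_pct <= 75:
        return "medium"
    return "low"

# A 25-point difference between the two estimates disappears in the rating,
# which is why the parties moved to quantified 95% confidence intervals.
print(rate(35.0), rate(60.0))  # both map to "medium"
```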
The revisions fall mainly into two areas: procedures for estimating emissions and procedures for reporting those estimates. The parties have revised both the estimating and reporting requirements for Annex I nations. Regarding estimating, for example, beginning with the 2004 submissions, Annex I nations will be required to use both the 1996 IPCC estimating guidelines and the 2000 IPCC good practice guidance. Previously, Annex I nations were required to use only the 1996 estimating guidance and were encouraged, but not required, to use the good practice guidance. Regarding reporting, the parties have specified in greater detail than before the information that should be included in Annex I nations’ national inventory reports and in the data tables in the common reporting format. For example, nations should include explanations of how they recalculated their previous years’ data and, as previously discussed, the methods they used to quantify their confidence in the data in their national inventory reports. In their reports, nations should document that they prepared their estimates in accordance with the IPCC good practice guidance or explain why they did not; for example, an explanation is required if they used a more sophisticated methodology than that specified in the guidance. The nations should also cross-reference the information in the national inventory report to explain the estimates reported in the data tables. Furthermore, Annex I nations must submit their national inventory reports following a specified format designed to facilitate review of the inventories. The parties also revised the reporting requirements for non-Annex I nations that submit inventories in 2003 or later. Non-Annex I nations that had not submitted an inventory prior to 2003 must include data in their initial inventories for either 1990 or 1994 to establish an inventory baseline. Those submitting their second inventories should provide data for 2000 as well. 
This is in contrast to the requirement that Annex I nations submit data for all years, from 1990 to the present. Similarly, the parties specified that non-Annex I nations should report data for carbon dioxide, methane, and nitrous oxide and encouraged reporting of the other three gases—hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. In contrast, Annex I nations are required to report data for all six gases. According to the manager of the 2003 U.S. inventory, the estimating and reporting requirements for non-Annex I nations are less demanding to encourage those nations to report, because they generally have fewer resources available for reporting. In addition, the parties have requested that the IPCC continue to improve its guidance on estimating. Currently, the good practice guidance does not address estimating emissions and removals for the land-use change and forestry sector. According to the EPA official who managed the 2003 U.S. inventory, the IPCC deferred guidance on estimating emissions and removals because it was developing a special report on them, which was subsequently published in 2000. On the basis of that report, the IPCC began drafting new good practice guidance for estimating emissions and removals for the land-use change and forestry sector, which is due to be completed in late 2003. As part of this effort, the IPCC is also refining the data tables for the land-use change and forestry sector. In addition, according to the same EPA official, the IPCC is merging the 1996 guidelines with its good practice guidance and expects to complete the effort by 2007.

The parties are strengthening the expert review process for Annex I nations’ submissions by conducting more reviews and standardizing the review processes.
Beginning with the 2003 submissions, each of the 39 Annex I nations will undergo one of the three types of expert reviews each year: an in-country review once every 5 years and either a desk review or a centralized review in each of the intervening years. This requirement contrasts with the practices of the past 3 years, when the experts performed from 8 to 21 expert reviews in a year. Furthermore, to standardize the reviews, the parties have spelled out, in greater detail than before, the elements that are to be examined during reviews and have developed a standardized format for reporting the results of the reviews. In addition, according to EPA inventory managers, in another effort to make the expert reviews more uniform, the Secretariat is developing a handbook and a training program for the expert reviewers and has specified the composition and responsibilities of the teams of expert reviewers. According to the Secretariat, the parties have no plans to change the assessment process for non-Annex I nations’ inventories, but the new reporting guidance for non-Annex I nations would facilitate changes to the assessment process, should the parties decide to institute them.

To examine the results of the most recent expert reviews of the greenhouse gas inventories submitted by the four economically developed nations included in our study—Germany, Japan, the United Kingdom, and the United States—we reviewed and analyzed the Secretariat’s status reports showing the results of its initial reviews (called stage 1 reviews by the Secretariat) of the most recently submitted inventories (2003). We also reviewed the reports on the parties’ most recent expert reviews (called in-depth reviews by the Secretariat) of the four nations’ inventories (2000 for Japan, 2000 and 2002 for the United Kingdom, 2000 for the United States, and 2001 for Germany) and related documentation on reporting requirements and review processes issued by the Secretariat.
We interviewed officials at EPA who manage the U.S. greenhouse gas inventory and serve as inventory experts for the parties, as well as officials from the State Department’s Bureau of Oceans and International Environmental and Scientific Affairs who are responsible for policy issues related to the Framework Convention. In addition, we reviewed and analyzed the limited information provided to us by the Secretariat in response to questions we posed.

To describe the results of any assessments of inventories of the three developing nations included in our study—China, India, and Mexico—we reviewed and analyzed the Secretariat’s reports on its assessments of inventories submitted by non-Annex I nations, including the latest inventory submitted by Mexico (2001); related documentation on non-Annex I nation reporting requirements and assessment processes; and other Secretariat information documenting which non-Annex I nations have submitted inventories. We also interviewed the officials at EPA and the Department of Energy who are most familiar with the three nations’ efforts to compile and report their inventories, as well as the cognizant officials from the State Department.

To determine the extent to which the developed nations have confidence in their data, we analyzed the confidence information each nation provided in its 2003 submission. To describe any changes in assessing confidence in the data that are to take effect in the future, we examined documentation from the Secretariat and the relevant sections of the four developed nations’ 2003 submissions.
To describe the steps the parties are taking to improve the quality of future inventory data and determine when those improvements might be in place, we reviewed and analyzed documentation of the parties’ new estimating, reporting, and review requirements; interviewed cognizant EPA officials; and reviewed and analyzed the limited information on this issue submitted to us by the Secretariat in response to questions we posed. We performed our work between November 2002 and November 2003 in accordance with generally accepted government auditing standards.

We provided a draft of this report to the Secretary of State, the Administrator of EPA, and the Framework Convention Secretariat for review and comment. EPA provided clarifying comments, which we incorporated where appropriate. We did not receive comments from the State Department or the Framework Convention Secretariat.

As arranged with your offices, we plan no further distribution of this report until 30 days after the date of this letter, unless you publicly announce its contents earlier. At that time, we will send copies of this report to interested congressional committees; the Chairmen and Ranking Minority Members, Senate Committee on Appropriations, House Committee on Appropriations, Senate Committee on Governmental Affairs, and House Committee on Government Reform; the EPA Administrator; and the Secretary of State. We will make copies available upon request to other interested parties. This report will also be available at no cost on GAO’s Web site at http://www.gao.gov.

If you or your staffs have any questions about this report, please call me at (202) 512-3841. I can also be reached at stephensonj@gao.gov. Key contributors to this report are listed in appendix II.

The six expert review reports we examined did not follow identical formats; however, they generally highlighted the experts’ findings and suggestions for improvement in a summary section at the beginning of each report.
The experts noted instances of noncompliance with the reporting requirements. In addition, the experts noted some instances in which the nations did not comply with the Good Practice Guidance and Uncertainty Management in National Greenhouse Gas Inventories, even though following the good practice guidance was not a requirement at the time that the inventories were submitted. The summary-level findings and suggestions for each of the six expert reviews we examined are listed in table 3.

In addition to the individuals named above, Simin Ho and Karla Springer made key contributions to this report. Nancy Crothers, Sandra Edwards, Barbara Johnson, Karen Keegan, Andria Key, Charlotte Moore, Chris Moriarity, Katherine Raheb, and Anne Rhodes-Kline also made important contributions.

The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.

The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily.
The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

In 1992, the United States and other parties, including both developed and developing nations, agreed to try to limit dangerous human interference with the climate by participating in the United Nations Framework Convention on Climate Change. The parties agreed, among other things, to report on their emissions of carbon dioxide and five other gases whose buildup in the atmosphere is believed to affect the climate. The parties developed standards for these reports and processes for periodically evaluating the reports. Expert teams selected by the parties review the developed nations’ reports; staff of the Framework Convention’s administrative arm (the Secretariat) assess developing nations’ reports. GAO agreed to describe the results of the most recent reviews and assessments of reports from selected economically developed and developing nations, as well as the parties’ plans to improve the reports. For the developed nations, GAO agreed to study four geographically dispersed nations with high levels of emissions—Germany, Japan, the United Kingdom, and the United States. For the developing nations, GAO studied China, India, and Mexico, which also have high emissions levels and are geographically dispersed. These nations are not representative of others; therefore, GAO’s findings cannot be generalized.

In their most recent reviews, expert teams found that the United Kingdom’s 2000 and 2002 reports on greenhouse gas emissions and the United States’s 2000 report were largely complete, although the teams noted minor findings, such as the lack of information on quality assurance methods, which the nations were encouraged, but not required, to include in their submissions.
In contrast, they found that Germany’s 2001 and Japan’s 2000 reports lacked critical elements, such as the required documentation that was essential to understanding them. Preliminary checks found that all four nations’ 2003 reports were largely complete. Secretariat staff have not assessed inventories from China and India because these nations have not submitted them. According to Secretariat records, China and India plan to submit inventories in February 2004 and November 2003, respectively. Secretariat staff assessed Mexico’s most recent inventory, but they reported few details about it because their policy is to consolidate the findings of all the developing nations’ inventories submitted during a year. To improve the inventories, the parties are changing the reporting standards and review process. For example, starting in 2004, developed nations must present their inventory reports in a standardized format to facilitate review, and developing nations must report data for more years and gases than before. Also, in 2003, the parties began conducting more rigorous reviews of developed nations’ inventories, but no such changes for developing nations are planned.
The National Flood Insurance Act of 1968 created NFIP. According to FEMA, NFIP was designed to address a number of policy objectives, including offering affordable insurance premiums to encourage program participation and community-based floodplain management and reducing the reliance on federal disaster assistance. The act provided the federal government with the authority to work with the private insurance industry, and since its inception NFIP has largely relied on the private insurance industry to sell and service flood policies. In 1983, FEMA established the Write-Your-Own (WYO) program with the goals of increasing the NFIP policy base and geographic distribution, improving service to policyholders, and providing the insurance industry with direct operating experience with flood insurance. FEMA also sells and services flood insurance through the Direct Servicing Agent (DSA), which a contractor operates. Private insurers become WYO companies by signing a Financial Assistance/Subsidy Arrangement with FEMA under which the insurers agree to issue flood policies in their own name, adjust flood claims, and settle and defend all claims arising from the flood policies. Private insurers must meet FEMA’s established criteria for becoming a WYO company. Requirements for a company to participate in the WYO program include, among others, 5 years of experience in property and casualty insurance lines, good standing with state insurance departments, and the ability to meet NFIP reporting requirements to adequately sell and service flood insurance policies. Each year, FEMA publishes in the Federal Register the terms for participation in the WYO program, including amounts WYO companies will be paid to sell and service flood policies and adjust and pay claims. The compensation FEMA pays WYO companies is one factor it considers in setting premium rates for flood policies.
This Federal Register notice also states that WYO companies are to comply with the provisions of NFIP’s WYO Financial Control Plan Requirements and Procedures (Financial Control Plan). The Financial Control Plan outlines WYO companies’ responsibilities for program operations, including underwriting, claim adjustments, cash management, and financial reporting, as well as FEMA’s responsibilities for management and oversight. WYO companies employ, contract, or work with other parties to sell and issue flood policies and receive, process, and pay claims. Insurance agents for one or more WYO companies are the main point of contact for most policyholders seeking to purchase an NFIP policy, find information on coverage, or file a claim. Based on information the insurance agents submit, the WYO companies issue policies, collect premiums from policyholders, deduct an allowance for expenses from the premium, and remit the balance to the National Flood Insurance Fund—into which premiums are deposited and from which claims and expenses are paid. WYO companies typically contract with flood insurance vendors to conduct some or all of the day-to-day processing and management of flood insurance policies. WYO companies work with certified flood adjusters to settle NFIP claims. When flood losses occur, policyholders report them to their insurance agent, who notifies the WYO company. To assess damages, the WYO company assigns a flood adjuster, who may be independent or employed by an insurance or adjusting company. The adjuster is responsible for assessing damage; estimating losses; and submitting required reports, work sheets, and photographs to the WYO company, where the claim is reviewed and, if approved, processed for payment. FEMA reimburses the WYO company from the National Flood Insurance Fund for the amount of the claims and expenses paid. 
Claim amounts may be adjusted after the initial settlement is paid if claimants submit documentation that costs were different than estimated. Current WYO compensation is structured primarily as allowances to pay for policy sales and servicing, claims adjusting and processing, and other services FEMA requires that participating companies provide. This service-oriented compensation structure, with uniform rates generally based on insurance industry average expense ratios (proxies) and fee schedules, allows WYO companies to earn a profit to the degree that compensation exceeds their actual expenses. Most of FEMA’s payments to WYO companies under the current compensation structure are not reimbursements of actual expenses incurred, but allowances on which the companies can either make a profit or incur a loss. Since the inception of the WYO program, FEMA has generally used proxies to determine the rates at which it pays WYO companies, and the payments FEMA makes are determined by applying these proxy rates to either premiums written or claim losses (see table 1). Commission and operating expenses are based on a proxy of a WYO company’s net written premiums. FEMA established a commission expense allowance at 15 percent in 1983 after consulting with industry representatives. This percentage has not changed since and is written into the Financial Assistance/Subsidy Arrangement. The percentage used for calculating operating expenses is generally provided annually to WYO companies as part of the compensation package (see table 1). The percentage is determined annually based on A.M. Best Company’s aggregates and average industry operating expenses for five lines of property insurance—fire, allied lines, farm owners multiple peril, homeowners multiple peril, and commercial multiple peril. Further, WYO companies receive payment for three types of claim adjustment expenses. Allocated loss adjustment expenses (ALAE). These are claim expenses to adjust specific claims. 
FEMA determines payment for ALAE based on information it periodically collects from independent adjusting firms on the cost of adjusting losses in other lines of insurance business, and presents the payment amount to WYO companies through a fee schedule. Unallocated loss adjustment expenses (ULAE). These are claim expenses that are incurred by the WYO company for routine operations not associated with a specific claim, such as salaries, overhead, and maintenance. FEMA bases payment for ULAE on a percentage of net written premiums and a percentage of claim losses. Before May 2008, FEMA calculated the amount for ULAE as 3.3 percent of claim losses but changed its methodology to 1.5 percent of claim losses plus 1 percent of net written premium, which was further reduced to 0.9 percent in fiscal year 2013. According to FEMA’s statements in the Federal Register, the flat rate of 3.3 percent of claim losses resulted in payments far greater than expenses during catastrophic loss years and payments below actual expenses during low-loss years. Special allocated loss adjustment expenses (SALAE). These are claim expenses related to litigation, engineering, appraisals, other experts, and additional claim adjustments. FEMA calculates SALAE based on actual expenses. In March 2015, FEMA eliminated the previous $2,500 approval threshold for SALAE expenses for experts (Type 1) and required WYO companies to submit specific information to FEMA, including information on the claim, policy limits, and an explanation and justification for the reimbursement. FEMA staff must review the information submitted and approve the expenditure before the WYO company is allowed to incur any Type 1 expenses. In July 2016, FEMA removed the $5,000 threshold for Type 3 expenses (litigation-related) and required WYO companies to seek approval for reimbursement of such expenses and pre-approval if they wished to take more than three depositions in a case.
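A short worked example may help keep the allowance components straight. The sketch below is hypothetical: the premium and loss figures are invented, and the 10 percent operating-expense rate is an assumed stand-in for the industry-average proxy FEMA publishes each year; only the 15 percent commission rate and the fiscal year 2013 ULAE formula (1.5 percent of claim losses plus 0.9 percent of net written premium) come from the discussion above. ALAE and SALAE are omitted because they are paid per fee schedule and actual expenses, respectively.

```python
# All dollar figures are hypothetical; the operating rate is assumed.
net_written_premium = 1_000_000  # hypothetical annual premium written
claim_losses = 400_000           # hypothetical claim losses paid

commission = 0.15 * net_written_premium  # fixed 15% commission allowance
operating = 0.10 * net_written_premium   # assumed industry-average proxy rate
ulae = 0.015 * claim_losses + 0.009 * net_written_premium  # FY2013 formula

total_allowances = commission + operating + ulae
print(f"commission: ${commission:,.0f}")
print(f"operating:  ${operating:,.0f}")
print(f"ULAE:       ${ulae:,.0f}")
print(f"total:      ${total_allowances:,.0f}")
```

Whether a company profits depends on whether these allowances exceed its actual expenses, which the allowance formulas themselves do not measure.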
In addition, FEMA pays WYO companies that meet certain policy growth goals a percentage of net written premiums as a marketing bonus. In 2009, we found that FEMA’s marketing goals were not aligned with FEMA’s NFIP goals. As a result, FEMA changed the formula for how WYO companies earn bonuses in the fiscal year 2013 compensation arrangement. A growth bonus is intended to provide an incentive for WYO companies to continue to grow the NFIP program by adding new policies. FEMA officials told us that the agency changed the program’s growth bonus to better link it to new business—if a WYO company acquires another company’s business, the number of transferred policies is added to the company’s beginning number of policies and the total merged number of policies is used when calculating aggregate growth for the purposes of the bonus. Therefore, the WYO company is receiving a growth bonus based only on new business since the number of transferred policies is added to the existing policies in place before the percentage of growth is calculated. This allows FEMA to recognize WYO companies for actual growth and not for transferring policies from one company to another. With the new formula, WYO companies can receive a higher percentage of net written premiums as a growth bonus when the policy growth is tied to three supporting goals for selling policies: (1) in underserved areas, (2) for residential preferred risk policies, and (3) for nonresidential policies. Within FEMA, the Federal Insurance and Mitigation Administration (FIMA) manages NFIP. According to FEMA staff, about 70 staff within FIMA are dedicated to managing and overseeing the WYO program and claim processes. Their management responsibilities include establishing and updating NFIP regulations, analyzing data to actuarially determine flood insurance rates, and offering workshops and conferences to insurance agents and adjusters to explain NFIP requirements. 
In addition, FEMA is responsible for monitoring and overseeing the performance of the WYO companies to ensure that NFIP is administered properly. FEMA has processes for monitoring and providing oversight of NFIP claims that are outlined in its Financial Control Plan. Under the current plan, the processes include triennial claims operation reviews, biennial financial statement audits, and underwriting reviews. The agency also is responsible for reinspecting claims and monitoring company performance as needed. Claims operation reviews. FEMA is to conduct these reviews at every WYO company on a 3-year rotating basis, according to the Financial Control Plan. The stated purpose of these reviews is to evaluate a WYO company’s processes for administering flood claims, NFIP data reporting, and the accuracy and service the company provides customers when handling claims. As part of the review process, FEMA officials are to review the entire claim file, including coverage, policy compliance, and whether coverage limits are within NFIP statutory allowances. FEMA notes findings as critical and noncritical errors. Improper payment reviews. DHS is required by the Improper Payments Information Act (IPIA), as amended, to conduct annual reviews. Such reviews examine a statistically valid sample of payments each year to estimate the percentage of improper payments. Reinspection of claims. While the claims operation review is meant to focus on transactions at WYO companies or groups of WYO companies selected by FEMA for review, the selection for reinspection of claims is to be based on specific events or large losses. Until 2015, all claim files were subject to FEMA’s reinspection process outlined in the Financial Control Plan, which included routine reinspections as well as special assist reinspections, which are inspections of claims requested by Congress, a policyholder, a WYO company, or the DSA.
Starting in 2015, FEMA discontinued routine reinspections but continued special assist reinspections. Biennial audits. According to the Financial Control Plan, the biennial audit is to provide an independent assessment of a WYO company’s financial controls relating to its participation in NFIP and the integrity of the financial data it reports to FEMA. The audits provide an opinion on the fairness of a WYO company’s financial statements, the adequacy of its internal controls, and the extent of its compliance with relevant laws and regulations, including reporting any discrepancies found in the claims process. Audits for cause. According to WYO Financial Control Plan Monitoring procedures, FEMA can conduct these audits as a last resort if other remedies in its oversight of WYO companies have been exhausted, or at the request of OIG. The monitoring procedures also state that there have been fewer than five such audits during the program’s history. Insurance is primarily regulated by the states, unless federal law specifically relates to the business of insurance (as in the cases of flood and terrorism insurance). Requirements and processes for regulating insurance may vary from state to state, but state regulators generally license insurance companies and agents, review insurance products and premium rates, and examine insurers’ financial solvency and market conduct. According to NAIC, state regulators monitor insurers’ compliance with laws and regulations and their financial condition through solvency surveillance and examination mechanisms. Insurance regulators use insurance companies’ financial statements and other information as part of their continuous financial analysis, which is to be performed at least quarterly, to identify issues that could affect solvency. Through NAIC, the regulators also collect financial information from insurers for ongoing monitoring of financial solvency, including information on their federal flood line of insurance.
NAIC’s statutory accounting principles prescribe standards for insurer accounting and reporting of financial information, which are intended to, among other things, ensure the consistent reporting of financial information. NAIC also issues instructions for completing annual statements and related schedules and exhibits, including the Insurance Expense Exhibit, which provides premium, loss, expense, reserve, and profit data for each line of property and casualty business, including the federal flood line of insurance, and is presented both for the direct insurance written by insurers and net of reinsurance. The exhibit provides a statutory allocation of income to lines of business and may be used to measure underlying profitability of insurance operations. Each WYO company determines its own method for allocating revenues and expenses, which may vary from company to company. WYO companies have been reporting this information to NAIC annually since 1997. FEMA’s National Flood Insurance Program Write-Your-Own Accounting Procedures Manual prescribes the financial reporting requirements for all WYO companies. This manual is part of the NFIP WYO Program Financial Control Plan, which also includes transaction record reporting and reconciliation procedures. These procedures describe, among other things, expectations for the timeliness of reporting and elements of the quality review that FEMA performs on submitted data. As previously discussed, FEMA’s DSA serves as the insurer of last resort when a WYO company is unable or unwilling to write a flood insurance policy. Through the Direct Program, the DSA services both standard policies and other types of policies, including repetitive loss and group flood policies. According to FEMA officials, as of August 2016, the DSA administers 15 percent of FEMA flood insurance policies. 
FEMA pays the DSA contractor for selling and servicing flood insurance and for adjusting and processing claims after a flood event through a competitively awarded, predominantly fixed-price contract. The contractor has calculated its cost to sell and service policies as well as adjust claims following a noncatastrophic event based on its prior experience as a vendor for WYO companies. Based on this experience, the contractor charges a flat price per policy type that is not based on the premium amount. The DSA contractor also has the ability to withdraw funds on behalf of the agency from the Department of the Treasury to pay for certain actual costs, such as overhead costs for mailing and printing. FEMA oversees the DSA contractor and conducts operation reviews on the DSA’s underwriting and claims operations annually versus triennially for the WYO companies. According to FEMA officials, DSA financial-related information is subjected to the annual audit of the Department of Homeland Security’s consolidated financial statements that an independent certified public accountant performs.

FEMA continues to lack the information it needs to determine whether its compensation payments are appropriate and how much profit is included in what it pays WYO companies. Efforts by FEMA, NAIC, and WYO companies have resulted in some improvements to federal flood financial data reported to NAIC. But we found inconsistencies in how companies reported federal flood data to NAIC, which limits the usefulness of the data for setting compensation rates. Our analysis also shows that the manner in which WYO companies operate has an effect on their expenses and profits, which FEMA may find relevant when developing a WYO compensation methodology and rates. However, FEMA has made limited progress toward revising its WYO compensation methodology as required by the Biggert-Waters Act.
Efforts by FEMA, NAIC, and WYO companies have resulted in some improvements to federal flood financial data reported to NAIC that are critical to a revised compensation methodology. FEMA officials told us that since our 2009 report they have worked with NAIC and WYO companies to help ensure that reasonable and accurate operating expenses for the federal flood insurance line are being reported to NAIC. In addition, FEMA officials told us that FEMA has analyzed WYO company financial data since 2009 to monitor improvements in the companies’ federal flood data, but has found mixed results. A FEMA official told us that after issuance of our 2009 report, the agency conducted site visits to four WYO companies to review the actual flood insurance data the companies submitted to NAIC. The official said that they found the visits helpful in understanding how the companies were reporting the financial results of their flood insurance lines. However, this official explained that it would require too many resources to meet with all the WYO companies individually and FEMA has not made any further company-specific inquiries or visits. As a result of these initial efforts, NAIC amended its guidance in 2011 on the reporting of WYO commission and fee allowances in response to FEMA’s request. According to FEMA, this change was intended to address one issue we found during our 2009 engagement—specifically, that WYO companies were subtracting (netting) WYO compensation from expenses. Reporting expenses net of compensation, rather than gross, for the flood insurance they wrote resulted in higher calculated profits. We found that 2 of the 10 companies whose data we analyzed for this report changed from net to gross expense reporting in the first year in which the NAIC guidance was effective or at some point before 2014, and 7 WYO companies reported expenses gross, not net, during the 2008–2014 period.
Only one of the companies we selected for review continued to report a portion of its expenses net of compensation in 2014. In addition, WYO companies have made other improvements to the federal flood insurance financial data they report to NAIC beyond reporting expenses. For example, our analysis confirmed that one WYO company revisited how certain expenses for servicing flood policies were allocated and reported to NAIC. Two other companies made changes in how they report losses—one changed its method for estimating losses reported to NAIC to be consistent with the method it used to report such losses to FEMA, while another said it changed its policy of reporting certain loss adjustment expense reimbursements as an offset to incurred losses reported to NAIC. These reporting changes collectively improved the quality of the NAIC financial data necessary to ensure comparability with the financial data the WYO companies submit to FEMA, which is important for determining the amounts to be built into compensation rates for estimated expenses and profits.

To verify the accuracy of the NAIC data, FEMA officials told us that they request and analyze the federal flood insurance data that WYO companies report to NAIC around April or May of each year. FEMA officials explained that the benefit of using NAIC data is that the data are reported on WYO companies' annual statements and it is more cost-efficient to get all the data from one source than for FEMA to independently collect and verify the data from each WYO company. WYO companies' financial statements are submitted to both NAIC and state regulators. A FEMA official told us that FEMA's analysis, which it has performed periodically since 2009, has included comparing each WYO company's premiums and losses reported to NAIC to the figures the companies report on the financial statements they submit to FEMA.
FEMA has also compared the aggregate homeowners' underwriting and loss adjustment expense ratios of the WYO companies to those of non-WYO companies. FEMA officials told us that for the largest 8 to 10 WYO companies, FEMA has also compared their underwriting and loss adjustment expense ratios (expenses expressed as a percentage of written premiums) for flood to the same ratios for their homeowners lines for a 5-year period to determine if a correlation exists between the companies' costs of operating these two lines of business. FEMA has prepared a report showing underwriting expenses and loss adjustment expenses grouped into various ranges, such as negative expense ratios, “0-5 percent,” and “5-10 percent,” to assess the trend in WYO company expenses over time. In September 2016, FEMA officials estimated that WYO companies that make up about 80 percent of the net written premiums reported had adequately improved the quality of the underwriting expense data reported to NAIC. FEMA officials also said that these companies usually have underwriting expenses between 20 percent and 30 percent of their net written premium, or an average expense ratio of 25 percent. By using that information as a model and excluding WYO companies with expenses that fall outside that range, FEMA officials stated that they may be able to use data from these companies to set future commission and operating expense allowances for all WYO companies. However, FEMA officials noted that the loss adjustment expense ratios varied much more significantly from company to company than did underwriting expense ratios. They also observed negative loss adjustment expense ratios for some WYO companies, although they added that such ratios can occur as a result of changes in loss reserve estimates.
FEMA officials also said that they generally found great inconsistency in how WYO companies were reporting expenses between two categories of loss adjustment expenses, which affected their ability to assess the reasonableness of the expense ratios of a single year, but had greater success in doing so when the ratios were calculated based on total loss adjustment expenses for a 5-year period.

According to an August 2016 WYO Bulletin, beginning with the fiscal year 2017 arrangement year, FEMA intends to require that WYO companies provide FEMA with copies of all data submissions to NAIC related to their flood insurance activities and attest to the accuracy of those submissions. FEMA stated in the bulletin that this requirement would be aligned with the arrangement's specification that, upon request, WYO companies supply FEMA with a true and correct copy of their property and casualty annual financial statements filed with state insurance regulatory agencies, and with the arrangement's requirement providing FEMA access to all records of WYO companies pertinent to the arrangement. FEMA also stated in the bulletin that this requirement will support FEMA's efforts to pay WYO companies based on the actual expenses the companies incur.

We found that WYO companies were not consistently reporting their federal flood financial data to NAIC. The inconsistencies we found in the data WYO companies reported to NAIC resulted in unreported underwriting and loss adjustment expenses of varying amounts and significance by 8 of the 10 companies we reviewed. Further, we found that some WYO companies reported different losses and related reserves to NAIC and FEMA. More than half of the companies we reviewed did not report to NAIC all of their adjuster fees and other expenses incurred on the companies' flood losses and provided a variety of explanations for their accounting practices.
Nearly all of the WYO companies we reviewed told us that they reported adjuster fees as a direct expense of the flood insurance line, but one WYO company told us that its interpretation of the NAIC rules was that adjuster fees should be reported as an expense ceded to FEMA and, thus, not reported as a direct expense to its flood line. Similarly, four WYO companies told us that they did not record reimbursable legal, engineering, appraisal, and other adjuster fees as direct expenses of their flood lines because, among other reasons, some viewed these as FEMA's expenses and not the company's, although this was not the practice for the remaining companies we reviewed. Still another WYO company told us that it reports policy- or claim-specific expenses to the flood line, but does not report indirect expenses, such as claim handling fees paid to its vendor. Collectively, based on our analysis, these unreported loss adjustment expenses amounted to about $14 million.

Also, some companies did not report certain related operating expenses for their federal flood line. These expenses included fees paid to flood vendors, premium taxes, and internal company overhead expenses that would normally be classified as a type of underwriting expense. However, due to the WYO companies' established practices at the time and their interpretation of NAIC's rules, these expenses were either not allocated to the federal flood line or were reported on the books of an affiliated company. Collectively, based on our analysis, these unreported operating expenses amounted to approximately $52 million. As discussed below, these unreported expenses had a significant effect on the combined profits of these companies.

The inconsistencies we found in how premiums are reported to FEMA and NAIC had little effect on individual company profit calculations.
For nearly all companies we reviewed, differences in premiums WYO companies reported to FEMA and NAIC in 2014 were negligible (less than plus or minus 1 percent) and had a negligible effect on reported commission and underwriting expenses, and profit. Any differences that existed were generally attributable to timing differences—linked to a lag in WYO companies receiving financial data from vendors that, in turn, affected the companies' reporting to NAIC. Some of the differences we identified in incurred losses reported to FEMA and NAIC were also due to this reporting lag, but as the timing of floods and the payment of claim losses are less predictable than premium payments, the lag in reporting had a more significant effect on reported losses and loss adjustment expenses. For example, we determined the effect of the lag for one company was about 5 percent of incurred losses reported to FEMA, whereas for another company the effect was greater than 25 percent of incurred losses.

However, the inconsistency with the greatest effect on individual company-reported losses was related to how certain companies estimated incurred but not reported losses and related adjustment expenses. Three of the WYO companies told us that the actuarial methodology they used to develop incurred but not reported loss estimates for NAIC reporting purposes differed from the methodology their vendor used to develop the estimates submitted to FEMA. Another company told us it accounted for its federal flood activity entirely on a cash basis and did not, therefore, report any unpaid loss and loss adjustment expense reserves to NAIC. To compare loss adjustment compensation with actual expenses, we adjusted these companies' reported expenses to remove the effect of these differences and substituted expense estimates we developed based on the loss and loss adjustment expense reserves the companies' vendors reported to FEMA.
Collectively, the net effect of our adjusting for these differences in reported losses and loss reserves (some companies reported significantly higher losses and loss reserves to FEMA than to NAIC, while others reported significantly lower estimates) was a net increase in reported loss adjustment expenses of more than $5 million. These adjustments are reflected below along with the unreported expenses noted above.

We performed additional analyses and comparisons for 10 selected WYO companies to adjust for the inconsistencies discussed previously and determine the effect of the revised amounts on expenses and profits. The 10 companies we selected accounted for a majority of net written premiums, net paid losses, and total compensation paid for calendar year 2014 (see table 2). For more details about our methodology and the limitations of our analysis, see appendix I.

An initial comparison of selected WYO company compensation with the expenses the companies reported to NAIC appears to show that the companies collectively earned a profit of 25 percent in 2014. However, after we adjusted the reported expenses for the effects of the inconsistent reporting described previously, we estimated that the companies earned a profit of approximately 15 percent on the flood insurance line (see table 3). While an aggregate measurement of profitability for all selected WYO companies can be calculated, this calculation is significantly influenced by a few WYO companies that dominate the flood insurance market and whose business model and cost structure may be different from those of the majority of insurers that participate in the WYO program. The 2014 flood insurance profits of the companies we reviewed, after our adjustments, ranged from approximately 2 percent to 38 percent.
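To illustrate how these adjustments change the aggregate profit calculation, the following sketch recomputes profit after adding the expense adjustments described above. This is not GAO's actual computation: the dollar amounts for compensation and reported expenses are hypothetical stand-ins (not the table 3 figures), only the three adjustment amounts come from our analysis, and we assume for illustration that profit is measured as a share of total compensation paid.

```python
# A minimal sketch, not the actual table 3 computation: recompute aggregate
# WYO profit after adding the expense adjustments described above. The
# compensation and reported-expense amounts are hypothetical; only the
# three adjustment amounts are taken from the analysis in the text.

def profit_pct(compensation, expenses):
    """Profit as a percentage of total compensation paid (assumed basis)."""
    return round(100 * (compensation - expenses) / compensation, 1)

compensation = 710.0       # hypothetical total compensation, $ millions
reported_expenses = 532.5  # hypothetical expenses as reported to NAIC

adjustments = {
    "unreported loss adjustment expenses": 14.0,   # ~$14 million
    "unreported operating expenses": 52.0,         # ~$52 million
    "net loss and loss-reserve differences": 5.0,  # >$5 million net increase
}
adjusted_expenses = reported_expenses + sum(adjustments.values())

print(profit_pct(compensation, reported_expenses))  # apparent profit: 25.0
print(profit_pct(compensation, adjusted_expenses))  # adjusted profit: 15.0
```

Because the adjustments are fixed dollar amounts, their effect on the profit percentage scales inversely with the size of the compensation base.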
Removal of the two WYO companies that represent the outliers of this range would result in a total profit of approximately 18 percent and a profit range that still varies significantly, between 7 percent and 28 percent, for the remaining eight companies. Importantly, our analysis and ability to estimate WYO company expenses and profit were subject to certain limitations (see app. I for details on these limitations and their potential effects), which included covering only 1 year (2014). In addition, our 2014 estimates of company expenses and profit are an outcome of our effort to understand the issues surrounding the inconsistent financial reporting by the selected WYO companies and the various factors that can affect company expenses and profit. For these reasons, these estimates should not be taken as a static or predictable indicator of WYO company profits.

Aside from the inconsistencies in reporting financial data, other factors specific to how WYO companies operate their flood line of business also can affect a company's expenses and profits. These company-specific factors, coupled with the inherent uncertainty of the frequency and severity of loss events, the overall market for flood insurance, and changes to the flood insurance program's design and requirements, can present challenges in developing the WYO compensation methodology. Further, these factors can also present challenges in setting rates that appropriately compensate WYO companies over time for providing services to policyholders. Based on our analysis of the costs of operating their flood lines of business, we found that companies' operating characteristics could in part explain the significant variance in expenses and profits. One way to understand the amount of WYO company expenses and profits is in relation to the premiums paid by policyholders.
As noted in table 2, total compensation paid to the WYO companies we reviewed represented approximately 35 percent of net written premiums. That is, 35 cents of every premium dollar paid by policyholders went toward compensation for the selected companies. As shown in table 4, by breaking down compensation into expense and profit components, slightly more than 5 percent of every dollar of premium written by the 10 WYO companies went to their profit. As demonstrated previously, WYO company expenses and profit vary significantly and those variances can be explained in part by the companies’ operating characteristics. For example, some WYO companies we interviewed told us that they used independent agents and generally paid these agents a commission higher than the 15 percent allowance FEMA provides. Further, the companies we selected for review had commission expenses of 17.7 percent of net written premiums on average (see table 4). Some WYO companies attributed the higher commission to stiff competition in writing new business or keeping current policies in place in certain markets. Agent commissions can vary not only from company to company, but also by volume of sales, across lines of property and casualty insurance, and between new business and renewal of existing policies. However, we did not determine whether agents’ commissions for selling NFIP policies were affected by how insurers compensated agents for selling other lines of property and casualty business. Also, nearly all of the WYO companies we reviewed told us that they pay adjusters the same amount that FEMA provides as an adjuster fee allowance and, thus, do not earn a profit on this category of compensation. The operating expense allowance, including policy growth incentive bonuses, and the ULAE allowance are the remaining categories of compensation on which WYO companies can earn a profit and, thus, offset losses on agent compensation. 
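The premium-dollar arithmetic in this section can be made concrete with a short calculation. The three shares below come from the aggregate figures cited in the text; the residual "other expenses" share is derived by subtraction and is approximate, since the profit share is described only as "slightly more than 5 percent."

```python
# Back-of-the-envelope breakdown of one policyholder premium dollar for the
# 10 selected WYO companies, using the aggregate percentages cited above.
# The "other expenses" residual is our derivation, not a reported figure.

compensation_share = 0.35  # total compensation / net written premiums
commission_share = 0.177   # average agent commissions paid (vs. 15% allowance)
profit_share = 0.05        # slightly more than 5 cents per premium dollar

# Compensation not paid out as commissions or retained as profit must cover
# the companies' remaining costs (adjusting, operations, vendors, overhead).
other_expense_share = compensation_share - commission_share - profit_share

print(f"commissions:    {commission_share:.3f}")
print(f"profit:         {profit_share:.3f}")
print(f"other expenses: {other_expense_share:.3f}")
print(f"total:          {compensation_share:.2f} of each premium dollar")
```

The roughly 12-cent residual helps explain why differences in operating characteristics, such as vendor fees and overhead allocations, can drive the wide variance in company profits noted above.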
The operating and ULAE allowances compensate WYO companies for the expenses incurred to operate and administer their flood lines and fulfill the companies’ obligations under their agreements with FEMA, but are not directly associated with selling specific policies or adjusting specific claims. It is in these areas that the companies’ operating characteristics and their compensation of vendors can more directly affect the expenses they incur and the profits they earn on the federal flood line. In addition to premium taxes and fees, these allowances cover such insurer expenses as salaries and benefits of company personnel, printing and postage, advertising, equipment, training and travel, audit and legal services, and other expenses. The expenses are incurred to fulfill company obligations, such as to underwrite and issue policies; collect, remit, and account for funds; submit financial and statistical reports; conduct audits and reviews; and manage all aspects of the claims process. All of the WYO companies we reviewed use vendors to some extent to operate their flood lines. Most of the WYO companies used third-party vendors, while the others used an affiliated company to provide various services. Many of the companies that used third-party vendors told us that they generally outsource policy, claims, reporting, and other functions to their vendor, although some use a vendor’s systems software and retain responsibility for underwriting policies and adjusting claims. Vendors we interviewed said that they offer a variety of service levels that WYO companies can choose from depending on the degree of control they want over the underwriting and claims processes and, thus, the customer service experience of their policyholders and agents. WYO companies and vendors told us vendors are paid a percentage of gross or net written premiums and ULAE allowances and may be paid for additional expenses incurred in providing services above what is provided for in the base contract. 
Third-party vendors with whom we spoke said that the amount WYO companies pay depends on the nature and extent of the services provided and the volume of premiums and losses. We were not able to obtain information from all WYO companies about how much they pay their vendors, but we were able to estimate the amounts paid from the information some companies provided. We noted that what the WYO companies paid third-party vendors varied by 2 percent or more of net written premium. In addition, we observed that some companies paid their vendors up to twice as much of their incurred-loss-based ULAE compensation as others did. Such differences in vendor compensation can affect WYO company flood line profits. And because vendor compensation is based in large part on FEMA's allowances or the written premiums and losses on which those allowances are based, changes in those allowances will, absent changes to the vendor contracts, carry through to the vendors.

We identified expenses of approximately $80 million in aggregate that three WYO companies paid to their affiliated vendors in 2014; this amount represents approximately 12 percent of total adjusted expenses (see table 3 above). Company representatives told us that the affiliated vendors provided policy administration, claims processing, cash management, reporting, and other services that third-party vendors typically offer, and in some cases additional management, financial, and legal and regulatory services commonly performed by an insurer's employees. Some companies told us that the fee charged was either intended to cover only the affiliate's expenses or was equivalent to what they would expect to pay a third-party vendor for the same services. We did not determine the amount of intercompany profits or losses reflected in the expenses these WYO companies reported, and one company told us that this information is not made public.
Without more specific information on the affiliated vendors' activities and intercompany profits and losses, it would not be possible to determine how the fees charged by these affiliated vendors compare to what a third-party vendor otherwise would charge in an arm's-length transaction. Excluding intercompany profits and losses (or a portion thereof) from expenses would increase or decrease, respectively, the profit shown in tables 3 and 4.

In addition to vendor fees, some of the WYO companies whose data we analyzed allocate internal company overhead expenses for corporate-wide support functions to their flood lines. Companies told us that they allocate overhead expenses in accordance with the methods prescribed by NAIC. In some cases companies told us that expenses were allocated based on the results of cost studies for those functions that support the federal flood line or were allocated to each line of property and casualty business in proportion to factors such as head count, salaries, and premiums written or earned. In the cases in which we were able to obtain sufficient information to determine how much of the WYO companies' expenses were allocated to overhead, we observed that overhead as a percentage of net written premium ranged from less than 1 percent to almost 3 percent. The amount of overhead allocated to the flood line can affect the company's profit on this line, and the variances we observed may reflect the relative significance of the federal flood line to the WYO companies' total property business and the extent to which certain activities are performed by internal WYO company personnel versus their vendors.

Aggregate industry average expense ratios and WYO company flood line expenses and profit are both historical in nature and, as such, may not fully account for current conditions and the effects that changes to the flood program's design and requirements may have on WYO companies' expenses and profits in the future.
The 10 WYO companies whose reporting we reviewed cited a number of factors that they consider when evaluating the WYO arrangement in relation to their financial and strategic goals. Some WYO companies told us that their goals can be met as long as they are able to offer flood insurance as part of a full menu of products that help meet the financial needs of their customers without undue financial and reputational risks being placed on the company. Some companies specifically cited as a concern the mandates imposed by Congress and FEMA as part of recently enacted legislation (the Biggert-Waters Act and the Homeowners Flood Insurance Affordability Act of 2014) that the companies said imposed significant unreimbursed costs on them. Some WYO companies also stated that additional fees, assessments, and surcharges imposed by this legislation added to customers' out-of-pocket costs. According to the WYO companies, these additional costs to consumers resulted in some property owners dropping their flood coverage and leaving the WYO companies with a smaller policy base.

FEMA has not yet revised its compensation methodology in response to section 224 of the Biggert-Waters Act or our prior recommendations and continues to rely on insurance industry proxies for other lines of insurance when setting compensation rates (see table 1 for FEMA's compensation practices). The Biggert-Waters Act built on our 2009 recommendations and required that FEMA take into account actual expenses and determine in advance the amount of profit built into its compensation rates. FEMA officials told us that the agency began the rulemaking process in late 2014 in response to the Biggert-Waters Act requirements, but that its progress had slowed as litigation over Hurricane Sandy claims escalated and more resources were assigned to that issue. As of September 2016, FEMA was unable to provide a timeline for completing the rulemaking required under section 224.
One FEMA official explained that it is difficult to determine a timeline for rulemaking since some elements of the process, such as economic analysis and the concurrence process through FEMA and DHS, are beyond the agency's control. In September 2016, FEMA officials told us that an upcoming regulatory action in response to section 224 of the Biggert-Waters Act would address FEMA's new methodology for compensating WYO companies, as well as fully address our open recommendation from the 2009 report related to compensation and data quality. However, FEMA has not made clear whether its expense ratio analysis, planned data requests, and WYO company attestations of the accuracy of their financial data (as discussed previously) represent the entirety of the agency's plan to ensure the accuracy of the data WYO companies submit to NAIC. FEMA also has not made clear whether—in light of its own observations on unusual expense ratios and our findings of inconsistent WYO company reporting—it intends to make other inquiries and perform other analyses that will fully address our recommendations.

Among the 10 recommendations in our 2009 report, the following five relating to compensation methodology and data quality have not been fully addressed. We recommended that FEMA (1) determine in advance the amounts built into the payment rates for estimated expenses and profit; (2) annually analyze actual expenses and profit in relation to the estimated amounts used in setting payment rates; and (3) consider the results of the analysis of payments, actual expenses, and profit in evaluating methods for paying WYO companies. We also recommended that FEMA increase the usefulness of the data WYO companies report to NAIC by (1) taking actions to obtain reasonable assurance that expense data can be considered in setting payment rates and (2) developing data analysis strategies to annually test the quality of flood insurance data the companies report to NAIC.
Federal managerial cost accounting standards state that reliable cost information is critical to the proper allocation and stewardship of federal resources and that actual cost information is an important element agency management should consider when setting payment rates. Our 2009 recommendations to FEMA remain relevant as the agency seeks to develop a compensation methodology as required by the Biggert-Waters Act. These recommendations included that FEMA determine whether data reported to NAIC could be used to set WYO compensation rates and that it develop comprehensive analysis strategies to annually test the quality of the data. Although FEMA has reported improvements to the data that WYO companies submit, and has compared underwriting expense ratios to the related allowances it pays insurers, FEMA stated that, due to resource limitations, it has not yet compared WYO companies' reported expenses to the payments it makes to them or determined the companies' profits. As a result, and as we noted in 2009, FEMA does not have the information it needs to determine whether its payments are appropriate and how much profit is included in its compensation of the WYO companies. In addition to helping identify potential inconsistencies in expense reporting, such a comparison of compensation payments and actual expenses would help FEMA identify differences in how individual companies operate and the related effects on company expenses and profit. As discussed previously, we found that the manner in which a WYO company operates has an effect on its expenses and profits and is thereby relevant for FEMA to take into consideration as it develops its new compensation methodology. FEMA's completion of additional actions to improve data quality and transparency and accountability over compensation will help it meet Biggert-Waters Act requirements.
The over- and underpayments of NFIP claims identified in fiscal years 2008–2015 varied depending on the type of review conducted as part of FEMA's claims oversight. FEMA officials, some WYO company representatives, and some stakeholders agreed that over- and underpayment of NFIP claims were not widespread and cited several factors that contributed to over- and underpayment issues. A recent DHS OIG report found that, among other things, FEMA was unable to ensure that WYO companies were properly implementing NFIP and unable to identify systemic problems in the program. Currently, a FEMA working group is developing a new WYO oversight plan to address financial oversight, claims, underwriting, appeals, and litigation.

To obtain information about over- and underpayments of NFIP claims, we reviewed available data from FEMA documenting triennial claims operation reviews, improper payment reviews, claims reinspections, biennial audits, and audits for cause for fiscal years 2008–2015. We found that the extent of over- and underpayments varied, depending on the type of review conducted. The vast majority of WYO companies received satisfactory ratings in FEMA's recent claims operation reviews, and overpayments by companies and the DSA ranged from 2.7 percent to 6.7 percent of claim amounts reviewed. Between fiscal years 2008 and 2015, the number of WYO companies that received unsatisfactory ratings on their claims operation reviews ranged from zero to three each year.

Under the current Financial Control Plan, FEMA reviews samples of WYO claim files during claims operation reviews. FEMA reviewers note findings as critical and noncritical errors, and the current plan allows a 19 percent error rate; an overall error percentage of 20 percent or higher is a basis for an unsatisfactory rating.
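The rating threshold just described can be sketched as a simple check. The function and its names are our own illustration, not FEMA's specification, and how fractional error percentages between 19 and 20 percent are treated is an assumption here.

```python
# A minimal sketch of the claims operation review threshold: the current
# Financial Control Plan allows a 19 percent error rate, and an overall
# error percentage of 20 percent or higher is a basis for an unsatisfactory
# rating. Treatment of fractional percentages between 19 and 20 is assumed.

def review_rating(errors_found, files_reviewed, allowed_error_pct=19):
    """Rate a claims operation review by its overall error percentage."""
    error_pct = 100 * errors_found / files_reviewed
    return "satisfactory" if error_pct <= allowed_error_pct else "unsatisfactory"

print(review_rating(19, 100))  # 19% error rate -> satisfactory
print(review_rating(20, 100))  # 20% error rate -> unsatisfactory
```

Lowering `allowed_error_pct` to 10 would model the tightened threshold FEMA announced for fiscal year 2017.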
According to an August 2016 WYO bulletin, FEMA planned to reduce the acceptable error percentage for claims operation reviews to 10 percent starting in fiscal year 2017, to better encourage WYO companies to adopt policies and practices designed to handle flood insurance claims more accurately and to ensure that WYO companies pay all claims authorized by the Standard Flood Insurance Policy. Examples of critical errors in files include claim payments that exceed the policy terms, incorrect payments, and significant payment delays. FEMA's review steps provide an opportunity for WYO companies to respond to and resolve errors before the agency issues a final report. In fiscal year 2015, FEMA's claims operation review of 866 claims found 23 overpayments totaling $80,202 and 15 underpayments totaling $93,256. The percentage of overpayments in 2015 was lower than in previous years, while the number and percentage of underpayments were higher than in previous years (see table 5). FEMA officials said that the particular companies selected for review in 2013, or lower losses overall, might have contributed to fewer overpayments that year compared to other years. FEMA officials noted that although claims operation reviews required identifying a selection of claim files for review, results were not generalizable to the larger population of claim files for a WYO company or across NFIP.

Under the current Financial Control Plan, FEMA can refer WYO companies with unacceptable performance to the Standards Committee, which can recommend appropriate remedial actions for companies with performance issues. For example, the committee can require WYO company managers to address performance issues at a committee meeting, require a WYO company to develop and satisfy a plan to remedy its performance issues, monitor performance until the WYO company achieves acceptable levels of performance, and recommend that FEMA not renew a company's WYO arrangement.
In 2002, the Standards Committee recommended that FEMA not renew one company's WYO arrangement. According to FEMA officials, the company's inability to resolve underwriting errors contributed to its departure from the WYO program. Since 2008, five WYO companies have appeared before the Standards Committee to address performance issues. One WYO company appeared in 2011 to address unsatisfactory underwriting and claims operation reviews. Three other WYO companies appeared between 2008 and 2010 to address unsatisfactory underwriting operation reviews; one of these companies was among the largest group of writers of flood insurance from 2008 to 2014. The fifth company, also among the largest group of writers of flood insurance from 2008 to 2014, appeared before the Standards Committee in 2014 to address its administrative processes for debt collection.

According to FEMA officials, DHS's Office of the Chief Financial Officer conducts improper payment reviews annually. These reviews examine NFIP policies written by WYO companies as well as those written by the DSA. Under IPIA, an improper payment is any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. According to FEMA officials, improper payment reviews identify a statistically valid sample and the results are generalizable to the entire population, whereas FEMA's claims operation review results, as discussed previously, are not generalizable. They said another difference is that the claims operation reviews select entire claim files for review, while the improper payment review tests individual payments; a claim file can include multiple payments. FEMA's most recent improper payment reviews found that improper payments in NFIP claims for fiscal years 2012–2014 occurred less than 0.2 percent of the time, well below FEMA's threshold of 1.5 percent (see table 6).
For example, the fiscal year 2014 review of 338 payments found two improper payments (one overpayment and one underpayment), for an error rate of approximately 0.16 percent. According to a recent IPIA audit report, errors can be typographical, such as inconsistencies in recording payment amounts across building estimates, final reports, claims summaries, or checks issued. Errors can also derive from the estimation of recoverable depreciation; for example, an adjuster might not have included replacement cost value in the final claims payment calculations. As previously discussed, until 2015, FEMA conducted routine reinspections of claims files, randomly selected by flood event, size of loss, or class of business. In addition, FEMA selected claims for reinspection in response to requests from within the agency, WYO companies, appeals from policyholders, and requests from Congress (special assist reinspections). Starting in 2015, FEMA discontinued the routine reinspections. According to agency officials, they discontinued this type of review because the annual IPIA review provided comparable information. In August 2016, FEMA officials confirmed that the agency planned to continue conducting special assist reinspections and also was piloting a random claims quality check to review and analyze NFIP claims early in the claims process to identify any systemic claims processing issues associated with particular flood events. From 2010 to 2015, the agency reviewed between around 50 and more than 2,400 claim files each year through a combination of routine reinspections and special assist reinspections. In this period, FEMA’s reinspections identified underpayments totaling more than $5.95 million and overpayments of about $2.34 million (see table 7). According to FEMA officials, heightened attention to claim underpayments following Hurricane Sandy might have led to increased scrutiny of possible underpayments in recent years.
The total number of claims reinspected increased in 2013, after Tropical Storm Isaac and Hurricane Sandy in 2012, which as of July 31, 2016, had caused more than $555 million and $8.3 billion in NFIP losses, respectively. The number of claims FEMA reinspected generally declined from 2013 through 2015. A FEMA official attributed the decline to the fact that FEMA bases the number of reinspections it conducts on the number of claims received from flooding events, and there were no significant flooding events during this period. Table 8 shows the numbers and types of claims reinspections initiated from fiscal years 2010 through 2015. No WYO companies received unsatisfactory biennial financial audit ratings during fiscal years 2010–2015; prior to that, two WYO companies received unsatisfactory ratings in 2009. Most recently, in fiscal year 2015, FEMA conducted 37 biennial audits, which resulted in 36 satisfactory ratings and 1 nonrating for which FEMA planned to follow up in its 2016 review. According to FEMA officials, a WYO company receives a satisfactory rating from FEMA when it receives an unqualified opinion from the auditor. Normally, every company receives a rating, but a company exempt from the biennial audit in the reporting year receives a nonrating. According to the current Financial Control Plan, FEMA’s biennial audits of WYO companies include claims, underwriting, and financial reviews. For the claims portion of the audits, FEMA identifies a random sample of a WYO company’s claim files for an independent auditor to verify, among other things, that adjuster reports contain adequate evidence to substantiate the payment or denial of claims, including the amount of losses, and that building and contents allocations are correct. DSA financial-related information is handled differently: it is subject to the annual audit of DHS’s consolidated financial statements performed by an independent certified public accountant.
FEMA officials told us that, as of December 2015, the contracting officer responsible for managing the DSA contract met with the DSA contractor biweekly to discuss any issues with the company’s data submissions to the agency. In addition to the oversight processes described above, according to the current Financial Control Plan, FEMA can conduct audits for cause on its own initiative or upon the recommendation of the Standards Committee or OIG when certain criteria are met. According to FEMA’s Financial Control Plan Monitoring procedures, an audit for cause is a last resort, used if other remedies available to the Standards Committee are exhausted, OIG requests one, or agency officials believe immediate action is necessary. For example, FEMA could determine an audit for cause was necessary based on claims reinspection results showing consistent overpayments or biennial audits showing significant problems. According to agency officials, FEMA has not conducted any audits for cause as a result of biennial audits since 2007. The officials also were unaware of any audits for cause having been conducted as a result of claims reinspections. FEMA officials, some WYO company representatives, and some stakeholders agreed that over- and underpayment of NFIP claims was not widespread. We asked stakeholders for their perspectives on any over- or underpayment of NFIP claims, and none who responded on this issue described NFIP over- or underpayments as widespread. Some WYO company representatives said companies do not typically consider claim over- and underpayments a significant issue because companies or their vendors have procedural safeguards to help ensure they pay claims appropriately. Some WYO company representatives said over- and underpayments of NFIP claims were caused by factors similar to those behind over- and underpayments in other property and casualty lines. According to FEMA officials, lack of documentation was the main cause of overpayments.
For example, they said overpayments could happen when the contents of a policyholder’s home were not adequately documented or when an adjuster did not correctly calculate losses (for example, by using actual cash value or incorrect depreciation figures). According to FEMA officials, WYO companies generally have reimbursed FEMA for any overpayments they identified, and the companies would request reimbursement from the insured in cases of large or potentially fraudulent overpayments. For example, FEMA recovered the $61,439 overpayment from fiscal year 2014 identified in the IPIA review. The officials said that when the DSA has identified overpayments, it also has sought to recoup the money from the recipient insureds. Representatives of several WYO companies and two stakeholders said that companies lacked incentives to underpay claims, and FEMA officials said underpayments were generally small and typically resulted from mathematical errors. Representatives from most WYO companies with whom we spoke said that companies typically did not track underpayments. According to FEMA officials, representatives of two WYO companies, and two stakeholders including a vendor, some policyholders lack an understanding of the terms of NFIP coverage. They said policyholders sometimes expect to be made whole after a flood event, but the NFIP standard flood insurance policy coverage is limited to direct physical losses by or from flood, depending on the type of insured property and the amount of coverage obtained. In a 2014 report, we found that homeowners may not understand their insurance coverage well enough to know what is covered, what is excluded, and what loss events and circumstances might result in paid, partially paid, or denied claims. Disaster events could highlight differences between consumers’ expectations for insurance and their actual coverage, resulting in added frustration.
Representatives of some WYO companies and a few stakeholders said factors related to the nature of the claims process and large loss events contributed to over- and underpayment issues, including the following:

Nature of the claims adjustment process. The claims adjustment process can lead to differences across claims. Representatives of one WYO company said that claim adjusters must make judgment calls with respect to calculating depreciation. For example, three experienced adjusters might calculate three slightly different estimates for the same claim, according to representatives of another WYO company.

Large claims volume. According to two stakeholders, processing a large volume of claims can contribute to claims processing errors and lead to increased perceptions that over- and underpayments are an issue.

Inexperienced adjusters. Lack of qualified adjusters after large storms can lead to claims processing errors. Representatives of a WYO company said public adjusters often lacked NFIP experience. To meet immediate needs for assessing damage caused by recent large storm events, FEMA provided a limited waiver of claim adjuster certification. According to a stakeholder, this practice led to hiring claim adjusters who otherwise would not have met FEMA’s qualifications. Representatives of the WYO company said inexperienced adjusters might give claimants false hope about the amounts they might receive, leading to perceptions of underpayments. A representative from another WYO company said adjusters learn on the job, and having a few errors was not unusual for a complex line of business like NFIP. In addition, representatives of the WYO company said streamlining adjusting software could help address this issue. The WYO company representatives and a stakeholder said training and additional oversight of adjusters were needed.

Changes by FEMA to the Standard Flood Insurance Policy claims process.
In the 4 years following Hurricane Sandy, FEMA issued several bulletins outlining processing changes for claims associated with the loss event, which may have further complicated what some described as an already complex process. Among these changes, FEMA allowed WYO companies to pay claims after receiving an adjuster’s estimate but before a policyholder provided all necessary paperwork, with the expectation that additional payouts would be required once the losses were fully documented. FEMA issued three extensions to the 60-day filing window for policyholders to submit proof of loss information to their WYO insurer, extending the filing window to 1 year, then 18 months, and finally 24 months after the event.

Market fluctuations. Replacement cost calculations might change between the time an adjuster develops an estimate and the time a contractor begins repairs.

In March 2016, a DHS OIG report found that although FEMA performed the required oversight reviews of WYO companies in accordance with the agency’s Financial Control Plan, it could improve its processes. For example, the OIG report stated that FEMA was not using the results from its Financial Control Plan reviews—including claims operation reviews, biennial audits, and claims reinspections—to make WYO program improvements because the agency lacked adequate guidance, resources, or internal controls. Among other findings, OIG found that FEMA was unable to ensure that WYO companies were properly implementing NFIP and unable to identify systemic problems in the program. FEMA management acknowledged that NFIP lacked a consistent or reliable method to identify systemic problems or recognize patterns or warning signs. The OIG report recommended that FEMA develop and implement procedures to evaluate the results of the oversight under the Financial Control Plan and determine the overall effectiveness of established NFIP internal controls.
In response, the agency planned to evaluate the Financial Control Plan review process and make recommendations to improve its oversight of WYO companies, with the recommendations expected by December 30, 2016. As of August 2016, a FEMA working group was developing a new WYO oversight plan to address financial oversight, claims, underwriting, appeals, and litigation, to be completed by January 2017. According to FEMA officials, the working group would update the Financial Control Plan after developing the WYO oversight plan. They said FEMA planned to monitor WYO company error rates on claims and underwriting operation reviews as part of its WYO company oversight, and its oversight would include performance measures. Prior to the issuance of the OIG report, FEMA had begun evaluating the customer experience to further identify ways to align NFIP and FEMA’s processes around the policyholder. For example, according to agency officials, in 2015 FEMA surveyed approximately 2,000 policyholders to understand customers’ priorities and found, among other things, that customers would prefer a simplified program and more coverage choices. Furthermore, FEMA has begun reorganizing FIMA, including separating it into two branches—one to oversee the WYO program and the other to oversee the DSA—and establishing separate claims and claims appeals processes. To improve claims processing, FEMA planned to gather more real-time claim data from WYO companies and the DSA to enhance the customer experience and detect problems or errors as they occur. To improve the claims appeals process, the agency established a new appeals branch within FIMA’s Policyholder Services Division devoted to redesigning and overseeing the appeals process and planned to implement changes by December 30, 2016. According to FEMA officials, these changes would help address a March 2016 OIG recommendation that the agency properly document and update existing procedures for the claims appeal process.
In addition, in an effort to understand how policyholders move through the claims process after flood events and possible issues with that process, FEMA began obtaining detailed claims information from WYO companies on a weekly basis. According to agency officials, the data, while unverified and unedited, provided insights into the claims process not previously available to FEMA following large loss events. According to our analysis and interviews, the current WYO arrangement provides advantages to consumers and insurers but disadvantages to FEMA in overseeing a large number of companies. While potential alternatives involving fewer participating WYO companies could ease oversight for FEMA, these alternatives could lead to reduced market penetration, among other trade-offs. Most WYO companies we interviewed preferred the current WYO arrangement over any of the three potential alternatives we identified. All the potential alternatives involve FEMA contracting with participating companies, a status that most WYO company representatives cited as creating more regulatory burden because of federal contract requirements. Based on our analysis and interviews with FEMA, WYO companies, and stakeholders (relevant organizations and vendors), the current WYO arrangement has trade-offs (see table 9). For example, while competition among the approximately 75 companies under the current arrangement may lead to improvements in customer service, the large number of companies increases the amount of oversight FEMA must provide. Representatives of most WYO companies and several stakeholders with whom we spoke preferred the current arrangement over adopting an alternative structure for the program. Representatives of some WYO companies said the current approach is predictable. This stability could continue to encourage WYO participation. 
However, a few stakeholders and representatives of a few WYO companies said costs for WYO companies had increased with recent legislation, which could discourage WYO participation in the future. Under the DSA contract, FEMA may direct changes within the general scope of the contract regarding the description of services to be performed, time of performance, and the place of performance of the services, but it must compensate the contractor for these changes. For example, if there is a change in law or regulation after the execution date of the contract that affects the contractor’s performance of the services, FEMA must compensate the contractor, through an equitable adjustment, for the changes. We discuss federal contract requirements and differences between the DSA contract and the WYO arrangement in more detail in appendix II. While FEMA uses proxies to compensate WYO companies, it compensates the DSA based on a predominantly fixed-price contract (tied to a fixed-price per policy, based on policy type). Our review of contract modifications showed an example in which the DSA sought equitable adjustments from FEMA for changed work caused by implementation of the Biggert-Waters Act and the Homeowner Flood Insurance Affordability Act of 2014 (HFIAA). In the modifications we reviewed, the DSA generally was compensated for its estimated additional costs imposed by the change if it could prove the changes affected the work under the terms of the original contract. For example, in December 2014 FEMA equitably adjusted the DSA contract to pay an additional $830,070 to implement the Biggert-Waters Act and $125,531 to implement HFIAA, which repealed and modified certain provisions of the Biggert-Waters Act. WYO companies also were affected by these changes but representatives of two WYO companies and a stakeholder said WYO companies were not able to request additional compensation to recoup additional costs. 
In other comments about the current arrangement, FEMA officials and representatives of one WYO company said FEMA oversight of vendors that administer policies was needed. FEMA’s current oversight processes do not include direct oversight of vendors. According to FEMA officials, nine vendors serviced about 85 percent of NFIP policies as of May 2015. One stakeholder—a vendor—noted that FEMA auditors frequently visited the vendor to conduct triennial claims operation reviews and biennial financial audits of the WYO companies that the vendor serviced. FEMA officials noted that the agency’s relationship is with the WYO company and, therefore, its oversight was specific to WYO companies and did not include any requirements for vendors. Three potential alternatives to the current structure for the WYO program each involve trade-offs, although WYO company representatives and stakeholders generally preferred the third alternative that would maintain a WYO network. All three potential alternatives involve FEMA contracting with participating companies (WYO companies or vendors), a status that most WYO company representatives cited as creating more regulatory burden because of federal contract requirements. (We discuss federal contract requirements and the views of WYO companies about the program being premised on contracting in more detail in app. II.) More specifically, we identified the following three potential alternatives (see fig. 1): Alternative 1: FEMA contracts with one or more insurance companies. FEMA would solicit offers for a contract with one or more insurance companies to sell and service flood policies and adjust claims. Alternative 2: FEMA contracts with one vendor. FEMA would solicit offers for a contract with a flood insurance vendor to service flood policies. The arrangement would be similar to the NFIP Direct program. The vendor would sell flood insurance policies through independent insurance agents, and insurance companies would not be involved. 
Alternative 3: FEMA contracts with multiple vendors and maintains the WYO network. The WYO companies would sell flood policies, while one or more vendors would service the policies. FEMA would solicit offers for contracts from multiple flood insurance vendors to service flood policies. Insurance companies that wanted to sell flood insurance would contract with one or more of the vendors to service flood policies sold by insurance company agents. Because FEMA would pay vendors to administer the flood policies, participating insurance companies would not incur any operational expenses for their flood line; rather, FEMA would pay the insurance companies a sales bonus for performance. We previously reported that the three alternatives had advantages and disadvantages in terms of the potential impact on the basic operations of administering flood insurance policies and adjusting claims, as well as on FEMA’s oversight of the program and its contractors. In the following analysis, we discuss the trade-offs of each alternative based on four factors that we identified: the cost to WYO companies, oversight by FEMA, market penetration, and WYO company participation. Alternative 1, in which FEMA would contract with one or more insurance companies to sell and service flood policies and adjust claims, would maintain the WYO company network to some extent but likely would involve fewer participating WYO companies (see table 10). Some stakeholders said that many current WYO companies would elect not to participate in a bid process because they opposed becoming federal contractors. However, representatives of one WYO company said that by not participating, these companies would lose a competitive advantage. That is, offering flood insurance in addition to home, life, and automobile insurance allows participating multiline insurers to address multiple insurance needs of their customers. 
Representatives of another WYO company said that WYO companies with in-house servicing capabilities would have a competitive advantage over other companies that use third-party vendors. Fewer WYO companies could represent either an advantage or a disadvantage for FEMA. Oversight might be easier than that required for the approximately 75 WYO companies in the current arrangement as of September 2016. Representatives of one WYO company said FEMA could collaborate more closely with WYO companies if fewer were involved in the program. However, one stakeholder said overseeing federal contracts could require expanded oversight processes and additional resources from the agency. Responding to large loss events could be more challenging with fewer WYO companies. Furthermore, a change in the composition of WYO companies could affect market penetration. (We discuss geographic concentration of market share for WYO companies later in this report.) Alternative 2, in which FEMA would contract with one vendor to service policies and sell them via independent insurance agents, similar to the NFIP Direct program, largely would eliminate insurance companies’ involvement in NFIP (see table 11). Representatives of one WYO company said transitioning to this model would be a step backward for the WYO program, which evolved from a single entity in the 1980s. In addition, representatives of another WYO company pointed out that no single insurer or vendor had the infrastructure needed to deliver NFIP coverage on such a large scale. Similar to Alternative 1, in which FEMA would contract with one or more insurance companies, representatives of a few WYO companies and a stakeholder said handling a large storm event could be even more challenging for a single entity and could have a negative effect on the customer experience, both generally and after large loss events.
Furthermore, according to representatives of another WYO company and a few stakeholders, selling policies through independent agents only, rather than through independent agents and the network of agents affiliated with WYO companies currently in the program, could adversely affect market penetration. Lastly, representatives of some WYO companies said competition could be an issue under this option. For example, if one vendor won a long-term contract, the companies not selected might not maintain the ability to service the flood business, which could create a cycle in which the same vendor has a competitive advantage and is repeatedly selected. Many stakeholders generally said that Alternative 3, in which FEMA would contract with multiple vendors (to service NFIP policies) and maintain the WYO network (to sell NFIP policies), was the most appealing option of the three alternatives we identified because it would involve multiple vendors and maintain the existing WYO network (see table 12). However, this option also has significant trade-offs. This arrangement would maintain competition among vendors and WYO companies, but could lead to declines in customer service. Representatives of a few WYO companies said that by having FEMA set requirements for vendors that deliver customer service—rather than having WYO companies contract with vendors as is the current practice—WYO companies would have less control over customer service quality and could face reputational risks. However, according to representatives of two WYO companies and a stakeholder, competition among participating vendors could drive down program costs or improve customer service quality. Some WYO companies and stakeholders considered the possible impact on responses to large loss events and effects on customer service quality as important factors in evaluating potential changes to the WYO program. 
As mentioned previously, each alternative we identified could involve a decrease in the number of participating WYO companies. Representatives of some WYO companies and a few stakeholders said decreasing the number of WYO companies could negatively affect customer service and market penetration. Most WYO company representatives we interviewed preferred the current arrangement to any of the potential alternatives, while most stakeholders did not state a preference between the current arrangement and the alternatives we identified. Many WYO company representatives and several stakeholders provided suggestions for improving the current arrangement.

Improve guidance for WYO companies. Representatives of several WYO companies said better communication was needed from FEMA, including following large loss events. For example, representatives of one WYO company and a stakeholder said that FEMA should post questions from the companies and the agency’s responses online. This would help standardize the information that WYO companies received and address the problem of getting different answers from different FEMA officials through more informal communication channels. According to FEMA officials, the agency plans to create standards-based guidance for WYO companies and reduce the amount of prescriptive guidance it provides to them.

Simplify the program. Some WYO company representatives and some stakeholders said NFIP coverage is more complicated to write and adjust than other property and casualty insurance coverage. Several suggested that FEMA take steps to make it easier for agents to write policies and adjust claims. According to agency officials, FEMA planned to enhance the consistency and simplicity of the NFIP product and simplify NFIP policy language within the current legislative framework, among other changes, during 2016.

Reconsider agent commissions.
As discussed previously, based on our data analysis and interviews with WYO companies, some WYO companies pay more to agents than the 15 percent of net written premiums that FEMA provides in compensation. Some WYO company representatives and two stakeholders said increases in agent commissions led to higher costs for WYO companies. Among these stakeholders, one vendor said that FEMA should develop better incentives for insurance agents to address this issue and increase market penetration. For example, representatives said that FEMA could base agent compensation on the percentage of homeowners insurance policyholders that have flood insurance or on other metrics. In addition, they said FEMA could standardize agent commissions so that independent agents would focus more on selling new policies rather than transferring NFIP policies from one WYO company to another that pays a higher commission. As previously discussed, FEMA is currently developing a new compensation methodology through rulemaking but could not provide a timeline for when the rulemaking would be complete.

Provide vendor oversight. FEMA officials said it was widely acknowledged that FEMA must address its lack of vendor oversight, and said the agency was taking steps to determine how to address this issue in any changes to its WYO program oversight. In July 2015, the agency began requesting that WYO companies submit, through their vendors when applicable, sample files demonstrating the implementation of NFIP program changes 30 days before a program change became effective. While not direct oversight of vendors, FEMA officials stated that this change was part of the agency’s efforts to better ensure that system updates for implementing NFIP program changes were properly implemented. As of July 2016, FEMA officials had not identified any other plans for addressing vendor oversight.

Other suggestions.
In addition to suggestions on ways to improve the WYO program, two WYO companies and some stakeholders suggested other ways to improve NFIP. For example, representatives of two WYO companies and some stakeholders suggested encouraging private-sector participation in flood insurance (including eliminating a noncompete clause for WYO companies from the current arrangement, discussed later). In addition, one stakeholder suggested making flood coverage a mandatory component of homeowners insurance, establishing a different scale for quantifying flood risk, expanding policy choices through NFIP or private-sector coverage, and more closely coordinating NFIP and disaster assistance. FEMA officials told us they plan to reexamine and improve the WYO arrangement to allow for greater flexibility in the relationship between FEMA and WYO companies. In May 2016, FEMA issued a proposed rule to remove the WYO arrangement from regulation so that it could make operational adjustments and corrections to the arrangement more efficiently. FEMA officials told us that the agency does not plan to make changes to the arrangement for fiscal year 2017. Additionally, several stakeholders and WYO company representatives with whom we spoke suggested other possible alternative structures for the WYO program. These included increasing requirements for WYO companies, removing a noncompete clause in the WYO arrangement, and adopting the federal crop insurance program model, which shares some similarities with NFIP but has some notable differences.

Limiting WYO participation or increasing WYO company requirements. Representatives of several WYO companies suggested that maintaining the current WYO arrangement but limiting the number of WYO companies allowed to participate was another option. Under this option, according to WYO representatives, WYO companies would not necessarily become federal contractors, but would compete, in a sense, for available spots in the program.
FEMA officials said adding other requirements for WYO companies—rather than determining the number of WYO companies allowed to participate—would be another way to achieve fewer participating companies.

Removing noncompete clause. Three stakeholders, including two industry groups representing insurance companies and a vendor, said removing a noncompete clause from the arrangement (which generally prevents WYO companies from selling private flood policies) would encourage continued participation in the program and also encourage greater private-sector involvement in insuring flood risk. The noncompete clause was also cited as a potential barrier to increased use of private flood insurance by various industry stakeholders with whom we spoke as part of work we completed in July 2016 on private-sector involvement in flood insurance.

Adopting crop insurance model. One stakeholder suggested the federal crop insurance model as a possible alternative structure for the WYO program. Similar to the agreements between FEMA and WYO companies, companies participating in the crop insurance program—17 as of September 2016—have a 1-year agreement with the Federal Crop Insurance Corporation to sell and service policies. The crop insurance agreement is not considered a contract for the purposes of the Federal Acquisition Regulation. But unlike in the WYO program, these companies share a percentage of the risk of loss (and opportunity for gain), and the Department of Agriculture reinsures their losses, a significant structural difference between the two programs. The Federal Crop Insurance Corporation accounted for about 1.1 million policies and $9.26 billion in premiums written as of October 2016, whereas, according to the most recent data available, NFIP accounted for 5.1 million policies and about $3.4 billion in federal flood earned premiums.
Similar to the WYO arrangement, companies in the crop insurance program receive a percentage of the premium on policies sold to cover the administrative costs of selling and servicing these policies. In turn, insurance companies use this money to pay commissions to their agents who sell the policies and fees to adjusters when claims are filed. Unlike NFIP, the Federal Crop Insurance Corporation requires that companies submit expense amounts on a standard form, but these amounts are not audited. The Department of Agriculture considers the expense information when it renegotiates its standard agreement with insurers. Our analysis of three potential alternatives to the current WYO arrangement found that each alternative could decrease the number of participating WYO companies. We analyzed NFIP policy data to understand the geographic concentration of WYO company market share under the current arrangement. Specifically, we analyzed residential policy data to understand the geographic concentration of residential NFIP coverage and the role that large and small writers of NFIP coverage and the DSA played in different states and counties. We found that large WYO companies wrote the majority of NFIP residential policies across states and counties (see fig. 2). We considered the top 10 companies in terms of NFIP market share in 2014 to be large WYO companies. Overall, large WYO companies accounted for the largest share of written NFIP residential policies across states, territories, and the District of Columbia (70 percent), while small WYO companies and the DSA accounted for smaller shares of the market (16 and 14 percent, respectively). At the state level, large WYO companies wrote more than half of all NFIP residential policies in every state, while the share of policies written by small WYO companies (2 percent–38 percent) and the DSA (4 percent–28 percent) varied more.
At the county level, we found that large WYO companies wrote more than half of all NFIP residential policies in 83 percent of counties across the states, territories, and the District of Columbia. See appendix III for additional analysis. FEMA has yet to implement Biggert-Waters Act requirements to develop a methodology for compensating WYO companies using actual flood insurance expenses. For example, FEMA has not completed the rulemaking process, and we found that the flood insurance financial data WYO companies reported to NAIC are inconsistent, which limits the data's usefulness to FEMA in setting compensation rates. Additionally, FEMA currently does not systematically consider actual flood expenses and profit when establishing WYO compensation, and has yet to compare WYO companies' actual expenses and compensation. As we recommended in 2009, FEMA should (1) determine in advance the amounts built into the payment rates for estimated expenses and profit; (2) annually analyze actual expenses and profit in relation to the estimated amounts used in setting payment rates; and (3) consider the results of the analysis of payments, actual expenses, and profit in evaluating methods for paying WYO companies. Additionally, FEMA should (4) take actions to obtain reasonable assurance that flood insurance expense data reported to NAIC can be considered in setting payment rates and (5) develop data analysis strategies to annually test the quality of flood insurance data the companies report to NAIC. Fully addressing these recommendations will help FEMA meet the Biggert-Waters Act requirement to develop a methodology for determining appropriate compensation for WYO companies that uses the companies' actual flood expenses. FEMA is still in the process of revising its compensation methodology. Based on our analysis, how a WYO company operates affects its expenses and profits.
For example, company-specific factors such as compensating independent agents to sell policies or third-party vendors to service policies, and the manner in which a company allocates overhead expenses, can result in varying expenses and profit. Gaining such an understanding of the WYO companies' operations, which can contribute to year-to-year fluctuations in expenses and profit, would allow FEMA to more effectively revise its compensation methodology. Moreover, this understanding, coupled with improved data on WYO company expenses, also would facilitate any future consideration FEMA might give to alternative structures for the WYO program. Finally, considering that the compensation of WYO companies is a significant part of the total premiums policyholders pay, FEMA may seek to achieve the program's objective of making flood insurance available at affordable rates in part by establishing reasonable compensation rates that appropriately consider WYO company expenses, profits, and operating characteristics. To improve the transparency and accountability of the compensation paid to WYO companies and set appropriate compensation rates, the FEMA administrator should take into account WYO company characteristics that may affect companies' expenses and profits when developing the new compensation methodology and rates. We provided a draft of this report to FEMA within the Department of Homeland Security, NAIC, and FIO within the Department of the Treasury for review and comment. DHS and NAIC provided technical comments, which we incorporated, as appropriate. DHS also provided a written response, reproduced in appendix IV, in which FEMA concurred with our recommendation and agreed that fully understanding the characteristics of the insurance companies that participate in the WYO program can help in determining compensation.
FEMA responded that it intends to comply with the rulemaking requirement of section 224 of the Biggert-Waters Act and, when completed, will implement a new compensation methodology to track, as closely as practicably possible, the actual expenses of the WYO companies. Agency officials noted that as FEMA must implement this recommendation via rulemaking, it is unable to provide more specific information or a time frame at this time. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to DHS, NAIC, and Treasury, and interested congressional committees and members. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives in this report were to describe the (1) Federal Emergency Management Agency's (FEMA) current compensation practices for Write-Your-Own (WYO) companies and the extent to which FEMA revised its practices in response to the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act); (2) information on over- and underpayments of National Flood Insurance Program (NFIP) policy claims; and (3) the trade-offs of selected potential alternatives to FEMA's current arrangement with WYO companies for selling and servicing flood insurance policies. To address all three reporting objectives, we reviewed our prior reports and reports from the Office of Inspector General (OIG) of the Department of Homeland Security; relevant laws and regulations; and FEMA documentation and guidance.
We also interviewed officials from FEMA and representatives from 10 WYO companies with varying NFIP premium bases. Specifically, we selected a nongeneralizable, purposive sample of 10 WYO companies based on net premiums written, to capture companies with a large market share of premiums written and to obtain the opinions of different-sized WYO companies on their involvement in NFIP. Also, to obtain a broader range of perspectives, we included two WYO companies in this group of 10 because they did not use subcontractors (vendors) to service policies. This was the first group of 10 WYO companies we selected. We later identified a second and third group of 10 WYO companies to address other aspects of our reporting objectives. For our first objective, we reviewed the Biggert-Waters Act, other laws and regulations relevant to FEMA's compensation practices, and FEMA documentation, such as WYO Bulletins (which FEMA publishes to inform WYO companies, and the public, of updates or changes to NFIP, including compensation practices). To identify any changes FEMA made to its compensation methodology since our August 2009 report, we reviewed WYO Company Bulletins issued between January 2008 and August 2016. We also obtained and reviewed FEMA's compensation packages for WYO companies for fiscal years 2010–2016. To understand the status of FEMA's implementation of recommendations from our 2009 report and section 224 of the Biggert-Waters Act, which built on our recommendations, we interviewed FEMA officials on any steps the agency had taken to improve the quality of WYO company expense data and on its progress in implementing related Biggert-Waters Act requirements. We also interviewed National Association of Insurance Commissioners (NAIC) officials about expense data WYO companies report to NAIC.
In addition, we interviewed the first group of 10 WYO companies (discussed at the start of this appendix) on compensation issues, including how expenses were incurred and reported. To compare FEMA compensation paid to WYO companies with actual expense data WYO companies reported to NAIC, we obtained and analyzed premium, loss, and compensation data for all WYO companies for fiscal years 2008–2014 from FEMA, and premium, loss, and expense data for all WYO companies from SNL Financial and NAIC for calendar years 2008–2014. For purposes of our analysis, we retrieved federal flood line data reported to NAIC from SNL Financial. To make the FEMA and NAIC data comparable, we converted FEMA's fiscal year data to a calendar-year basis to match the period for reporting to NAIC. We also converted FEMA-reported paid losses and loss adjustment expenses to an accrual basis to be able to appropriately compare loss adjustment compensation and actual expenses. We then calculated estimated profit for each WYO company as the difference between the calendar year compensation reported to FEMA and calendar year expenses reported to NAIC. The estimated profits, calculated using the data provided by FEMA and NAIC data obtained from SNL Financial, did not correspond to our expectations of profits from our 2009 work. To better understand WYO companies' accounting and reporting of federal flood data, we made another (second) selection of 10 WYO companies that comprised the majority of net written premiums (about 60 percent), paid losses (about 52 percent), and total compensation (about 60 percent) during 2008–2014. Specifically, we selected a nongeneralizable, purposive sample of 10 WYO companies based on net premiums written during 2008–2014. We overselected WYO companies with a larger market share because of their relevance in the flood insurance market. We interviewed these WYO companies and requested and examined additional information and data they provided.
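The profit estimate described above can be illustrated with a minimal sketch: fiscal-year compensation is prorated to a calendar-year basis and netted against calendar-year expenses reported to NAIC. The company names, dollar figures, and even-proration weights (9 months of fiscal year t plus 3 months of fiscal year t+1) are illustrative assumptions, not actual FEMA or NAIC values or FEMA's actual conversion method.

```python
# Sketch of the calculation described above: convert FEMA fiscal-year
# compensation to a calendar-year basis, then estimate each company's
# pre-tax profit as compensation minus expenses reported to NAIC.
# All company names and dollar figures are hypothetical.

def fiscal_to_calendar(fy_t, fy_t_plus_1):
    """Approximate a calendar-year amount from two federal fiscal years.

    Calendar year t contains the last 9 months of fiscal year t (Jan-Sep)
    and the first 3 months of fiscal year t+1 (Oct-Dec). Even proration
    across each fiscal year is an assumption, not FEMA's actual method.
    """
    return 0.75 * fy_t + 0.25 * fy_t_plus_1

# Hypothetical compensation by fiscal year (FY t, FY t+1) and expenses by
# calendar year, in millions of dollars.
fema_compensation_fy = {"Company A": (42.0, 44.0), "Company B": (18.0, 17.0)}
naic_expenses_cy = {"Company A": 36.5, "Company B": 16.8}

estimated_profit = {}
for company, (fy_t, fy_t1) in fema_compensation_fy.items():
    compensation_cy = fiscal_to_calendar(fy_t, fy_t1)
    # Estimated (pre-tax) profit: compensation less reported expenses.
    estimated_profit[company] = compensation_cy - naic_expenses_cy[company]

print(estimated_profit)
```

With these hypothetical figures, Company A's calendar-year compensation works out to 42.5 million dollars, for an estimated profit of 6.0 million.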
We used this additional information and data to evaluate the causes of differences in reported premiums and losses and estimate the effect those differences had on the companies' compensation and expenses. We also used this information and data to estimate various underwriting and loss adjustment expenses to corroborate statements the companies made to us regarding the amount they pay their vendors and adjusters. We analyzed the companies' commission, underwriting, and loss expense ratios, profits as a percentage of total compensation, and reported loss and loss adjustment expense reserves to corroborate statements the companies made regarding changes in their accounting and reporting practices between 2008 and 2014. Based on the additional information and data provided and our analyses, we made adjustments to the expenses reported to NAIC for unreported expenses, reclassifications of expenses, and the effects of different loss adjustment expense estimates and recalculated estimated profit (on a pre-tax basis) for these 10 WYO companies for calendar year 2014. Our analysis and ability to estimate WYO company expenses and profit were subject to a number of limitations. First, the adjustments we made to the companies' reported expenses were based on information provided by the WYO companies. WYO company representatives provided supplemental financial data and made various representations to us, and while we reviewed the data and representations for reasonableness in relation to other information we had, we did not obtain all evidence necessary to fully validate this additional information. Second, we initially sought information from the 10 selected WYO companies that would allow us to compare compensation and actual expenses and estimate profit for each company for the years 2008–2014.
However, due to challenges in obtaining sufficient information and documentation from all companies to support their accounting and reporting practices for each of those years and assess the consistency of such reporting from company to company and year to year, we limited our calculation of profit to a single year, 2014. Further, as our 2014 estimates of company expenses and profits are an outcome of our effort to understand the issues surrounding the inconsistent financial reporting by selected WYO companies and the various factors that can affect company expenses and profit, these estimates should not be taken to be a static or predictable indicator of WYO company profits. Third, two WYO companies stated that only expenses that could be specifically identified as flood-related, including vendor fees, were reported to NAIC on their Insurance Expense Exhibits. One WYO company said that overhead expenses were not allocated to the federal flood line because this line of business was not considered significant relative to the company's other property insurance lines. We did not obtain information from the companies that would allow us to assess the significance of these unallocated overhead expenses to our estimates of flood line profits. Fourth, some of the companies we reviewed use affiliated companies as vendors to service flood policies. As information on the affiliated companies' activities and profits was not available to us, we could not determine the extent to which intercompany profits were reflected in the expenses reported by these WYO companies and the extent to which fees charged by these affiliated vendors might have exceeded what otherwise would be charged by a third-party vendor.
We assessed the reliability of the FEMA data by reviewing audit documentation from prior GAO engagements, as well as audit documentation from, and related reports issued by, the Department of Homeland Security's external auditor supporting its work on WYO program financial data included in the department's fiscal year 2014 financial statements. In addition, we performed electronic and manual data testing for missing data, outliers, and other obvious errors; recalculated various types of WYO compensation paid to WYO companies; and spoke with knowledgeable agency officials about the data. For the NAIC data, we reviewed related documentation and interviewed knowledgeable officials. We assessed the reliability of the SNL Financial data by comparing them with NAIC data to ensure accuracy and consistency. We confirmed the accuracy of the FEMA and NAIC data for the 10 selected companies by requesting additional information from the companies. However, we did not audit whether the FEMA and NAIC data were in accordance with financial reporting standards and requirements. We determined that these data were sufficiently reliable for the purpose of assessing the alignment of compensation amounts with actual expenses and for estimating the profits of a selection of WYO companies. For our second objective, we reviewed data from FEMA documenting its WYO company oversight processes. The data we reviewed pertain to the triennial claims operation reviews, improper payment reviews (which the agency conducts as required by the Improper Payments Information Act of 2002, as amended), reinspection of claims, and biennial audits. We assessed the reliability of the data by reviewing related FEMA documentation on the data and interviewing knowledgeable agency officials. We determined that these data were sufficiently reliable for the purpose of reporting on FEMA's oversight of claims and the results of these reviews.
We also reviewed other FEMA documentation on its oversight of the claims process (such as FEMA's Financial Control Plan and Financial Control Plan Monitoring Procedures) to understand FEMA's oversight processes; a recent Senate Banking Committee investigation report; and an OIG report that discussed issues associated with over- and underpayment of claims. We interviewed FEMA officials about the agency's oversight of the claims process, potential causes for over- and underpayments, and how they are resolved. We also interviewed the first group of 10 selected WYO companies as well as stakeholders on their views about the over- and underpayment of claims. Specifically, we selected and interviewed 14 stakeholders representing a variety of organization types with knowledge of flood insurance and the WYO program. These stakeholders included three vendors with whom WYO companies contract and officials from 11 organizations, comprising industry groups that represent insurance companies and agents, and academics. We interviewed officials from these entities to obtain diverse perspectives on the possible extent and potential causes of over- and underpayments of claims. Our work focused on over- and underpayments and did not examine specific claims related to any specific event. For our third objective, we reviewed a prior GAO report and conducted a literature review to identify potential alternative approaches to FEMA's agreements with WYO companies for selling and servicing flood insurance policies and to examine trade-offs for these approaches. We targeted our literature review to identify academic research and published studies on flood insurance broadly, as well as those that discussed alternatives to the WYO arrangement. Our query identified around 60 document summaries, from which we identified 19 for further analysis. All 19 provided background information on flood insurance and NFIP, but none presented clear alternatives to the WYO arrangement.
From our prior work, we identified three potential alternative approaches to the current WYO arrangement: (1) FEMA contracts with one or more insurance companies; (2) FEMA contracts with one vendor; or (3) FEMA contracts with multiple vendors and maintains the WYO network. After initial interviews with WYO company representatives and stakeholders indicated that alternatives to the current arrangement could decrease the number of participating WYO companies, we analyzed FEMA NFIP policy data to understand the geographic concentration of NFIP policies written for homeowners by WYO companies. Our analysis looked at policy data for residential policies under the current WYO arrangement and the geographic concentration of market share for large and small writers of NFIP coverage and the Direct Servicing Agent (DSA) in different states and counties. As part of the analysis, we reviewed the proportion of residential policies written by WYO companies and the DSA in counties by population, based on county population categories used by the Department of Agriculture's Economic Research Service. For purposes of this analysis, we considered large companies to be those among the 10 insurance groups whose members wrote the greatest amount of NFIP coverage in 2014, the most recent year of available data (this was our third group of 10 WYO companies). The methodology for selecting these 10 WYO companies differed from the methodologies for the previous two selections discussed above. This third group of 10 insurers, which we identified as large WYO companies, accounted for an 80 percent cumulative share of the federal flood market in 2014 (not including DSA policies), with individual market shares ranging from approximately 2 percent to 20 percent. We considered all other insurers to be small WYO companies, with market shares ranging from 0 to 1.5 percent and a cumulative market share of 20 percent.
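The large/small classification described above can be sketched as follows: insurers are ranked by net written premiums, the top 10 are treated as large WYO companies, and each group's cumulative market share is computed. The insurer names and premium figures are hypothetical and do not reproduce the report's actual 80/20 split.

```python
# Sketch of the large/small classification described above: rank insurers by
# net written premiums, treat the top 10 as "large" WYO companies, and
# compute each group's cumulative market share. All figures are hypothetical.

# Hypothetical 2014 net written premiums by insurer (millions of dollars).
premiums = {
    "Insurer A": 900, "Insurer B": 700, "Insurer C": 600, "Insurer D": 500,
    "Insurer E": 400, "Insurer F": 350, "Insurer G": 300, "Insurer H": 250,
    "Insurer I": 200, "Insurer J": 180, "Insurer K": 120, "Insurer L": 90,
    "Insurer M": 70, "Insurer N": 50, "Insurer O": 30,
}

total = sum(premiums.values())
ranked = sorted(premiums, key=premiums.get, reverse=True)

# The top 10 insurers by premiums are classified as large; the rest as small.
large, small = ranked[:10], ranked[10:]

large_share = sum(premiums[c] for c in large) / total
small_share = sum(premiums[c] for c in small) / total
print(f"large: {large_share:.0%}, small: {small_share:.0%}")
```

With these illustrative figures, the large group holds roughly 92 percent of the market; the report's actual 2014 split was 80 percent large and 20 percent small.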
We tested the reliability of the NFIP policy data by reviewing related documentation, conducting electronic and manual data testing, and reviewing prior GAO assessments of the data. We included only residential NFIP policies to focus our analysis on market penetration related to homeowners. In addition, we excluded from our analysis 1,506 policies whose geographic location could not be determined from FEMA's data. These policies accounted for 0.03 percent of the total number of policies in the data set. We determined that these data were sufficiently reliable for the purpose of identifying the geographic location of policies written by WYO companies and the DSA. In addition, we analyzed the proportion of NFIP residential policies written by WYO companies and the DSA on a statewide basis for the five states with the highest total NFIP payments since 1978. Based on FEMA data as of June 30, 2016, the five states with the highest total loss payments were (in order of magnitude) Louisiana, Texas, New Jersey, New York, and Florida. We assessed the reliability of these data by reviewing FEMA data definitions and previous GAO assessments of the data. We determined that these data were sufficiently reliable for the purpose of identifying states with the highest total NFIP loss payments. We also compared requirements of NFIP's WYO arrangement and FEMA's DSA contract with some federal contract requirements. As previously noted, we included several vendors among the 14 stakeholders with flood insurance expertise we selected and interviewed to understand the trade-offs of the program being run by one vendor (the second alternative approach we previously identified). Furthermore, we compared the general structure of the insurance arrangement under the Department of Agriculture's Federal Crop Insurance Corporation with the WYO arrangement, based on our prior work reviewing the crop insurance program.
We obtained perspectives from FEMA officials, representatives of WYO companies (those selected based on net premiums written), stakeholders with flood insurance expertise, and the Federal Insurance Office of the Department of the Treasury on potential alternative structures for the WYO program. We analyzed the trade-offs of the alternatives based on four primary factors: potential costs to participating insurers, FEMA oversight, market penetration, and WYO company participation. We identified these four factors based on our prior work evaluating these arrangements and initial interviews with industry participants. We also obtained their perspectives on other possible improvements to NFIP. We conducted this performance audit from April 2015 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In 2009 and again for this report, we identified potential alternative administrative structures for the National Flood Insurance Program's (NFIP) Write-Your-Own (WYO) program that could replace the current WYO arrangement, each of which involves participating companies (WYO companies or vendors) becoming federal contractors. In the WYO program, private insurers sell and service flood insurance policies and adjust claims for NFIP under an arrangement with the Federal Emergency Management Agency (FEMA). In general, executive agencies must award contracts using full and open competition.
In addition, contracts generally must include certain clauses related to contract administration, such as those that provide the government the ability to terminate contracts, as well as those required by statute and executive orders that implement U.S. policy. The following analysis discusses requirements that generally apply to contracts under the Federal Acquisition Regulation (FAR) and how they compare to the WYO arrangement and FEMA's contract with the Direct Servicing Agent (DSA). The DSA is a FEMA contractor that writes NFIP policies and provides an alternative when a WYO company is unable or unwilling to write a flood insurance policy. The analysis also includes the views of WYO companies about changing the WYO program arrangement to a contract subject to the FAR. Open competition. Executive agencies generally must seek to obtain "full and open competition" in the contract award process (subject to exception). This means that all responsible sources are permitted to compete. The DSA selection process includes full and open competition, but insurance companies do not compete to participate in the WYO program. Instead, companies must apply to participate, and FEMA approves the participation of companies that meet certain criteria, rather than selecting companies based on their bids for a contract. Requirements for a company to participate in the WYO program include experience in property and casualty insurance lines, good standing with state insurance departments, and the ability to meet NFIP reporting requirements to adequately sell and service flood insurance policies. FEMA officials told us that the agency does not track how many companies failed to gain approval to participate in the program, but noted that many companies failed to obtain approval because they did not meet the requirement for having 5 years of experience as a property and casualty insurer. Bid protests and dispute processes.
Federal acquisition regulations and statutes provide for bid protests, through which interested parties can, for example, protest the award of a contract (e.g., if company A wins the contract, company B can challenge the award). In addition, federal acquisition statutes and regulations provide procedures and requirements for resolving claims and disputes that arise during contract performance. DSA contract awards thus can be protested. The current WYO arrangement does not include a process to protest FEMA's selection of WYO companies, but if any misunderstanding or dispute arises between a WYO company and FEMA about any factual issue under the arrangement or in relation to FEMA's nonrenewal of a WYO company's participation in the program, the company can submit the dispute to arbitration. Government as a party to a contract. Federal contracts generally provide an agency the right to unilaterally terminate the contract, either for the convenience of the government or for the default of the contractor. Under a termination for convenience, the government can completely or partially terminate the work under a contract when it is in the government's interest. Agencies generally can make certain unilateral modifications to a contract during performance as long as those changes fall within the contract's scope. The DSA contract allows the Department of Homeland Security to terminate the contract if it would be in the best interest of the government, in the event that a contractor discovers a conflict of interest or intentionally did not disclose a conflict of interest. In contrast, the WYO arrangement does not explicitly provide agency control over termination, but in the event that a company is unable or otherwise fails to carry out its obligations under the arrangement, the company must transfer the NFIP policies it issued to FEMA or propose that another WYO company assume responsibility for those policies. Contract type and contractor costs.
Depending on the contract type, the government may or may not have insight into contractor costs. For example, a cost-reimbursement type contract, under which the government pays for allowable incurred costs to the extent prescribed in the contract, can be used only if the contractor's accounting system is adequate for determining costs applicable to the contract. For fixed-price type contracts, where full responsibility for all costs is placed on the contractor, the government would not have visibility into contract costs. For example, the DSA has a hybrid firm-fixed-price and time-and-materials contract with FEMA that includes a 1-year base period and four 1-year option terms. FEMA pays a fixed price per policy on a monthly basis based on the type and number of policies the company services (standard, group flood, and severe repetitive loss), as long as the company meets the performance requirements included in the contract. FEMA also pays the DSA for line items based on the amount of time and materials the company spends on certification and accreditation activities. The contract allows the contractor to recoup cost increases stemming from changes to the contract. For example, the DSA sought and obtained a series of payments from FEMA for extra work the contractor conducted as part of implementing the Biggert-Waters Act and the Homeowner Flood Insurance Affordability Act. As discussed in more detail in the report, the WYO arrangement does not prescribe detailed cost and pricing guidance to companies but generally compensates WYO companies using proxies to determine the rates at which it pays them. For example, the arrangement provides that WYO companies may retain 15 percent of net written premiums as the allowance for insurance agent commissions. Ethical practices and statutory compliance. Depending on the type of contract, there are also a variety of requirements imposed under statutes and executive orders that can have major effects on business practices.
These include provisions related to bribery, false claims, false statements, conflicts of interest, and kickbacks; lobbying restrictions; equal opportunity and affirmative action requirements; subcontracting and sourcing; small business and veteran participation; and compliance with labor standards and drug-free workplace requirements. For example, the DSA contract requires the company to use Department of Labor wage determinations and outlines the types of benefits employees must receive, including health and welfare benefits, paid vacation, and paid holidays. The current WYO arrangement does not speak to all of the factors outlined above, but it provides that a WYO company shall not discriminate against any applicant for insurance because of race, color, religion, sex, age, handicap, marital status, or national origin. Representatives of seven of the 10 WYO companies we interviewed (for all three objectives, as described in app. I) opposed WYO companies becoming federal contractors, citing burdensome requirements. Of the other three, one said the costs of becoming a federal contractor would depend on the structure of the contract, and the other two did not comment. Representatives of one WYO company said a positive aspect of having a contract is that it could provide a mechanism for establishing an annual maximum on FEMA's possible changes to the contract stemming from NFIP regulatory changes. This could allow WYO companies or vendors to recoup some costs of implementing unexpected changes to the program. The DSA contractor has the ability to recoup the expenses it incurs in response to changes, for example to law or regulation, that affect its performance of the services under the contract.
FEMA officials said WYO companies have opposed structuring the WYO program as a federal contractual relationship between FEMA and WYO companies since the program was established, and said a federal contract might not be compatible with the structure of the insurance industry and how WYO companies deliver coverage. In addition, they said that as a federal contractor, a WYO company or vendor would need to convert its information technology systems to accommodate new federal security requirements, which would be time consuming and costly. Stakeholders who commented about the use of a federal contract for the WYO program had mixed perspectives. We selected and interviewed 14 stakeholders based on their knowledge of flood insurance and the WYO program. One stakeholder said FEMA’s oversight might improve because the agency would have more authority to direct how WYO companies administered claims. One stakeholder—a vendor—said that although the current arrangement is not a federal contract, it can feel like a contractual agreement for WYO companies because the financial control plan outlines requirements for participating companies. Another stakeholder said that use of a federal contract for the WYO program could create more stringent requirements for WYO companies, which could lead to declines in their participation and NFIP market penetration and result in the DSA having to administer more policies. Our analysis of three potential alternatives to the current WYO arrangement found that each alternative could decrease the number of participating WYO companies. We analyzed NFIP policy data to understand the geographic concentration of WYO company market share under the current arrangement, including what proportion of NFIP residential coverage large and small WYO companies and the Direct Servicing Agency (DSA) wrote in counties and in states with high NFIP losses. 
We included only residential NFIP policies in our analysis to focus on market share related to homeowners. We classified WYO companies as large or small, with large companies being the top 10 WYO companies in terms of NFIP market share in 2014. The DSA is a Federal Emergency Management Agency (FEMA) contractor that writes NFIP policies and provides an alternative when a WYO company is unable or unwilling to write a flood insurance policy. We compared the share of NFIP residential policies written by WYO companies nationwide to those written by the DSA. As shown in figure 3, in more than 83 percent of counties where residential NFIP coverage was present, WYO companies wrote more than half of all policies. In contrast, the DSA wrote at least 50 percent of NFIP residential policies in 1.8 percent of counties, as shown in figure 4. In 17 counties across 11 states, the DSA wrote 100 percent of the NFIP residential policies, which accounted for 21 policies total. As shown in table 13, 81 percent of NFIP residential policies were written for properties in metropolitan counties (areas with populations of 250,000 or more). Large WYO companies accounted for the majority of the policies in states, territories, and the District of Columbia (70 percent), while small WYO companies and the DSA accounted for smaller shares of the market (16 percent and 14 percent, respectively). We also analyzed the proportion of residential policies written in counties by different categories (population and urban and rural). Large WYO companies wrote more than half of all policies in each category. The share for small WYOs ranged from 15 percent to 23 percent (with the highest share in sparsely populated rural counties) and the DSA’s share ranged from 13 percent to 18 percent in the different areas. 
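The county-level share calculation described above is straightforward to sketch. The snippet below is an illustrative reconstruction only, not FEMA's or GAO's actual code: the record layout, the writer categories ("large_wyo", "small_wyo", "dsa"), and the sample policies are all hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical policy records as (county, writer) pairs, where writer is
# "large_wyo", "small_wyo", or "dsa". Illustrative data, not NFIP data.
policies = [
    ("Orleans, LA", "large_wyo"), ("Orleans, LA", "large_wyo"),
    ("Orleans, LA", "dsa"),
    ("Cameron, TX", "small_wyo"), ("Cameron, TX", "large_wyo"),
]

def county_shares(policies):
    """Return each writer category's share of policies, per county."""
    by_county = defaultdict(Counter)
    for county, writer in policies:
        by_county[county][writer] += 1
    return {
        county: {w: n / sum(counts.values()) for w, n in counts.items()}
        for county, counts in by_county.items()
    }

shares = county_shares(policies)

# Counties where WYO companies (large + small combined) wrote more than
# half of all residential policies -- the kind of threshold used above.
wyo_majority = [
    county for county, s in shares.items()
    if s.get("large_wyo", 0) + s.get("small_wyo", 0) > 0.5
]
```

With the sample data, large WYO companies hold two-thirds of the Orleans policies, and both counties clear the 50 percent WYO-majority threshold.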
In addition to reviewing the data on a nationwide basis, we analyzed the proportion of NFIP residential policies written by WYO companies and the DSA for the five states with the highest total NFIP payments according to FEMA historical claims data since 1978. Based on FEMA data as of June 30, 2016, the five states with the highest total loss payments were (in order of magnitude) Louisiana, Texas, New Jersey, New York, and Florida. In each of these states, at least 95 percent of NFIP residential policies were located in metropolitan areas, with the majority of policies located in counties in metropolitan areas with a population of 1 million or more. In Louisiana, large WYO companies had 55 percent market share of residential policies, the DSA had 27 percent (its highest share among the five states), and small WYO companies had 18 percent. In New York and New Jersey, large WYO companies achieved their highest market share of NFIP residential policies among the five states—78 percent and 81 percent, respectively. Additionally, county-level shares for large WYO companies in New York ranged from 60 percent to 93 percent (small WYO companies had 3 percent–21 percent and the DSA had 4 percent–27 percent). In Florida, large WYO companies had 70 percent of the NFIP residential market and small WYOs had 22 percent (their highest share among the five states). Large WYO companies wrote NFIP residential coverage in all Florida counties, with county-level shares ranging from 39 percent to 88 percent (and from 11 percent to 50 percent for small WYO companies and from 2 percent to 20 percent for the DSA). In addition to the contact named above, Allison Abrams (Assistant Director); Rhonda Rose (Analyst-in-Charge); Christina S. Cantor; Heather Chartier; Pamela R. Davidson; May M. Lee; Scott E. McNulty; John Mingus; Marc W. Molino; Patricia Moye; and Barbara Roesmann made key contributions to this report. 
Private insurers (WYO companies) sell and service flood policies and adjust claims for NFIP under an arrangement with FEMA. In GAO-09-455, GAO made recommendations on FEMA's WYO compensation methodology and data quality. The Biggert-Waters Act built on these recommendations and required FEMA to develop a methodology for determining appropriate amounts WYO companies should be reimbursed. GAO was asked to review the status of FEMA's efforts. This report examines, among other issues, (1) the extent to which FEMA revised compensation practices, and (2) trade-offs of potential alternatives to the WYO arrangement. GAO reviewed laws and regulations, analyzed FEMA data and data on expenses reported to NAIC for 2008–2014 (most recent available), and interviewed FEMA and NAIC officials, stakeholders (11 organizations with flood insurance expertise, three vendors), and 10 selected WYO companies with varying NFIP premium bases. To compare FEMA compensation with actual expenses, GAO examined information on accounting and reporting practices from a second selection of 10 WYO companies (in this case, insurers within 10 insurance groups) that received about 60 percent of compensation in 2008–2014. The Federal Emergency Management Agency (FEMA) has yet to revise its compensation practices for Write-Your-Own (WYO) companies to reflect actual expenses as required by the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act), and as GAO recommended in 2009. FEMA continues to rely on insurance industry expense information for other lines of property insurance to set compensation rates for WYO companies. Efforts by FEMA, the National Association of Insurance Commissioners (NAIC)—which collects data by line of insurance from insurance companies—and the WYO companies have resulted in some improvements to financial data on National Flood Insurance Program (NFIP) expenses that WYO companies report to NAIC. 
But GAO found inconsistencies in how 10 selected WYO companies (which received about 60 percent of the compensation FEMA paid in 2008–2014) reported federal flood data to NAIC that limit the usefulness of these data for determining expenses and setting compensation rates. For example, GAO analysis showed that adjusting for inconsistencies due to unreported expenses significantly reduced WYO company profits. Consequently, without quality data on actual expenses, FEMA continues to lack the information it needs to incorporate actual flood expense data into its compensation methodology as well as determine how much profit WYO companies make and whether its compensation payments are appropriate. FEMA has not clarified what other analyses it will undertake to address GAO’s 2009 recommendations concerning data quality. GAO also found that the ways in which WYO companies operate, including how companies compensate agents and third-party vendors (with which some companies contract to conduct some or all of the management of their NFIP policies), can affect a company's expenses and profits. Considering company characteristics would allow FEMA to more effectively develop its compensation methodology and determine the appropriate amounts to reimburse WYO companies as required by the Biggert-Waters Act. According to WYO companies and stakeholders, the current WYO arrangement and three potential alternatives GAO identified all involve trade-offs. Private insurers become WYO companies by signing a Financial Assistance/Subsidy Arrangement with FEMA, and FEMA annually publishes terms for participation in the WYO program, including amounts companies will be paid for expenses. The current arrangement includes benefits for consumers from competition among approximately 75 WYO companies, but poses oversight challenges for FEMA due to the large number of companies. 
The three potential alternatives involve FEMA contracting with (1) one or more insurance companies to sell and service flood policies; (2) one vendor that would sell policies through agents, with insurance companies not involved; or (3) multiple vendors to service policies while maintaining the WYO network to market and sell flood policies. All three potential alternatives would involve FEMA contracting with either WYO companies or vendors as federal contractors, a status that most WYO company representatives cited as creating more regulatory burden because of federal contract requirements. Representatives of most WYO companies and several stakeholders GAO interviewed preferred the current arrangement because of its predictability and noted that this characteristic would continue to encourage WYO company participation. GAO maintains that its 2009 recommendations remain valid and will help FEMA meet Biggert-Waters Act requirements. In this report, GAO recommends that FEMA take into account company characteristics when developing the new WYO compensation methodology. FEMA agreed with the recommendation.
Federal agency collection or use of personal information is governed primarily by two laws: the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. The act describes a record as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. The act defines a “system of records” as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public through a system-of-records notice in the Federal Register that identifies, among other things, the categories of data collected, the categories of individuals about whom information is collected, the intended “routine” uses of data, and procedures that individuals can use to review and correct personally identifiable information. Several provisions of the act require agencies to define and limit collection and use of personal information to predefined purposes. For example, it requires that, to the greatest extent practicable, personal information should be collected directly from the individual when it may affect that person’s rights or benefits under a federal program. It also requires agencies to indicate whether the individual’s disclosure of the information is mandatory or voluntary; the principal purposes for which the information is intended to be used; the routine uses that may be made of the information; and the effects on the individual, if any, of not providing the information. 
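The distinction the Act draws, whether information is retrieved by a personal identifier, can be made concrete with a short sketch. The records and field names below are hypothetical; the point is only the access pattern, which is what the system-of-records definition turns on.

```python
# Illustrative sketch of the Privacy Act's "system of records" test:
# what matters is whether information is *retrieved by* a personal
# identifier, not merely whether the records contain one.
# Records and field names here are hypothetical.
records = [
    {"name": "Jane Doe", "ssn": "000-00-0001", "benefit": "housing"},
    {"name": "John Roe", "ssn": "000-00-0002", "benefit": "medical"},
]

# An indexed lookup keyed on a personal identifier (name or SSN) is the
# kind of retrieval that brings a group of records within the Act's
# system-of-records definition.
by_ssn = {r["ssn"]: r for r in records}
record = by_ssn["000-00-0001"]  # retrieved *by* identifier

# A scan filtered on a non-identifying attribute still touches personal
# data, but the information is not retrieved by name or identifier --
# the pattern that, as discussed later, can fall outside the Act's
# protections (as in some data-mining systems).
housing_cases = [r for r in records if r["benefit"] == "housing"]
```

Both access paths return the same personal data; only the first is retrieval by identifier in the Act's sense.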
Further, in handling information they have collected, agencies are generally required to allow individuals to review their records, request a copy of their record, and request corrections to their information, among other things. The E-Government Act of 2002 was passed, among other reasons, to enhance the protection of personal information in government information systems or information collections by requiring that agencies conduct privacy impact assessments (PIA). PIAs are analyses of how personal information is collected, stored, shared, and managed in a federal system. Title III of the E-Government Act, known as the Federal Information Security Management Act of 2002 (FISMA), established a framework designed to ensure the effectiveness of security controls over information resources that support federal operations and assets. Under FISMA, each agency is responsible for, among other things, providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency and information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency. These protections are to provide federal information and systems with integrity (preventing improper modification or destruction of information), confidentiality (preserving authorized restrictions on access and disclosure), and availability (ensuring timely and reliable access to and use of information). 
The privacy protections incorporated in the Privacy Act are based primarily on the Fair Information Practices—a set of widely recognized principles for protecting the privacy of personal information first developed by an advisory committee convened by the Secretary of Health, Education and Welfare in 1972 and revised by the Organization for Economic Cooperation and Development (OECD) in 1980. These practices underlie the major provisions of the Privacy Act and privacy laws and related policies in many countries, including Germany, Sweden, Australia, and New Zealand, as well as the European Union. They are also reflected in a variety of federal agency policy statements, beginning with an endorsement of the OECD principles by the Department of Commerce in 1981. The OECD version of the principles is shown in table 1. The Privacy Act gives the Office of Management and Budget (OMB) responsibility for developing guidelines and providing assistance to and oversight of agencies’ implementation of the act. OMB also has responsibility under the E-Government Act for developing PIA guidance and ensuring agency implementation of the PIA requirement. In July 1975, OMB issued guidance for implementing the provisions of the Privacy Act and has periodically issued additional guidance since then. OMB has also issued guidance on other data security and privacy-related issues including federal agency website privacy policies, interagency sharing of personal information, designation of senior staff responsible for privacy, data breach notification, and safeguarding personally identifiable information. Technological developments since the Privacy Act became law in 1974 have radically changed the way information is organized and shared among organizations and individuals. 
Such advances have rendered some of the provisions of the Privacy Act and the E-Government Act of 2002 inadequate to fully protect all personally identifiable information collected, used, and maintained by the federal government. For example, we reported in 2010 on privacy challenges associated with agencies using Web 2.0 technologies, such as web logs (“blogs”), social networking websites, video- and multimedia-sharing sites, and “wikis.” While the Privacy Act clearly applies to personal information maintained in systems owned and operated by the federal government, agencies often take advantage of commercial Web 2.0 offerings, in which case they have less control over the systems that maintain and exchange information, raising questions about whether personal information contained in those systems is protected under the act. While OMB subsequently issued guidance to federal agencies for protecting privacy when using web-based technologies, we reported in June 2011 that agencies had made mixed progress in updating privacy policies and assessing privacy risks associated with their use of social media services, as required by OMB’s guidance. A number of agencies had not updated their privacy policies or conducted PIAs relative to their use of third-party services such as Facebook and Twitter. Accordingly, we recommended that 8 agencies update their privacy policies and that 10 agencies conduct required PIAs. Most of the agencies agreed with our recommendations; however, 5 have not yet provided evidence that they have updated their privacy policies and 4 have not yet provided documentation that they have conducted PIAs. Another technology that has been increasingly used is data mining, which is used to discover information in massive databases, uncover hidden patterns, find subtle relationships in existing data, and predict future results. Data mining involves locating and retrieving information, including personally identifiable information, in complex ways. 
In September 2011, we reported that the Department of Homeland Security (DHS) needed to improve executive oversight of systems supporting counterterrorism. We noted that DHS and three of its component agencies—U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and U.S. Citizenship and Immigration Services—had established policies that largely addressed the key elements and attributes needed to ensure that their data mining systems were effective and provided necessary privacy protections. However, we also noted, among other things, that DHS faced challenges in ensuring that all of its privacy-sensitive systems had timely and up-to-date PIAs. We recommended that DHS develop requirements for providing additional scrutiny of privacy protections for sensitive information systems that are not transparent to the public through PIAs and investigate whether the information-sharing component of a certain data-mining system, the U.S. Immigration and Customs Enforcement Pattern Analysis and Information Collection program, should be deactivated until a PIA is approved that includes the component. DHS has taken action to address both of these recommendations. Given the challenges in applying privacy laws and overseeing systems that contain personally identifiable information, the role of executives in federal departments and agencies charged with oversight of privacy issues is of critical importance. In 2008 we reported on agencies’ designation of senior officials as focal points with overall responsibility for privacy. Among other things, we were asked to describe the organizational structures used by agencies to address privacy requirements and assess whether senior officials had oversight over key functions. 
Although federal laws and OMB guidance require agencies to designate a senior official for privacy with privacy oversight responsibilities, we found that the 12 agencies we reviewed had varying organizational structures to address privacy responsibilities and that the designated senior privacy officials did not always have oversight of all key privacy functions. Without such oversight, these officials may be unable to effectively serve as agency central focal points for information privacy. We recommended that six agencies take steps to ensure that their senior agency officials for privacy have oversight of all key privacy functions. Of the six agencies to which recommendations were made, four have provided evidence that they have fully addressed our recommendations. In 2008, we issued a report on the sufficiency of privacy protections afforded by existing laws and guidance, in particular the Privacy Act, the E-Government Act, and related OMB guidance. We found that while these laws and guidance set minimum requirements for agencies, they may not consistently protect personally identifiable information in all circumstances of its collection and use throughout the federal government and may not fully adhere to key privacy principles. We identified issues in three major areas: Applying privacy protections consistently to all federal collection and use of personal information. The Privacy Act’s definition of a system of records, which sets the scope of the act’s protections, does not always apply whenever personal information is obtained and processed by federal agencies. For example, if agencies do not retrieve personal information by identifier, as may occur in data-mining systems, the act’s protections do not apply. 
We previously reported that among the 25 agencies surveyed, the most frequently cited reason for collections of records not being considered Privacy Act systems of records was that the agency did not use a personal identifier to retrieve the information (GAO, Privacy Act: OMB Leadership Needed to Improve Agency Compliance, GAO-03-304 (Washington, D.C.: June 30, 2003)). Factors such as these have led experts to agree that the Privacy Act’s system-of-records construct is too narrowly defined. An alternative for addressing these issues could include revising the system-of-records definition to cover all personally identifiable information collected, used, and maintained systematically by the federal government. Limiting the collection and use of personal information to a specified purpose. The purpose specification and use limitation principles dictate that personal information should be collected for a specified purpose and used only in ways consistent with that purpose. Yet current laws and guidance impose only modest requirements for describing the purposes for personal information and limiting how it is used. For example, agencies are not required to be specific in formulating purpose descriptions in their public notices. While purpose statements for certain law enforcement and antiterrorism systems might need to be phrased broadly enough so as not to reveal investigative techniques or the details of ongoing cases, very broadly defined purposes could allow for unnecessarily broad ranges of uses, thus calling into question whether meaningful limitations had been imposed. Examples of alternatives for addressing these issues include setting specific limits on the use of information within agencies and requiring agencies to establish formal agreements with external government entities before sharing personally identifiable information. Establishing effective mechanisms for informing the public about privacy protections. According to the openness principle, the public should be informed about privacy policies and practices, and the accountability principle calls for those who control the collection or use of personal information to be held accountable for taking steps to ensure privacy protection. 
Public notices are a primary means for establishing accountability for privacy protections and giving individuals a measure of control over the use of their personal information. Yet concerns have been raised that Privacy Act notices may not serve this function well. Although the Federal Register is the government’s official vehicle for issuing public notices, an expert panel convened for GAO questioned whether system-of-records notices published in the Federal Register effectively inform the public about government uses of personal information. Options for addressing concerns about public notices could include, among others, setting requirements to ensure that purpose, collection, and use limitations are better addressed in the content of privacy notices and revising the Privacy Act to require that all notices be published on a standard website. Addressing these three areas could provide a number of benefits. First, ensuring that privacy protections are applied consistently to all federal collection and use of information could help ensure that information not retrieved by identifier (such as may occur in data-mining applications) is protected in the same way as information retrieved by identifier. Further, limiting the use of personally identifiable information to a stated purpose could help ensure a proper balance between allowing government agencies to collect and use such information and limiting that collection and use to what is necessary and relevant. Lastly, a clear and effective notice can provide individuals with critical information about what personal data are to be collected, how they are to be used, and the circumstances under which they may be shared. An effective notice can also provide individuals with information they need to determine whether to provide their personal information (if voluntary), or who to contact to correct any errors that could result in an adverse determination about them. 
We noted that some of these issues—such as those dealing with limitations on use and mechanisms for informing the public—could be addressed by OMB through revisions of or supplements to existing guidance. However, we further stressed that unilateral action by OMB would not have the benefit of public deliberations regarding how best to strike an appropriate balance between the government’s need to collect, process, and share personally identifiable information and the rights of individuals to know about such collections and be assured that they are only for limited purposes and uses. Accordingly, we suggested that Congress consider amending applicable laws, such as the Privacy Act and E-Government Act, according to the alternatives we outlined, including revising the scope of the laws to cover all personally identifiable information collected, used, and maintained by the federal government; setting requirements to ensure that the collection and use of personally identifiable information is limited to a stated purpose; and establishing additional mechanisms for informing the public about privacy protections by revising requirements for the structure and publication of public notices. In commenting on a draft of our report, OMB officials noted that they shared our concerns about privacy and listed guidance that the agency has issued in the areas of privacy and information security. The officials stated that they believed it would be important for Congress to consider potential amendments to the Privacy and E-Government Acts in the broader contexts of other privacy statutes and that it would be important for Congress to evaluate fully the potential impact of revisions. In addition, in October 2011, you, the Chairman, introduced a bill to amend the Privacy Act. 
This bill—The Privacy Act Modernization for the Information Age Act of 2011—would, among other things, revise the Privacy Act to cover all personally identifiable information collected, used, and maintained by the federal government and ensure that collection and use of personally identifiable information is limited to a stated purpose. However, revisions to the Privacy and E-Government Acts have not yet been enacted. In addition to relevant privacy laws and federal guidance, a key component of protecting citizens’ personal information is ensuring the security of agencies’ information systems and the information they contain by, among other things, preventing data breaches and reporting those breaches when they occur. In 2006, in the wake of a security breach at the Department of Veterans Affairs resulting in the compromise of personal data on millions of U.S. veterans, we testified on preventing and responding to improper disclosures of personal information in the federal government. We observed that agencies can take a number of actions to help guard against the possibility that databases of personally identifiable information are compromised. In particular, we noted two key steps agencies should take: Develop PIAs whenever information technology is used to process personal information. These assessments are a tool for agencies to fully consider the privacy implications of planned systems and data collections before implementation, when it may be easier to make critical adjustments. Ensure the implementation of a robust information security program as required by FISMA. Such a program includes periodic risk assessments; security awareness training; security policies, procedures, and practices, as well as tests of their effectiveness; and procedures for addressing deficiencies and for detecting, reporting, and responding to security incidents. 
We also noted that data breaches could be prevented by limiting the collection of personal information, limiting the time such data are retained, limiting access to personal information and training personnel accordingly, and considering the use of technological controls such as encryption when data need to be stored on mobile devices. OMB subsequently issued guidance that specifies minimum agency practices for using encryption to protect personally identifiable information. Memorandums M-06-15, Safeguarding Personally Identifiable Information, and M-06-16, Protection of Sensitive Agency Information, reiterated existing agency responsibilities to protect personally identifiable information, and directed agencies to encrypt data on mobile computers and devices and follow National Institute of Standards and Technology (NIST) security guidelines regarding personally identifiable information that is accessed outside an agency’s physical perimeter. In addition, OMB issued memorandum M-07-16, Safeguarding Against and Responding to the Breach of Personally Identifiable Information, which restated the M-06-16 recommendations as requirements and also required the use of NIST-certified cryptographic modules for encrypting sensitive information. In 2008, we reported on the extent to which 24 major agencies had implemented encryption technologies. We found that agencies’ implementation of encryption and development of plans to implement encryption of sensitive information varied, and that from July through September 2007, the agencies collectively reported that they had not yet installed encryption technology on about 70 percent of their laptop computers and handheld devices. Accordingly, we made recommendations to selected agencies to strengthen practices for planning and implementing the use of encryption. The agencies generally agreed with the recommendations and we have assessed that 6 of the 18 recommendations have been addressed. 
Despite preventive measures, data breaches can still occur, and when they do it is critical that proper response policies and procedures be in place. We testified in 2006 (GAO-06-833T) that notification to individuals affected by data breaches and/or the public has clear benefits, such as allowing people to take steps to protect themselves from identity theft. Such notification is consistent with agencies’ responsibility to inform individuals about how their information is being accessed and used, and it promotes accountability for privacy protection. OMB requires agencies to report incidents involving personally identifiable information to US-CERT within 1 hour of discovery of the incident. In addition, OMB memorandum M-07-16 requires agencies to develop and implement breach notification policies governing how and under what circumstances affected parties are notified in the event of a data breach. Further, in a memorandum issued in September 2006, OMB recommended that agencies establish a core management group responsible for responding to the loss of personal information. OMB also established requirements for reporting breaches within the government. In memorandum M-06-20, FY 2006 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management, OMB asked agencies to identify in their annual FISMA reports any physical or electronic incidents involving the loss of or unauthorized access to personally identifiable information. Agencies are also required to report the number of incidents for the reporting period, the number of incidents the agency reported to US-CERT, and the number reported to law enforcement. In 2007 we reported that while requiring agencies to notify affected consumers of a data breach may encourage better security practices and help mitigate potential harm, it also presents certain costs and challenges. 
Federal banking regulators and the President’s Identity Theft Task Force had advocated a notification standard—the conditions requiring notification—that was risk based, allowing individuals to take appropriate measures where the risk of harm existed, while ensuring they are only notified in cases where the level of risk warrants such action. Use of such a risk-based standard could avoid undue burden on organizations and unnecessary and counterproductive notifications to consumers about breaches that present little risk. Over the last several years, we have continued to report that federal agency systems are vulnerable to cyber attacks and the potential compromise of sensitive information, including personally identifiable information. For fiscal year 2011, agency inspector general and GAO assessments of information security controls revealed that most major federal agencies had weaknesses in most of five major categories of information system controls. Further, over the past several years, we and agency inspectors general have made hundreds of recommendations to resolve similar previously identified significant control deficiencies. We have also recommended that agencies fully implement comprehensive, agency-wide information security programs as required by FISMA, including by correcting weaknesses in specific areas of their programs. The effective implementation of these recommendations will strengthen the security posture at these agencies, which will in turn help ensure the protection of personally identifiable information they collect and use. Federal agencies have also reported increasing numbers of security incidents that placed sensitive information at risk, with potentially serious impacts on federal operations, assets, and people. Over the past 6 years, the number of incidents reported by federal agencies to US-CERT has increased from 5,503 incidents in fiscal year 2006 to 42,887 incidents in fiscal year 2011, an increase of nearly 680 percent. (See fig. 
1.) Of the incidents occurring in 2011, 15,560 involved unauthorized disclosure of personally identifiable information, a 19 percent increase over the 13,017 personally identifiable information incidents that occurred in 2010. Reported attacks and unintentional incidents involving federal, private, and critical infrastructure systems encompass a wide range of events, including data loss or theft, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following examples from news media and other public sources illustrate some of the risks: In May 2012, the Federal Retirement Thrift Investment Board reported a sophisticated cyber attack on a computer belonging to a third party, which provided services to the Thrift Savings Plan. As a result of the attack, 123,000 participants had their personal information accessed. According to the board, the information accessed included 46,587 individuals’ names, addresses, and Social Security numbers, and 79,614 individuals’ Social Security numbers and other Thrift Savings Plan-related information. In April 2012, hackers breached a server at the Utah Department of Health to access thousands of Medicaid records. Included in the breach were Medicaid recipients and clients of the Children’s Health Insurance Plan. About 280,000 people had their Social Security numbers exposed. In addition, another 350,000 people listed in the eligibility inquiries may have had other sensitive data stolen, including names, birth dates, and addresses. In March 2012, a news wire service reported that the senior commander of the North Atlantic Treaty Organization (NATO) had been the target of repeated cyber attacks using Facebook that were believed to have originated in China. 
According to the article, hackers repeatedly tried to dupe those close to the commander by setting up fake Facebook accounts in his name in the hope that his acquaintances would make contact and answer private messages, potentially divulging sensitive information about the commander or themselves. In March 2012, it was reported that Blue Cross Blue Shield of Tennessee paid out a settlement of $1.5 million to the U.S. Department of Health and Human Services arising from potential violations stemming from the theft of 57 unencrypted computer hard drives that contained protected health information of over 1 million individuals. Incidents such as these illustrate that sensitive personally identifiable information remains at risk and that improved protections are needed to ensure the privacy of information collected by the government. While OMB has taken steps through the guidance I described to set requirements for agencies to follow, it is unclear to what extent all agencies, including smaller agencies such as the Federal Retirement Thrift Investment Board, are adhering to OMB’s guidelines. In summary, ensuring the privacy and security of personal information collected by the federal government remains a challenge, particularly in light of the increasing dependence on networked information systems that can store, process, and transfer vast amounts of data. These challenges include updating federal laws and guidance to reflect current practices for collecting and using information while striking an appropriate balance between privacy concerns and the government’s need to collect information from individuals. They also involve implementing sound practices for securing and applying privacy protection principles to federal systems and the information they contain. Without sufficient attention to these matters, Americans’ personally identifiable information remains at risk. 
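As a quick consistency check on the incident counts cited earlier (5,503 overall incidents reported to US-CERT in fiscal year 2006 versus 42,887 in fiscal year 2011, and 13,017 versus 15,560 incidents involving personally identifiable information in 2010 and 2011), the percentage increases can be recomputed directly:

```python
# Recompute the percentage increases for the US-CERT incident counts
# quoted in the testimony above.
def pct_increase(old, new):
    """Percentage growth from an old count to a new count."""
    return (new - old) / old * 100

overall = pct_increase(5503, 42887)   # all incidents, FY2006 -> FY2011
pii = pct_increase(13017, 15560)      # PII incidents, 2010 -> 2011
print(f"{overall:.1f}% overall, {pii:.1f}% for PII")
# prints: 679.3% overall, 19.5% for PII
```

The results, roughly 679 percent and 19.5 percent, are consistent with the rounded figures of "nearly 680 percent" and "19 percent" reported above.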
Chairman Akaka, Ranking Member Johnson, and members of the Subcommittee, this concludes my statement. I would be happy to answer any questions you have at this time. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Other key contributors to this statement include John de Ferrari, Assistant Director; Melina Asencio; Sher’rie Bacon; Anjalique Lawrence; Kathleen Lovett Epperson; Lee McCracken; David Plocher; and Jeffrey Woodward. Cybersecurity: Challenges in Securing the Electricity Grid. GAO-12-926T. Washington, D.C.: July 17, 2012. Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012. Information Security: Additional Guidance Needed to Address Cloud Computing Concerns. GAO-12-130T. Washington, D.C.: October 6, 2011. Information Security: Weaknesses Continue Amid New Federal Efforts to Implement Requirements. GAO-12-137. Washington, D.C.: October 3, 2011. Personal ID Verification: Agencies Should Set a Higher Priority on Using the Capabilities of Standardized Identification Cards. GAO-11-751. Washington, D.C.: September 20, 2011. Data Mining: DHS Needs to Improve Executive Oversight of Systems Supporting Counterterrorism. GAO-11-742. Washington, D.C.: September 7, 2011. Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011. Defense Department Cyber Efforts: DOD Faces Challenges in Its Cyber Activities. GAO-11-75. Washington, D.C.: July 25, 2011. Information Security: State Has Taken Steps to Implement a Continuous Monitoring Application, but Key Challenges Remain. GAO-11-149. Washington, D.C.: July 8, 2011. Social Media: Federal Agencies Need Policies and Procedures for Managing and Protecting Information They Access and Disseminate. GAO-11-605. Washington, D.C.: June 28, 2011. 
Cybersecurity: Continued Attention Needed to Protect Our Nation’s Critical Infrastructure and Federal Information Systems. GAO-11-463T. Washington, D.C.: March 16, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Information Security: Federal Agencies Have Taken Steps to Secure Wireless Networks, but Further Actions Can Mitigate Risk. GAO-11-43. Washington, D.C.: November 30, 2010. Cyberspace Policy: Executive Branch Is Making Progress Implementing 2009 Policy Review Recommendations, but Sustained Leadership Is Needed. GAO-11-24. Washington, D.C.: October 6, 2010. Privacy: OPM Should Better Monitor Implementation of Privacy-Related Policies and Procedures for Background Investigations. GAO-10-849. Washington, D.C.: September 7, 2010. Information Management: Challenges in Federal Agencies’ Use of Web 2.0 Technologies. GAO-10-872T. Washington, D.C.: July 22, 2010. Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010. Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010. Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing. GAO-10-513. Washington, D.C.: May 27, 2010. Information Security: Concerted Effort Needed to Consolidate and Secure Internet Connections at Federal Agencies. GAO-10-237. Washington, D.C.: March 12, 2010. Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010. National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009. Health Information Technology: HHS Has Taken Important Steps to Address Privacy Principles and Challenges, Although More Work Remains. 
GAO-08-1138. Washington, D.C.: September 17, 2008. Information Security: Federal Agency Efforts to Encrypt Sensitive Information Are Under Way, but Work Remains. GAO-08-525. Washington, D.C.: June 27, 2008. Privacy: Congress Should Consider Alternatives for Strengthening Protection of Personally Identifiable Information. GAO-08-795T. Washington, D.C.: June 18, 2008. Privacy: Agencies Should Ensure That Designated Senior Officials Have Oversight of Key Functions. GAO-08-603. Washington, D.C.: May 30, 2008. Privacy: Alternatives Exist for Enhancing Protection of Personally Identifiable Information. GAO-08-536. Washington, D.C.: May 19, 2008. Health Information Technology: Efforts Continue but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-988T. Washington, D.C.: June 19, 2007. Personal Information: Data Breaches Are Frequent, but Evidence of Resulting Identity Theft Is Limited; However, the Full Extent Is Unknown. GAO-07-737. Washington, D.C.: June 4, 2007. Privacy: Lessons Learned about Data Breach Notification. GAO-07-657. Washington, D.C.: April 30, 2007. Homeland Security: Continuing Attention to Privacy Concerns Is Needed as Programs Are Developed. GAO-07-630T. Washington, D.C.: March 21, 2007. Health Information Technology: Early Efforts Initiated but Comprehensive Privacy Approach Needed for National Strategy. GAO-07-238. Washington, D.C.: January 10, 2007. Privacy: Preventing and Responding to Improper Disclosures of Personal Information. GAO-06-833T. Washington, D.C.: June 8, 2006. Data Mining: Agencies Have Taken Key Steps to Protect Privacy in Selected Efforts, but Significant Compliance Issues Remain. GAO-05-866. Washington, D.C.: August 15, 2005. Privacy Act: OMB Leadership Needed to Improve Agency Compliance. GAO-03-304. Washington, D.C.: June 30, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The federal government collects and uses personal information on individuals in increasingly sophisticated ways, and its reliance on information technology (IT) to collect, store, and transmit this information has also grown. While this enables federal agencies to carry out many of the government’s critical functions, concerns have been raised that the existing laws for protecting individuals’ personal information may no longer be sufficient given current practices. Moreover, vulnerabilities arising from agencies’ increased dependence on IT can result in the compromise of sensitive personal information, such as inappropriate use, modification, or disclosure. GAO was asked to provide a statement describing (1) the impact of recent technology developments on existing laws for privacy protection in the federal government and (2) actions agencies can take to protect against and respond to breaches involving personal information. In preparing this statement, GAO relied on previous work in these areas as well as a review of more recent reports on security vulnerabilities. Technological developments since the Privacy Act became law in 1974 have changed the way information is organized and shared among organizations and individuals. Such advances have rendered some of the provisions of the Privacy Act and the E-Government Act of 2002 inadequate to fully protect all personally identifiable information collected, used, and maintained by the federal government. For example, GAO has reported on challenges in protecting the privacy of personal information relative to agencies’ use of Web 2.0 and data-mining technologies. 
While laws and guidance set minimum requirements for agencies, they may not protect personal information in all circumstances in which it is collected and used throughout the government and may not fully adhere to key privacy principles. GAO has identified issues in three major areas: Applying privacy protections consistently to all federal collection and use of personal information. The Privacy Act’s protections only apply to personal information when it is considered part of a system of records as defined by the act. However, agencies routinely access such information in ways that may not fall under this definition. Ensuring that use of personally identifiable information is limited to a stated purpose. Current law and guidance impose only modest requirements for describing the purposes for collecting personal information and how it will be used. This could allow for unnecessarily broad ranges of uses of the information. Establishing effective mechanisms for informing the public about privacy protections. Agencies are required to provide notices in the Federal Register of information collected, categories of individuals about whom information is collected, and the intended use of the information, among other things. However, concerns have been raised about whether this is an effective mechanism for informing the public. The potential for data breaches at federal agencies also poses a serious risk to the privacy of individuals’ personal information. OMB has specified actions agencies should take to prevent and respond to such breaches. In addition, GAO has previously reported that agencies can take steps that include assessing the privacy implications of a planned information system or data collection prior to implementation; ensuring the implementation of a robust information security program; and limiting the collection of personal information, the time it is retained, and who has access to it, as well as implementing encryption. 
However, GAO and inspectors general have continued to report on vulnerabilities in security controls over agency systems and weaknesses in their information security programs, potentially resulting in the compromise of personal information. These risks are illustrated by recent security incidents involving individuals’ personal information. Federal agencies reported 13,017 such incidents in 2010 and 15,560 in 2011, an increase of 19 percent. GAO previously suggested that Congress consider amending applicable privacy laws to address identified issues. GAO has also made numerous recommendations to agencies over the last several years to address weaknesses in policies and procedures related to privacy and to strengthen their information security programs.
MTSA was landmark legislation that mandated a quantum leap in security preparedness for America’s maritime ports. Prior to the terrorist attacks of September 11, 2001, federal attention at ports tended to focus on navigation and safety issues, such as dredging channels and environmental protection. While the terrorist attacks initially focused the nation’s attention on the vulnerability of its aviation system, it did not take long for attention to fall on the nation’s ports as well. Besides being gateways through which dangerous materials could enter the country, ports represent attractive targets for other reasons: they are often large and sprawling, accessible by water and land, close to crowded metropolitan centers, and interwoven with highways, roads, factories, and businesses. Security is made more difficult by the many stakeholders, public and private, involved in port operations. These stakeholders include local, state, and federal agencies; multiple law enforcement jurisdictions; transportation and trade companies; and factories and other businesses. Passed in November 2002, MTSA imposed an ambitious schedule of requirements on a number of federal agencies. MTSA called for a comprehensive security framework—one that included planning, personnel security, and careful monitoring of vessels and cargo. (See table 2 for examples of key MTSA activities.) MTSA tasked the Secretary of DHS, and the Secretary in turn has tasked the Coast Guard, with lead responsibility for the majority of its requirements. Timetables were often daunting. For example, one of the Coast Guard’s responsibilities was to develop six interim final rules implementing MTSA’s operational provisions in sufficient time to receive public comment and to issue a final rule by November 25, 2003. Adding to the difficulty has been the need to implement MTSA against the backdrop of the most extensive federal reorganization in over a half-century. 
Most of the agencies with MTSA responsibilities were reorganized into the Department of Homeland Security in March 2003, less than 5 months after MTSA’s enactment. Among the 22 agencies in the new department were some relatively new organizations, such as TSA. Other more longstanding agencies, including the Coast Guard, U.S. Customs Service, and Immigration and Naturalization Service, were transferred from a variety of executive departments. This vast recombination of organizational cultures introduced new chains of command and reporting responsibilities. MTSA implementation also involved coordination with other executive agencies, including the Departments of State, Transportation, and Justice. Since the passage of MTSA in 2002, the responsible agencies—primarily the Coast Guard, TSA, and BCBP in DHS, along with MARAD in the Department of Transportation—have made strides in implementing the act’s security provisions. MTSA called for actions in 46 key areas we identified. Thus far, we have received information from the responsible agencies on 43 of these areas. Of the 43 areas, work is done in 2 (issuing interim rules and developing training for maritime security personnel) and under way in 40 others. These agencies also reported that cooperation and coordination have been extensive throughout the course of their activities. A major achievement has been the Coast Guard’s publication on July 1, 2003, of six interim rules on the provisions where it had lead responsibility. The rules set requirements for many of the provisions delegated to the Coast Guard under MTSA. The rules, which included sections on national maritime security initiatives, area maritime security, vessel security, facility security, outer continental shelf facility security, and automatic identification systems, were published approximately 8 months after MTSA was enacted. 
Doing so kept the Coast Guard on schedule for meeting MTSA’s requirement to receive public comment and issue the final rules by the end of November 2003. The rules provided a comprehensive description of industry-related maritime security requirements and the cost-benefit assessments of the entire set of rules. The Coast Guard plans to publish the final rules before November 25, 2003, after receiving and acting on comments to the interim rules. Another Coast Guard accomplishment was the establishment of Maritime Safety and Security Teams called for under MTSA. These teams, which can be rapidly deployed where needed, are designed to provide antiterrorism protection for strategic shipping, high-interest vessels, and critical infrastructure. The Coast Guard has already deployed four teams—in Seattle and Galveston and near Norfolk and Los Angeles. The Coast Guard will deploy teams in New York City and near Jacksonville this year, and six more teams have been requested in the president’s budget in 2004. These are to be located in San Diego, Honolulu, Boston, San Francisco, New Orleans, and Miami. Other agencies in DHS have also made progress in their implementation of MTSA provisions. Responding to MTSA’s requirement for the development of biometric transportation security identification cards that would allow only authorized persons access to secure areas of vessels or facilities, TSA is currently testing several different technology credentialing systems on sample cards. The agency will begin testing prototypes of the entire security card process, including conducting background checks, collecting biometric information on workers, verifying cardholders’ identities, and issuing cards in early 2004. TSA plans to start issuing about 5 to 6 million new cards per year in the middle of 2004. Developing all of the policies and programs to make this system work is still under way and will continue to pose challenges to continued progress. 
Another DHS agency, BCBP, was delegated the responsibility for issuing regulations for electronic transmission of cargo information to BCBP by October 1, 2003; BCBP published its proposed rule on July 23, 2003. BCBP was waiting for comments on the proposed rule, and BCBP officials told us that they expect to publish the final rule on time. MARAD has also made progress in meeting its requirements. Among the provisions for which MARAD is responsible are developing standards and curricula for the training of maritime security personnel. MARAD submitted a Report to Congress, dated May 2003, containing the standards and curriculum called for by MTSA in the form of model course frameworks for seven categories of maritime security professionals. As an extension of the MTSA project, MARAD also produced three model maritime security courses for the International Maritime Organization (IMO). An IMO validation team reviewed drafts of these courses and found little need for change. Agency officials told us that cooperation and coordination on MTSA implementation have been strong. Coast Guard officials said that they had developed channels of communication with other relevant agencies, and they said these other agencies were supportive in implementing provisions for which they did not have primary responsibility. In the work we have conducted at ports since the September 11th attacks, we have noted an increasing level of cooperation and coordination at the port level. However, ensuring smooth coordination as the many aspects of MTSA implementation continue is a considerable challenge. Additional work will be needed to determine the extent to which this spirit of cooperation continues to be translated into effective actions at the level where programs must be implemented. While progress is being made, our preliminary work has identified five areas that merit attention and further oversight. 
Three relate primarily to security issues: (1) the limited number of ports that will be covered by the vessel identification system, (2) questions about the scope and quality of port security assessments, and (3) the Coast Guard’s plans not to individually approve security plans for foreign vessels. The remaining two relate primarily to operational and efficiency matters: (1) potential duplication of maritime intelligence efforts and (2) inconsistency with Port Security Grant Program requirements. The main security-related issue involves the implementation of a vessel identification system. MTSA called for the development of an automatic identification system. Coast Guard implementation calls for a system that would allow port officials and other vessels to determine the identity and position of vessels entering or operating within the harbor area. Such a system would provide an “early warning” of an unidentified vessel or a vessel that was in a location where it should not be. Implementing the system effectively, however, requires considerable land-based equipment and other infrastructure that is not currently available in many ports. As a result, for the foreseeable future, the system will be available in less than half of the 25 busiest U.S. ports. The identification system, called the Automatic Identification System (AIS), uses a device aboard a vessel to transmit a unique identifying signal to a receiver located at the port and to other ships in the area. This information gives port officials and other vessels nearly instantaneous information about a vessel’s identity, position, speed, and course. MTSA requires that vessels in certain categories install tracking equipment between January 1, 2003, and December 31, 2004, with the specific date dependent on the type of vessel and when it was built. The only ports with the necessary infrastructure to use AIS are those that have waterways controlled by Vessel Traffic Service (VTS) systems. 
Similar to air traffic control systems, VTS uses radar, closed circuit television, radiophones, and other technology to allow monitoring and management of vessel traffic from a central shore-based location. The Coast Guard currently plans to install AIS receiving equipment at the 10 locations with VTS systems. More than half of the 25 busiest ports, such as Philadelphia, Baltimore, Miami, Charleston, Tampa, and Honolulu, do not have VTS systems; hence, AIS will be inoperable at these locations for the foreseeable future. When AIS will be operable at these other ports depends heavily on how soon the Coast Guard can put an extensive amount of shore-based infrastructure in place. For the present, the Coast Guard is requiring AIS equipment only for (1) vessels on international voyages and (2) vessels navigating waterways under VTS control. Some of these international ships will be calling on ports that will not have AIS equipment. In such cases, the transmitters aboard the vessels will be of no use for the ports, because they will not have equipment to receive the signals. Cost is a major factor in the full implementation of AIS. Expanding coverage will require substantial additional investment, both public and private. The Coast Guard’s budget request for fiscal year 2004 includes $40 million for shore-based AIS equipment and related infrastructure—an amount that covers only current VTS areas. According to a Coast Guard official, wider-reaching national implementation of AIS would involve installation and training costs ranging from $62 million to $120 million. Also, the cost of installing AIS equipment aboard individual ships averages about $10,000 per vessel, which is to be borne by the vessel owner or operator. Some owners and operators, particularly of domestic vessels, have complained about the cost of equipping their vessels. 
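AIS is a published radio protocol: shipboard transponders broadcast 6-bit-armored payloads carried in NMEA "AIVDM" sentences, which shore receivers decode into a vessel's identity and position. As a minimal illustrative sketch (not any Coast Guard system), the following decodes the message type and the vessel identity number (MMSI) from a sample payload; the payload string is a published test vector, and the field offsets follow the standard AIS position-report layout:

```python
def ais_bits(payload):
    """Expand a 6-bit-armored AIS payload into a string of bits."""
    bits = []
    for ch in payload:
        v = ord(ch) - 48
        if v > 40:       # characters '`'..'w' map to values 40..63
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)

def decode_type_and_mmsi(payload):
    """Message type occupies bits 0-5; the MMSI is the 30 bits at bit 8."""
    b = ais_bits(payload)
    return int(b[0:6], 2), int(b[8:38], 2)

# Published sample payload for a type 1 position report.
msg_type, mmsi = decode_type_and_mmsi("177KQJ5000G?tO`K>RA1wUbN0TKH")
print(msg_type, mmsi)  # prints: 1 477553000
```

The same bit stream carries the position, speed, and course fields the testimony describes; shore-side receivers simply extract further fixed-offset fields from it.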
Another security-related issue involves the Coast Guard’s efforts to address MTSA’s security planning requirements through a series of security assessments of individual ports. Security assessments are intended to be in-depth examinations of security threats, vulnerabilities, consequences, and conditions throughout a port, including not just transportation facilities, but also factories and other installations that pose potential security risks. The Coast Guard had begun these assessments before MTSA was passed and decided to continue the process, changing it as needed to meet MTSA planning requirements, which include developing area security plans based on the evaluation of specific facilities throughout the port. At the request of the Subcommittee on Coast Guard and Maritime Transportation, House Committee on Transportation and Infrastructure, we have been examining these assessments, which are being conducted by an outside contractor. Our preliminary work has surfaced several potential concerns, which we are still in the process of reviewing. One concern involves an apparent truncation of the review process for ensuring that the assessment methodology will deliver what MTSA requires. When MTSA took effect, the outside contractor had already completed the first 10 of 55 planned assessments. The Coast Guard directed the contractor to modify the assessment methodology to take MTSA’s planning requirements into account, and it decided that the next two assessments would be a pilot test of the revised methodology. The Coast Guard plans to use the pilot test to evaluate lessons learned, so that additional modifications can be made before any further contracts are signed. Instead of waiting to see what changes might be needed as a result of the pilot projects, however, the contractor has apparently started the scoping phase for the next six port assessments. 
Scoping is a significant part of the new methodology, and as such, it is a major determinant in the nature and breadth of the issues to be addressed, as well as the assessment’s cost. The contractor has also reportedly sought to negotiate and sign contracts to review the next six ports. Since the pilot projects will not be completed until at least October 2003, it seems premature to reach decisions about the scope of the assessments and sign contracts for them. The revised methodology needs to be reviewed so that any needed changes are reflected in the next contract. A second concern that has surfaced involves the scope and quality of the assessments themselves. As part of our work, we have interviewed port stakeholders to obtain their views on the process. At one port, where the assessment has been completed and the report issued, stakeholders said they had not been given an opportunity to comment on the report, which contained factual errors and did not include an assessment of railroads and the local power generating plant. At the other port, where the assessment was still in process, local Coast Guard personnel and port stakeholders noted that a survey instrument referred to the wrong port, asked questions they regarded as not pertaining to security, and was conducted in ways that raised concerns about credibility. Many of these stakeholders saw little usefulness in the assessments, believing that they added little to what the stakeholders had already learned from conducting their own more extensive security reviews of individual facilities or installations. They said the assessments focused on the same systems that had already been reviewed and would have greater value if they were focused on matters that had not already been thoroughly studied, such as the potential for waterborne assault. 
Coast Guard officials at the two ports said, however, that in their view the assessments would provide such benefits as a more comprehensive perspective on port operations and vulnerabilities and would validate their need for additional assets and people to provide adequate security. Ensuring that the assessments are of high quality is important not only for their effectiveness as security instruments, but also because of their cost. For the most part, assessments have been conducted only at medium-sized ports, and even there they are costing $1 million or more per assessment. Concerns have been raised about the proposed approach for meeting MTSA’s requirement that the Secretary of DHS approve vessel security plans for all vessels operating in U.S. waters. Vessel security plans include such steps as responding to assessed vulnerabilities, designating security officers, conducting training and drills, and ensuring that appropriate preventive measures will be taken against security incidents. To implement this requirement, the Coast Guard has stated that, in general, it does not intend to individually approve vessel security plans for foreign vessels. Separate from MTSA, an international agreement requires vessels to carry on board a vessel security plan that is approved by the vessel’s country of registry—its “flag” state—to ensure that an acceptable security plan is in place. The Coast Guard’s position is that a flag state’s approval of a vessel security plan will be deemed to constitute the Secretary’s approval required under MTSA. However, MTSA does not mention any role for foreign nations in the Secretary’s required approval of vessel security plans, and some concerns have been raised about the advisability of allowing flag states—some with a history of lax regulation—to ensure the security of vessels traveling to the United States. 
The international requirement for a security plan is contained in the International Ship and Port Facility Security (ISPS) Code. Under this requirement, which was adopted at about the same time that MTSA was enacted and will go into effect on July 1, 2004, the vessel’s flag state is responsible for reviewing and certifying the vessel’s security plan. Before this time, flag states had already been responsible for ensuring that their vessels met safety requirements. Critics of using this approach for MTSA-required security plans have pointed out that in the past, some flag states had a spotty record of enforcing safety requirements. Rather than individually approving security plans for vessels overseen by foreign flag states, the Coast Guard plans an extensive monitoring effort as part of its oversight of vessels bound for U.S. waters. In its interim rule, the Coast Guard stated that, as part of an aggressive port state control program, it would verify that foreign vessels have an approved, fully implemented security plan and would track the performance of owners, operators, flag administrations, charterers, and port facilities. Coast Guard officials have said that they are working from existing procedures, in that their security effort is modeled after their safety program. They also said, however, that they have no contingency plans in case stronger measures than those called for in their current plans are required. The concerns are limited mainly to foreign flag vessels. Vessels registered in the United States will have their security plans reviewed and approved by the Coast Guard. The Coast Guard has reportedly estimated that review and approval of security plans for domestic vessels and facilities will require 150 full-time personnel and cost $70 million as part of its 2004 budget.
Turning to issues that are related more to program efficiency and management than to security concerns, one issue that has arisen involves potential duplication in the area of maritime intelligence. MTSA required the Secretary of Homeland Security to implement a system to collect, integrate, and analyze information on vessels operating on or bound for U.S. waters. The Secretary of DHS in turn delegated responsibilities to TSA and the Coast Guard. There appears to be potential for duplication by TSA and the Coast Guard in these efforts. The duplication concerns center on the new Integrated Maritime Information System (IMIS) required under the Secretary’s delegations. The Secretary of DHS delegated primary responsibility for this system to TSA, and TSA was appropriated $25 million to develop it. Coast Guard officials have voiced concerns that TSA’s efforts in developing the overall system are duplicating existing Coast Guard efforts that are more extensive and better funded. According to these officials, IMIS is very similar to the Coast Guard’s Intelligence Coordination Center (ICC) Coastwatch program, an effort that has 10 times the amount of funding appropriated for IMIS, involves 100 more staff members, and has staff already in place with considerable intelligence analysis capability. Coast Guard officials questioned whether TSA’s smaller effort could yield information of similar quality. Coast Guard officials also expressed concerns about potential duplication of effort at the port level. TSA’s tests of the system would place TSA personnel at the port level. Coast Guard personnel noted that these efforts seemed similar to the Coast Guard’s Field Intelligence Support Teams, as well as teams from the legacy agencies, the Customs Service and the Immigration and Naturalization Service, that also operate at the port level. Coast Guard officials said that they saw little sharing of the intelligence at that level. 
While we have not yet had the opportunity to observe the intelligence arms of TSA and the Coast Guard in action to more fully evaluate the potential for duplication of effort, it does appear that some potential duplication exists. From conversations with TSA and Coast Guard officials, we could discern little difference in a number of their information and integration efforts. Aside from the potentially inefficient use of resources, this possible duplication may also prevent either agency from obtaining a complete intelligence picture and detecting potential threats.

The final issue involves TSA’s implementation of MTSA’s grant program. MTSA required the Secretary of Transportation to establish a program of grants to ports and other entities to implement area and facility-specific security plans. Prior to the enactment of MTSA, TSA, in partnership with MARAD and the Coast Guard, had already begun a port security grant program in February 2002. This program was originally intended to fund security assessments and enhanced operational security at ports and facilities, and two rounds of grants were funded before MTSA was enacted in November 2002. TSA officials told us that, rather than creating a new grant program to specifically respond to MTSA, they are adapting the existing program to meet MTSA requirements. Under this approach, some time will elapse before all of the grant requirements specified under MTSA are in place. The existing grant program differs from MTSA requirements in several respects. Most significantly, the existing grant program does not require cost-sharing, while MTSA does. MTSA grant provisions state that for projects costing more than $25,000, federal funds for any eligible project shall not exceed 75 percent of the total cost.
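The cost-sharing provision just described can be expressed as a simple calculation. The sketch below is illustrative only: the function name, the treatment of projects at or below the $25,000 threshold, and the example figures are assumptions for illustration, not part of the statute.

```python
# Illustrative sketch of the MTSA cost-sharing provision described above:
# for projects costing more than $25,000, federal funds may not exceed
# 75 percent of the total project cost. Threshold handling below the cap
# is an assumption made for this example.

THRESHOLD = 25_000.0   # projects above this amount require cost-sharing
FEDERAL_CAP = 0.75     # maximum federal share for such projects

def max_federal_share(total_cost: float) -> float:
    """Return the maximum federal contribution for an eligible project."""
    if total_cost > THRESHOLD:
        return total_cost * FEDERAL_CAP
    return total_cost  # assumed fully fundable at or below the threshold

# A $100,000 project could receive at most $75,000 in federal funds,
# leaving at least $25,000 to be covered by the grantee.
print(max_federal_share(100_000.0))  # 75000.0
print(max_federal_share(20_000.0))   # 20000.0
```

Under this rule, a grantee's required match grows with project size, which is why not enforcing it leaves federal dollars spread across fewer projects.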
A TSA official said that, in starting to fold MTSA grants into the existing program for the third round of grants, TSA was still disbursing moneys from a prior appropriation, and the language of that legislation limited its ability to make changes that would meet MTSA requirements. As a result, TSA encouraged cost-sharing but did not require it. While TSA limited its changes for the first three rounds of grants, continued deviation from MTSA cost-sharing requirements in the future would keep federal dollars from reaching as many projects as possible. By not requiring a grantee to share in the financial burden, TSA does not take into account the applicant’s ability to participate in the funding. If applicants have such ability, the result is that available federal dollars are not effectively leveraging as many projects as possible.

There are two additional areas where TSA’s current grant program differs from MTSA provisions. First, the current grant program does not specifically correspond to the stated purpose of MTSA’s grant funding, which is to implement area and facility-specific security plans. TSA officials told us that in round three, they would give preference to regulated facilities and vessels that were already required to have security assessments and plans in place. As a result, the grants would likely be for mitigating identified vulnerabilities rather than developing plans. Second, in the application instructions for the current program, TSA said that recurring costs for personnel and operations and maintenance costs were not eligible for funding. MTSA specifically includes these costs. TSA officials said that for later rounds of grants during fiscal year 2004, they would discuss potential changes in the Port Security Grant Program with the Coast Guard and MARAD. These potential changes would include requiring that all grant proposals be designed to meet MTSA port security grant requirements.
The officials said, however, that before making any changes, they would look for specific directions accompanying currently pending appropriations for fiscal year 2004. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other members of the committee may have. For information about this testimony, please contact Margaret Wrightson, Director, Homeland Security and Justice Issues, at (415) 904-2000. Individuals making key contributions to this testimony include Jonathan Bachman, Jason Berman, Steven Calvo, Matthew Coco, Rebecca Gambler, Geoffrey Hamilton, Christopher Hatscher, Lori Kmetz, Stan Stenersen, and Randall Williamson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

After the events of September 11, 2001, concerns were raised over the security of U.S. ports and waterways. In response to the concerns over port security, Congress passed the Maritime Transportation Security Act in November 2002. The act created a broad range of programs to improve the security conditions at the ports and along American waterways, such as identifying and tracking vessels, assessing security preparedness, and limiting access to sensitive areas. A number of executive agencies were delegated responsibilities to implement these programs and other provisions of the act. The Senate Committee on Commerce, Science, and Transportation asked GAO to conduct a review of the status of the agencies' efforts to implement the security requirements of the act. This testimony reflects GAO's preliminary findings; much of GAO's work in the area is still under way.
Agencies responsible for implementing the security provisions of the Maritime Transportation Security Act have made progress in meeting their requirements. Thus far, GAO has obtained information about 43 of 46 specific action areas, and efforts are under way in 42 of them. For example, the Coast Guard, the Department of Homeland Security agency with lead responsibility for most of the assignments, has published six interim rules covering responsibilities ranging from security of port facilities to vessel identification systems. Two other agencies within the new department--the Transportation Security Administration and the Bureau of Customs and Border Protection--have actions under way in such areas as establishing an identification system for millions of port workers and setting information requirements for cargo. The Maritime Administration, a Department of Transportation agency, has already completed or is well into implementing such responsibilities as developing training for security personnel. While much has been accomplished, GAO's review found five areas of concern. Three relate primarily to security issues: (1) the limited number of ports covered by the vessel identification system; (2) questions about the scope and quality of port security assessments; and (3) concerns related to approving security plans for foreign vessels. Two relate primarily to organizational and operational matters: (1) potential duplication of maritime intelligence efforts; and (2) inconsistency with Port Security Grant Program requirements.
Medicare Part D coverage is provided through private sponsors that offer a choice of PDPs with different costs and coverage. The largest sponsors offer PDPs to beneficiaries throughout the United States and generally have experience in providing Medicare coverage and with call center operations. Under Part D, each PDP may offer the standard prescription drug benefit or coverage that is different from, but at least actuarially equivalent to, the standard benefit. According to the Medicare Payment Advisory Commission, for 2006, 9 percent of PDPs offer the standard benefit, 48 percent offer a basic plan that has the same actuarial value as the standard benefit but with a different design, and 43 percent offer enhanced coverage that exceeds the standard benefit. Therefore, the specific premium, deductible, and copayment or coinsurance amounts, as well as the coverage gap—the period during which beneficiaries must pay 100 percent of their drug costs—of each PDP may vary. In addition, MMA and CMS regulations require plan formularies—the list of drugs a PDP covers—to meet certain standards, but within these standards, the drugs that are covered and the utilization management tools that are used to control costs may vary. If beneficiaries’ drugs are not on their PDP’s formulary, rather than paying full (retail) price for them, beneficiaries may switch to a similar drug that is on the formulary. Beneficiaries may also request that the plan make an exception to the formulary and cover their drugs. If the PDP denies that request, CMS regulations require that beneficiaries generally be able to appeal the decision to the sponsor. Although certain drugs may be on a PDP’s formulary, they may be subject to one or more of several utilization management tools—the most common of which are prior authorization, quantity limits, step therapy, and generic substitution.
For drugs subject to prior authorization, beneficiaries need approval from their PDP before they can fill their prescription. For drugs subject to quantity limits, the plan limits the amount of the drug it covers over a certain period of time. For drugs subject to step therapy, the PDP requires that the beneficiary first try a less expensive drug for their condition before it will cover the beneficiary’s prescribed drug. Finally, generic substitution means that when there is a generic substitute available, the PDP will automatically provide the generic, unless the beneficiary’s doctor specifically orders the brand-name drug.

To help cover costs under Part D, Medicare provides subsidies to certain low-income beneficiaries. For example, Medicare beneficiaries for whom Medicaid pays their Medicare Part B premium automatically receive the full low-income subsidy. This subsidy provides the beneficiary with reduced copayment amounts, covers any deductible, provides drug coverage during the coverage gap, and helps pay their PDP premium, up to a certain amount. Other Medicare beneficiaries, however, must apply for the low-income subsidy through the Social Security Administration, and may receive only a partial subsidy.

For 2006, 79 sponsors are offering over 1,400 PDPs, each of which has been approved by CMS to ensure that it meets established standards. Ten of these sponsors are offering PDPs in all 34 PDP regions, and they account for nearly 62 percent of PDPs nationwide. The largest PDP sponsors are either in the commercial insurance or pharmacy benefit management and services sectors and generally have prior experience with call center operations. In addition, the largest PDP sponsors all have some prior experience with Medicare. Most offered a Medicare prescription drug discount card or partnered with a company that did, and most offer Medicare Advantage plans.

Almost all of the calls we placed were answered by a CSR with minimal delay.
A limited number of calls were not answered by CSRs, mainly due to disconnections. Further, we found that most CSRs with whom we spoke were easy to understand, polite, and professional, and many provided helpful suggestions and information. Call centers generally provided prompt service in answering our calls. The wait time to reach a CSR was generally short—46 percent of the 864 calls CSRs fielded were answered in less than 1 minute and 96 percent of the calls were answered in less than 5 minutes (see fig. 1). Only 9 calls (1 percent) were answered in 10 minutes or more, with the longest wait time being 25 minutes (1 call). For a small number of calls—36 of the 900 calls we placed (4 percent)—we did not receive an answer to our questions because we did not reach a CSR. For almost all of these calls (33), this occurred because we were disconnected. CSRs generally provided courteous service. Our callers noted that many were helpful and friendly, and we found that CSRs were easy to understand, polite, and professional in 98 percent of the calls. In addition, if a CSR did not know or could not answer a question, many provided additional resources for obtaining the answer, most commonly during calls on the low-income subsidy (question 3). While CSRs did not provide an answer for over one-third of the calls for this question, in over 80 percent of these cases, CSRs suggested another source the caller could contact to obtain the answer—most commonly Medicare or the Social Security Administration. Many CSRs also provided callers with helpful suggestions that related to our questions. For example, during question 1 calls on the PDP comparison for a low-utilization beneficiary, CSRs provided information about a mail-order option to obtain drugs in 22 percent of the calls. 
For question 2 on the PDP comparison for a high-utilization beneficiary, CSRs provided the caller with information about lower-cost drugs in 41 percent of the calls and inquired as to whether the beneficiary was eligible for the low-income subsidy in 24 percent of the calls.

CSRs at the 10 PDP sponsor call centers we contacted provided accurate and complete responses to about one-third of the calls they fielded, although the accuracy and completeness rates for each of the 10 sponsor call centers and for each of the five questions varied. CSRs were unable to provide an answer for 15 percent of the questions posed, primarily those related to plan costs. In addition, we found that CSRs within the same call centers sometimes provided inconsistent responses to our questions. Excluding the 4 percent of calls for which we did not reach a CSR, we obtained accurate and complete responses to 34 percent of the calls—294 of 864—and obtained incomplete responses to another 29 percent of the calls (see fig. 2). The overall accuracy and completeness rates for each of the 10 PDP sponsor call centers varied widely, ranging from 20 to 60 percent (see fig. 3). Only 1 sponsor call center had an overall accuracy and completeness rate of greater than 50 percent and 2 sponsor call centers had rates of 25 percent or less. No sponsor’s call center consistently had the highest or lowest accuracy and completeness rate for all questions. For example, although 1 call center had the highest accuracy and completeness rate for both question 1 (the PDP comparison for a low-utilization beneficiary) and question 2 (the PDP comparison for a high-utilization beneficiary), it had the second-lowest accuracy and completeness rate for question 4 (nonformulary drugs). Variation across call centers was due, in part, to differences in the resources that CSRs said were available to them.
For example: In response to questions 1 and 2, CSRs at two call centers indicated that they were able to compute the annual cost of the least expensive plan because they had access to a computerized “cost calculator.” However, CSRs at other call centers stated that they could not compute an annual cost because they did not have access to such a resource. We located cost calculators on the Web sites of seven sponsors, each of which had call center CSRs who stated that they did not know or could not calculate an annual cost. CSRs at six different sponsor call centers stated that they could not calculate the annual cost of the least expensive plan because they did not have access to the retail prices of the beneficiary’s drugs. In contrast, CSRs at two other call centers stated that they did have access to these prices, and were able to use them in calculations. For each of the five questions, accuracy and completeness rates varied, but were generally low. They ranged from 14 to 60 percent (see fig. 4). Relatively few CSRs were able to accurately identify the least costly plan and calculate its annual cost. In addition, the annual cost estimates that CSRs provided were often substantially different from the plans’ actual costs. For example: For the low-utilization beneficiary (question 1), about 1 in 3 responses were incomplete; that is, CSRs identified the least costly plan, but either inaccurately calculated its annual cost or stated that they could not provide any annual cost. Over half of the CSRs that provided an inaccurate response quoted a cost that was greater than the actual cost. For the high-utilization beneficiary (question 2), about 3 in 10 responses were incomplete. Among the 23 CSRs that correctly identified the least costly plan, but gave an inaccurate annual cost, almost all provided a quote that was less than the actual cost, and in 11 cases over $1,000 less. 
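The annual-cost comparison that questions 1 and 2 asked CSRs to perform, and that the Web-based "cost calculators" mentioned above automate, can be sketched as follows. The plan names, premium and deductible figures, and the flat-coinsurance design are hypothetical simplifications; actual Part D benefit structures involve coverage gaps, tiered copayments, and drug-specific retail prices.

```python
# Hypothetical, simplified annual-cost comparison across a sponsor's PDPs
# for one beneficiary's drug list. All plan data below are invented for
# illustration; real Part D designs are considerably more complex.

def annual_cost(plan: dict, monthly_drug_retail: float) -> float:
    """Premiums plus out-of-pocket drug costs over a year, assuming the
    beneficiary pays the deductible first and then a flat coinsurance."""
    yearly_retail = monthly_drug_retail * 12
    after_deductible = max(yearly_retail - plan["deductible"], 0.0)
    out_of_pocket = (min(yearly_retail, plan["deductible"])
                     + after_deductible * plan["coinsurance"])
    return plan["monthly_premium"] * 12 + out_of_pocket

plans = {
    "Plan A": {"monthly_premium": 32.0, "deductible": 250.0, "coinsurance": 0.25},
    "Plan B": {"monthly_premium": 45.0, "deductible": 0.0, "coinsurance": 0.25},
}

# Identify the least costly plan for a beneficiary with $120/month in drugs.
least_costly = min(plans, key=lambda name: annual_cost(plans[name], 120.0))
print(least_costly, round(annual_cost(plans[least_costly], 120.0), 2))  # Plan B 900.0
```

Even this simplified version makes plain why CSRs who lacked retail drug prices or a calculator could not produce an annual cost: without the retail figures, the out-of-pocket portion cannot be computed at all.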
About two-thirds of the CSRs were unable to accurately report whether the sponsor offered a PDP for which a Medicare beneficiary that received help from Medicaid would not have to pay a premium (question 3). Specifically, CSRs fielding this call answered inaccurately 31 percent of the time and did not provide an answer 35 percent of the time. For most of the inaccurate answers, CSRs stated that a certain PDP would not require a premium from the beneficiary, but, in fact, it would. Other inaccurate responses showed a poor understanding of the low-income subsidy benefit; for example, two CSRs incorrectly stated that the low-income subsidy did not help offset the premium at all. Half of the CSRs responding to question 4 incompletely described the options available to a beneficiary taking a nonformulary drug. Of the incomplete responses, about 4 in 5 CSRs mentioned that the beneficiary could request an exception to have the plan cover the nonformulary drug, but not that the beneficiary could switch to a drug that the plan covers. In addition, 15 percent of CSR responses included neither possibility, with many CSRs stating that the beneficiary’s only option would be to pay full price for nonformulary drugs. Finally, CSRs accurately described at least two utilization management tools in 60 percent of our calls for question 5—the highest accuracy and completeness rate of our five questions. Other CSRs identified, but could not accurately describe, specific tools. For example, one CSR incorrectly stated that quantity limits—a limit on the amount of a drug that the plan will cover over a certain period of time—means that a pharmacy may not have enough of a drug to fill the beneficiary’s prescription. Overall, CSRs stated that they did not know or could not answer our question for 15 percent of the calls. 
This was most common for the questions related to PDP costs (the PDP comparison for a low-utilization beneficiary, the PDP comparison for a high-utilization beneficiary, and the low-income subsidy). For question 2 calls regarding the PDP comparison for a high-utilization beneficiary, 30 percent of the CSRs stated that they were unable to tell the caller which PDP would cost the beneficiary the least annually. In contrast, only 8 percent of CSRs provided this response for question 1 on the low-utilization beneficiary. This difference in the percentage of calls for which an answer was provided is likely due to the added complexity of comparing PDPs and calculating the annual cost for a beneficiary using eight drugs versus a beneficiary using three drugs. However, reliance on at least five drugs is common in the Medicare population. Question 3 regarding the low-income subsidy had the highest “no answer provided” rate—35 percent. Of the CSRs that did not provide an answer to this question, almost all stated that they did not know the subsidy amount the beneficiary would receive. Because they did not recognize that beneficiaries with both Medicare and Medicaid automatically receive the full low-income subsidy, they could not effectively determine whether that subsidy would cover the sponsor’s PDP premiums. CSRs within the same call center sometimes provided inconsistent responses to our five questions. For example, within each of six different call centers, among CSRs who accurately identified the least costly plan for the low-utilization beneficiary (question 1), some CSRs calculated an accurate annual cost, some calculated an inaccurate annual cost, and others stated that they could not calculate an annual cost. In response to question 2 regarding the high-utilization beneficiary, different CSRs within five call centers identified each of their sponsor’s PDPs as the least costly. 
In addition, in response to questions 1 and 2, CSRs at three call centers told us that it was against the sponsor’s policies to identify any of their plans as having the lowest annual cost. However, other CSRs at each of these call centers did not cite this policy and did identify a plan as having the lowest annual cost. In part, these inconsistencies were due to differences in CSRs’ knowledge about their sponsor’s plans. For example, CSRs’ varying knowledge related to the low-income subsidy question (question 3) produced contradictory responses. Within each of the 10 sponsor call centers, different CSRs answered accurately, inaccurately, or stated that they did not know or could not answer the question. When asked about the options for a beneficiary using nonformulary drugs (question 4), different CSRs within each of 6 sponsor call centers stated that a beneficiary could either switch to a covered drug or apply for an exception, stated only that the beneficiary could switch to a covered drug, stated only that the beneficiary could apply for an exception, or stated neither possibility. Among CSRs that stated neither possibility, the specific responses differed. For example, at 1 of the above call centers, although five CSRs answered the question accurately, one erroneously stated that the beneficiary’s only option was to pay full price for nonformulary drugs, and another erroneously stated that any drugs not covered by the PDP would be covered under Medicare Part B. In answering question 5 on utilization management tools, different CSRs within the same call center provided varying descriptions of the utilization management tools PDPs use. For example, although four CSRs within one call center provided accurate descriptions of at least two tools, three other CSRs within this call center each provided a different, and inaccurate, description of utilization management tools. 
At another call center, two CSRs stated that they could not describe any tools without knowing the specific drugs the beneficiary was taking—even though eight other CSRs at that call center were able to accurately describe at least one tool without knowing the beneficiary’s drugs.

Our calls to 10 of the largest PDP sponsors’ call centers show that Medicare beneficiaries face challenges in obtaining the information needed to make informed choices about the PDP that best meets their needs. Call center CSRs are expected to provide answers to drug benefit questions and comparative information about their sponsors’ PDP offerings. Yet we received accurate and complete responses to only about one-third of our calls. In addition, responses to the same question varied widely, both across and within call centers. Sponsor call centers’ poor performance on our five questions raises questions about whether the information they provide will lead beneficiaries to choose a PDP that costs them more than expected or has coverage that is different than expected. Rather than consider PDP options solely on the basis of the call centers’ information, callers may benefit from consulting other information sources available on Medicare Part D when seeking to understand and compare PDP options.

CMS reviewed a draft of this report and provided written comments, which appear in appendix I. In its comments, CMS characterized our analysis as based on inaccurate, incomplete, and subjective methods that limit the report’s relevance and validity. However, CMS went on to say that despite its view on the study’s limitations, GAO is right to be concerned about whether beneficiaries are getting effective service from plan call centers. CMS asserted that our questions did not reflect the usual questions received by PDP sponsor call centers.
As noted in the draft report, we selected topics that were addressed in the Frequently Asked Questions section of the Medicare.gov Web site and regarded by policy experts and beneficiary advocates as important to making an informed plan choice. Furthermore, at a May 2006 meeting with CMS officials, the agency’s Deputy Administrator stated that CSRs should be able to accurately answer all of the specific questions we posed during the study. CMS also stated that we asked for information that CSRs are not required to provide. Specifically, for questions 1 and 2 on PDP comparisons for low and high-utilization beneficiaries, CMS stated that it does not require sponsor call centers to provide information on the annual costs of their PDPs. However, while not necessarily required, agency officials had indicated that the information we sought from CSRs was within the scope of plan sponsor customer service efforts. In a discussion held before we conducted our March calls, CMS officials told us that the agency had not established any requirements regarding the specific types of information plan CSRs must be able to provide, but that it was reasonable to expect CSRs to give callers accurate information on the topics we included in our review. In addition, as noted in the draft report, some call centers were relatively successful in providing accurate and complete answers to questions 1 and 2, indicating that call center CSRs can handle such questions appropriately. One call center’s CSRs answered the question accurately and completely in 88 percent of the calls for the low-utilization beneficiary, and one call center’s CSRs responded correctly in 81 percent of the calls for the high-utilization beneficiary. In addition, we found that 7 of the 10 PDP sponsors had cost calculators on their Web sites that could have been used to answer these questions. 
CMS commented that, for CSRs to be counted as providing a complete response to questions 1 and 2 on PDP comparisons, we expected them to recommend a specific plan to the caller, a practice that often constitutes “steering,” which is prohibited under Medicare marketing guidance. As noted in the draft report, our callers identified themselves as family members wishing to assist beneficiaries in choosing a drug plan. Providing assistance to beneficiaries—which is encouraged by CMS—generally consists of learning the characteristics of various PDPs and assessing their relative merits given the potential enrollee’s needs. This is clearly allowed in CMS’s Marketing Guidelines, which distinguish between assistance based on objective information and steering to a drug plan for financial gain. CMS also took issue with how we counted a specific CSR response to questions 1 and 2. The agency incorrectly claimed that a CSR’s referral to 1-800-MEDICARE was categorized as an incomplete response. As noted in the draft report, we categorized responses as incomplete if the CSR accurately named the lowest annual cost plan, but either inaccurately calculated or could not provide the annual cost. If the CSR did not answer the question and instead referred the caller to 1-800-MEDICARE for information on PDPs, we classified the response as “no answer provided.” CMS stated that the wording of question 3 on the low-income subsidy was inaccurate and therefore misleading. This question specifies that the beneficiary automatically qualifies for extra help because Medicaid pays part of her Medicare premiums. According to CMS, the wording of question 3 is incorrect because only Medicare pays the drug premium for low-income beneficiaries and Medicaid would fully (not partly) pay the Part B premium. However, CMS’s comment conflicts with the information we obtained from its Medicare.gov Web site in developing the wording and answer for this question.
Using the Web-based PDP finder tool on this Web site, the user can select one of several options specifying why the beneficiary qualified for extra help. We selected the option specifying that the beneficiary automatically qualified for extra help because they receive “help from State paying Medicare premiums.” We agree that only Medicare, and not Medicaid, pays the Medicare Part D premium for low-income beneficiaries and that Medicaid fully (not partly) pays the Part B premium. Because the Part B premium is only one of the beneficiary’s Medicare premiums, however, Medicaid would still pay part of the beneficiary’s Medicare premiums, as our question stated. CMS also stated that, for certain questions, many reasonable answers were not counted as correct. The agency cited our question regarding a beneficiary’s options should he or she be prescribed a nonformulary drug, and asserted that our criteria for a correct response—switching to a covered drug or asking for an exception—were too limited. The agency stated that other reasonable answers should have been counted as correct because we conducted our calls at a time when all plans covered all Part D drugs. We obtained the answer to this question from a script that CMS approved for use by CSRs operating its 1-800-MEDICARE help line. In addition to the two options we used as criteria for an accurate and complete answer, the script mentioned that PDPs are required to provide beneficiaries with temporary transitional coverage (generally for 30 days after enrollment) of drugs not on the PDP’s formulary. However, according to CMS, the purpose of this temporary coverage is to provide beneficiaries with sufficient time to switch to another drug or to request an exception to the formulary. Therefore, in specifying our criteria for an accurate and complete answer, we chose to include only the two options that CMS sees as longer-term solutions for the beneficiary.
CMS stated that we did not examine certain features of the support services that plan sponsors’ call centers are required to provide, such as hours of operation, wait times, disconnection rates, and language services. It also noted requirements that plans report a range of performance measures, such as beneficiary complaint rates and timeliness of exceptions and appeals decisions. As noted in the draft report, the scope of our review was limited to the accuracy and completeness of information disseminated to the public by PDP sponsors’ call centers—a feature of plan customer service for which CMS has established no performance requirements. Finally, CMS believes that, as written, our study provides little practical guidance of value in improving the drug benefit and that our conclusion—that callers may benefit from consulting other information sources available on Medicare Part D when seeking to understand and compare PDP options—is obvious. In quoting our conclusion, CMS omitted the key part of the paragraph preceding the quoted phrase where we state that “sponsor call centers’ poor performance on our five questions raises questions about whether the information they provide will lead beneficiaries to choose a PDP that costs them more than expected or has coverage that is different than expected. . . .” We continue to believe that plan sponsors should be accountable for the accuracy of their information and make maintaining effective call centers a priority. CMS also provided us with detailed, technical comments, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. 
This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (312) 220-7600 or aronovitzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix II. In addition to the contact named above, Rosamond Katz, Assistant Director; Manuel Buentello; Jennifer DeYoung; and Joanna L. Hiatt made major contributions to this report. Other contributors include Lori D. Achman, Diana B. Blumenfeld, Gerardine Brennan, Laura Brogan, Lisa L. Fisher, M. Peter Juang, Martha R.W. Kelly, Ba Lin, and Michaela M. Monaghan.

The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) established a voluntary outpatient prescription drug benefit, known as Medicare Part D. Private sponsors have contracted with the Centers for Medicare & Medicaid Services (CMS) to provide this benefit and are offering over 1,400 stand-alone prescription drug plans (PDP). Depending on where they live, beneficiaries typically have a choice of 40 to 50 PDPs, which vary in cost and coverage. MMA required each PDP sponsor to staff a toll-free call center, which serves as a key source of the information that beneficiaries need to make informed choices among PDPs. GAO examined (1) whether PDP sponsors provide prompt, courteous, and helpful service to Medicare beneficiaries and others and (2) the extent to which PDP sponsor call centers provide accurate and complete information to Medicare beneficiaries and other callers. To address these objectives, we made 900 calls to 10 of the largest PDP sponsor call centers during March 2006, posing one of five questions about their Part D plans during each call. 
We tracked the amount of time it took to reach a customer service representative (CSR), the number of calls that did not reach a CSR, and the appropriateness and clarity of CSRs' language. We developed criteria for determining accurate and complete responses based on CMS information. Call center service was generally prompt and courteous, and many CSRs offered helpful suggestions and information. GAO reached a representative in less than 1 minute for 46 percent of the calls CSRs fielded and in less than 5 minutes for 96 percent of the calls fielded. While GAO did not reach CSRs for 4 percent of the calls it placed, mainly because of disconnections, GAO found that 98 percent of CSRs with whom GAO spoke were easy to understand, polite, and professional. In addition, many CSRs provided helpful suggestions related to GAO's questions, such as details about a mail-order option to obtain drugs or lower-cost drugs. However, CSRs at 10 of the largest PDP sponsor call centers did not consistently provide accurate and complete responses to GAO's five questions, which GAO developed using information from CMS and other sources. GAO obtained accurate and complete responses to about one-third of the 864 calls for which GAO reached a CSR. The overall accuracy and completeness rate for each call center ranged from 20 to 60 percent. CSRs were unable to answer 15 percent of the questions posed, primarily those related to plan costs. Furthermore, CSRs within the same call center sometimes provided inconsistent answers. For example, in response to questions about PDP cost comparisons for specified sets of drugs, CSRs at 3 call centers told GAO that it was against the sponsors' policies to identify any of their plans as lowest cost. However, other CSRs at each of these call centers did not cite this policy and did identify a plan as lowest cost. 
In commenting on a draft of this report, CMS criticized the analysis as based on inaccurate, incomplete, and subjective methods that limit the report's relevance and validity. GAO maintains that its methods are sound and its findings are accurate. CMS officials told GAO at a May 2006 meeting that CSRs should have been able to accurately answer the questions GAO posed.
Of the 36 countries with alternative filing systems, only Denmark and Sweden have tax agency reconciliation systems. The remaining 34 countries have final withholding filing systems. There are several potential barriers to adopting a final withholding system in the United States. Specifically, before a final withholding system could be instituted in the United States, the law would have to be changed to require employers to calculate employees’ tax liability and adjust employees’ last paychecks so that total yearly withholdings would equal employees’ tax liability. Also, unlike many countries with final withholding systems, the United States tax system does not exempt or limit taxes on interest and dividend income, nor does it require married couples to file separately. (Appendix I lists these 34 countries and describes how final withholding systems generally operate in those countries.) A tax agency reconciliation system would be easier to implement in the United States because it would not require statutory changes. In general, under a tax agency reconciliation type system, the tax agency is to calculate the tax liability on the basis of information returns or income reports received from payers, such as wage reports prepared by employers and interest income reports prepared by financial institutions. The tax agency then is to send the taxpayer a printed tax return or reconciliation statement. Upon receipt of the tax return, the taxpayer is to review and correct its contents, add information if any is missing, and return the completed return to the tax agency. After the tax agency receives and reviews the tax return, it is to send the taxpayer either a refund or tax bill. In 1994, Denmark prepared tax returns for 85 percent of its 4.5 million taxpayers, and Sweden prepared returns for 74 percent of its 7.3 million taxpayers. In 1987, in response to a provision in the Tax Reform Act of 1986, IRS studied the feasibility of a tax agency reconciliation filing system. 
IRS estimated that 55 million individual taxpayers would be eligible to use such an alternative filing system. This estimate included all taxpayers who were filing Form 1040EZ, most taxpayers who were filing Form 1040A, and a few who were filing Form 1040. Under the system IRS studied, employers and payers would be required to submit wage statements (Form W-2) and information returns (Form 1099) by the end of January, instead of the end of February as is now required. IRS would spend the next 4 to 6 weeks processing these documents. Some taxpayers would receive their returns in early March, but most taxpayers would start receiving theirs in early April. IRS concluded that the tax agency reconciliation filing system it studied was not feasible primarily because it would be very difficult to receive, verify, and post over 900 million wage and information documents in time to generate tax returns. IRS determined that checking all of the documents for accuracy and correcting them in time to generate returns was beyond its capabilities. Our objectives in describing the potential benefits and impediments of a voluntary tax agency reconciliation type filing system were to (1) estimate how many taxpayers would not have to prepare returns under this kind of system, (2) identify the operational characteristics such a system might have, (3) identify the pros and cons of such a system to taxpayers and IRS, and (4) identify any major impediments to or costs in establishing such a system under the current federal tax laws. It is important to note that we focused our work on the impact that an alternative filing system might have for individual taxpayers and IRS. 
We recognize that establishing such a system would affect other stakeholders such as (1) some employers and financial institutions that would probably incur additional costs to modify existing systems or establish new systems for information reporting and (2) tax preparers who might lose revenue from reduced demand for their services or might have to change the services they offer to meet any new type of demand from taxpayers. However, we did not attempt to quantify the benefits and costs to them of implementing such a system. Rather, we viewed the approach to considering the feasibility and desirability of devising an alternative filing system as encompassing two levels of effort: first, an initial look using readily available data to determine whether additional research by IRS might be warranted and, second, a more detailed effort that might involve gathering and analyzing additional data, including specific impacts on third-party stakeholders. While we restricted our effort to the first of these two levels, we did make a presentation on how a tax agency reconciliation system would work to IRS’ Information Reporting Program Advisory Committee, which is composed of representatives from third-party stakeholders, such as financial institutions, employers, and other payers. We also discussed a tax agency reconciliation system with officials of the American Institute of Certified Public Accountants and H&R Block, Inc., which is a national tax return preparation company. To estimate the number of taxpayers that would be eligible for a tax agency reconciliation system, we used IRS’ tax year 1992 Statistics of Income (SOI) file to extract records that met the income and deduction criteria we established for a hypothetical tax agency reconciliation system. These criteria were that taxpayers would only have income that was reported on information returns and that taxpayers would not itemize deductions. 
To identify the operational characteristics a tax agency reconciliation type system might have, we reviewed IRS’ 1987 return-free filing study. We also interviewed officials at the Department of the Treasury and IRS to determine what administrative changes would have to be made to the federal tax system to have a voluntary tax agency reconciliation type system. We also reviewed the literature on how the tax agency reconciliation systems worked in Denmark and Sweden. We compared information on the operations of tax systems in these countries with the operations of our federal income tax system to determine how they differed. We also interviewed representatives of Denmark’s and Sweden’s embassies and local consulates about how their tax agency reconciliation filing systems worked. To determine potential pros and cons of a tax agency reconciliation type system for taxpayers, we reviewed the return preparation tasks IRS published in its 1994 individual income tax booklets. We compared these tasks with the type of tasks taxpayers would have under a tax agency reconciliation type system. We also reviewed literature on costs to taxpayers to have paid preparers complete their tax returns. To determine any potential benefits and costs to IRS under a tax agency reconciliation system, we applied IRS’ methodology for estimating its cost to process tax returns to the filing population eligible for tax agency reconciliation to estimate the cost to process documents under such a system. We also used IRS’ costing methodology to estimate the cost to process tax returns under the current system for the population eligible for tax agency reconciliation. To identify the impediments to establishing a tax agency reconciliation type system, we reviewed IRS’ procedures for processing information returns and obtained IRS officials’ views on potential impediments. 
To determine the effect of a tax agency reconciliation system on taxpayers’ ability to file state income tax returns, we discussed a hypothetical system with representatives of the Federation of Tax Administrators and state tax officials who attended the federation’s national conference held in Cleveland, Ohio, in June 1995. We also reviewed an IRS document on taxpayers’ opinion of a tax agency reconciliation system and discussed the conclusions with IRS officials. Our work was done from November 1994 through April 1996 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue, and on August 19, 1996, we received written comments from the Deputy Commissioner. His comments are discussed on pages 21 to 22, and a copy of the comments appears in appendix III. We also requested and received comments on a draft of this report from the American Institute of Certified Public Accountants and IRS’ Information Reporting Program Advisory Committee. These comments are discussed on pages 22 to 24. To determine how many taxpayers potentially could be covered by a tax agency reconciliation system, we used income and deduction criteria that would maximize the number of taxpayers that could be included in the system, while minimizing administrative changes. Taxpayers who met these criteria included those who (1) had taxable income only from wages, interest, dividends, pensions, and unemployment compensation; (2) did not itemize deductions and, instead, took the standard deduction; and (3) did not take any tax credits except the earned income tax credit. We estimated that about 51 million taxpayers who filed tax year 1992 returns, which was the latest year that data were available to make our estimate, met these conditions. The 51 million taxpayers accounted for about 45 percent of the 113.6 million taxpayers who filed tax year 1992 returns and had about 14 percent of the total individual income tax liability reported that year. 
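The eligibility screen just described can be expressed as a simple record filter. The sketch below is only illustrative; the field names (income, itemizes, credits) are hypothetical rather than actual SOI file codes.

```python
# Hypothetical sketch of the eligibility screen for a tax agency
# reconciliation system; field names are illustrative, not actual SOI codes.

# Income types that are reported to IRS on information returns.
COVERED_INCOME = {"wages", "interest", "dividends", "pensions", "unemployment"}
ALLOWED_CREDITS = {"eitc"}  # earned income tax credit only

def eligible_for_reconciliation(record):
    """Return True if a taxpayer record has income only from covered
    types, takes the standard deduction, and claims no credit other
    than the earned income tax credit."""
    income_types = {t for t, amount in record["income"].items() if amount > 0}
    return (income_types <= COVERED_INCOME
            and not record["itemizes"]
            and set(record["credits"]) <= ALLOWED_CREDITS)

sample = {"income": {"wages": 30000, "interest": 120},
          "itemizes": False, "credits": ["eitc"]}
print(eligible_for_reconciliation(sample))  # True
```

A record with, say, rental income or itemized deductions would fail the subset tests and be excluded, mirroring the exclusions the report describes.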
Also, the majority of these taxpayers, about 28.7 million or 56 percent, had taxable income from wages only. We limited eligibility for the system to taxpayers with taxable income from wages, interest, dividends, pensions, and unemployment compensation because IRS receives information returns on these income types, which could be used to calculate taxpayers’ tax liabilities. Taxpayers with other types of income, such as rents, royalties, capital gains, and self-employment (i.e., nonemployee compensation), were excluded from eligibility because the information returns with these types of income showed gross income amounts and needed to be reduced by taxpayers’ costs to determine the net taxable amounts. IRS does not receive information returns on taxpayers’ costs to produce income. Similarly, taxpayers who itemized deductions were excluded from the eligible population because some itemized deductions, such as medical expenses and charitable contributions, are not reported on information returns. Regarding tax credits, the earned income tax credit was the only individual tax credit that could be calculated with the income and entity information that would be available to IRS. The following sections describe one way a tax agency reconciliation system might work. This version of a tax agency reconciliation system would consist of four operational characteristics. Taxpayer information needed by IRS: To calculate taxpayer tax liability and the earned income tax credit, if applicable, some of the information IRS would need on each taxpayer would include the following: name and address, Social Security Number (SSN), name and SSN of spouse, name and SSN of dependents and qualifying children, relationship of dependents and qualifying children to taxpayer, and number of months dependents and qualifying children lived with taxpayer during the year. Taxpayers are currently required to report these data on their tax returns. 
While IRS records would contain taxpayer entity information data from prior year returns, IRS would need to develop a process or form to collect this information for the current year or to confirm or change its prior year’s records. IRS could use the taxpayer identification information part of the tax return package it sends to taxpayers at the beginning of the tax year to confirm the necessary taxpayer information. Taxpayers who volunteered to be covered by tax agency reconciliation filing could send the necessary information to IRS at the start of the tax filing season, perhaps on a form developed for that purpose. As an alternative to having all taxpayers mail a form to IRS, the system could be developed so that only taxpayers whose prior year’s information was in error or needed updating would make corrections and send them to IRS. Other taxpayers who volunteered to be covered by a tax agency reconciliation system could confirm the information to IRS via telephone. IRS preparation of tax return: IRS would need to process the taxpayer information and enter the data onto the taxpayer’s master file record. IRS would associate all the information return data it received from employers and other payers with the taxpayer information. Using these data, IRS’ computers would calculate the taxpayer’s tax liability and, if eligible, the earned income tax credit. A computer-generated tax return and a list of information returns used to create the tax return would be produced and sent to the taxpayer for verification. IRS could either send the refund to the taxpayer at the same time the return is sent or it could wait until the taxpayer verified the return information. Factors for consideration in deciding when to send the refund include the risk that the taxpayer would accept a refund without verifying that the return information was accurate and the length of time it would take IRS to send the refund after the taxpayer verified the return information. 
IRS could develop a test to determine whether the risk involved in sending the refund with the return would be acceptable. The tax return would serve as a tax bill if the taxpayer’s income tax withholdings did not cover the taxpayer’s tax liability. About 43.3 million or 85 percent of the 51 million taxpayers who met the criteria for eligibility for a tax agency reconciliation system either received refunds or owed no taxes in 1992. The remaining 7.7 million taxpayers owed an average of $253 in additional taxes. To facilitate taxpayer communication by telephone, IRS could also assign personal identification numbers to each taxpayer that would serve as their electronic signature, which is essentially what IRS currently does under its TeleFile program. Taxpayer review of tax return: When taxpayers received the computer-generated tax return, they would need to compare the information on the form to their records (e.g., information returns). Taxpayers who agreed with the return information could notify IRS of their acceptance, possibly by telephone, using their personal identification number as their electronic signature. Taxpayers with changes could either make corrections to the tax return, sign the return, and send it back to IRS or make corrections via telephone, using their personal identification number. Using IRS data, we estimated that about 275,000 of the possible 51 million computer-generated tax returns would be likely to be questioned by taxpayers because of erroneous information return data. (See app. II for information on how we developed this estimate.) IRS corrections to tax returns: When IRS receives the results of a taxpayer’s review of the tax return data (either via the return itself or by telephone), it would process any corrections and, if necessary because of the corrections, send the taxpayer either a revised refund or a revised tax bill. 
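The reconciliation step itself (summing income from information returns, computing the liability, and comparing it with withholdings to produce a refund or a bill) can be sketched as follows. This is a minimal illustration: the flat-rate tax function and the $6,000 deduction are stand-in assumptions, not the actual tax tables or earned income tax credit rules.

```python
# Minimal sketch of the reconciliation step. The tax computation is a
# stand-in; the real system would apply the actual tax tables and the
# earned income tax credit rules.

def reconcile(info_returns, withheld, tax_of):
    """Sum income reported on information returns, compute the tax
    liability, and return the balance: positive means a refund is due,
    negative means the taxpayer owes a bill."""
    total_income = sum(r["amount"] for r in info_returns)
    return withheld - tax_of(total_income)

# Illustrative 15 percent flat rate after a hypothetical $6,000
# standard deduction (not actual 1992 tax law).
flat_tax = lambda income: max(0, income - 6000) * 0.15

returns = [{"type": "W-2", "amount": 28000},
           {"type": "1099-INT", "amount": 400}]
balance = reconcile(returns, withheld=3500, tax_of=flat_tax)
print(f"refund due: ${balance:.2f}")  # refund due: $140.00
```

If withholdings fell short of the computed liability, the same balance would simply come out negative and the generated return would serve as a tax bill, as described above.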
Under a tax agency reconciliation system, taxpayers could save millions of hours in tax return preparation time and millions of dollars in paid preparer fees. A tax agency reconciliation system would also benefit IRS by helping to achieve its business vision goals and reducing its returns processing and compliance costs. However, while tax preparation burden could be reduced for individual taxpayers, others now involved in the process such as tax preparation firms, financial institutions, and other payers could be negatively affected. We did not identify a readily available basis for estimating the time or assistance required by taxpayers to submit taxpayer information to IRS or review the IRS-generated returns. On the basis of IRS data, we estimated that the 51 million taxpayers who could be covered by a tax agency reconciliation system spend about 262 million hours collectively on tax return preparation tasks. Breaking the data into task groupings, we estimated that these taxpayers could reduce time spent on certain return preparation tasks by as much as 155 million hours. For example, taxpayers would be relieved of the requirement of preparing tax returns and the related burden of learning how to do so. However, taxpayers would still have to complete the taxpayer information form and send it to IRS. Similarly, taxpayers would still need to keep records on their income to be able to verify that IRS accurately computed their income tax liability. Also, taxpayers would have to notify IRS of their agreement or disagreement with IRS’ computer-generated tax returns. Appendix II provides a detailed explanation of how we arrived at our estimates. Under a tax agency reconciliation system, taxpayers who use paid preparers could be free of the task of finding and paying for their services. We estimated that 16.6 million or almost one-third of the 51 million taxpayers used paid preparers and could save millions of dollars in paid preparer fees. 
The tax agency reconciliation system could help IRS achieve one of its long-term business vision goals, which is to reduce the amount of paper documents it has to process. Under this system, IRS would have to process the taxpayer information documents it receives, but this would require less paper than some tax returns. Currently, IRS envisions reducing the amount of paper documents it processes by increasing electronic filing of tax returns. Some taxpayers can file electronically by telephone through IRS’ TeleFile program, but most electronic filing is done through a tax return preparer or an electronic return transmitter. Taxpayers generally pay from $15 to $40 for such services. Electronic filing has several benefits for taxpayers, including ensuring that (1) the returns are mathematically accurate and (2) information on the returns has been accurately posted to the taxpayers’ accounts in IRS’ records. These same benefits would be available to taxpayers under a tax agency reconciliation system. And, taxpayers would get the benefits of electronic filing without incurring the costs, since under a tax agency reconciliation system IRS would be electronically filing returns for taxpayers. Another benefit to IRS of a tax agency reconciliation system is that the combined cost to process taxpayer information forms and computer-generated tax returns should be less than the cost to process taxpayer-submitted tax returns. Using IRS data, we estimated that it would cost IRS about $182.3 million to process tax returns under the current system for the 51 million taxpayers who could be covered by a tax agency reconciliation system. In comparison, we estimated that the cost associated with processing taxpayer information forms, generating tax returns, processing tax payments, and handling taxpayer inquiries under an alternative system could be about $160 million. This would result in an approximate savings of $22 million for IRS. 
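The processing-cost comparison just given reduces to simple arithmetic; the figures below are the report's estimates, expressed in millions of dollars.

```python
# Checking the processing-cost comparison (figures in millions of
# dollars, taken from the estimates above).
current_system_cost = 182.3   # processing returns for the 51 million taxpayers
reconciliation_cost = 160.0   # info forms, generated returns, payments, inquiries
savings = current_system_cost - reconciliation_cost
print(f"approximate processing savings: ${savings:.0f} million")
```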
Appendix II gives details on how we made our cost estimates. IRS would also benefit from a tax agency reconciliation system because its compliance costs could be reduced. Savings would result from not subjecting the 51 million taxpayers to IRS’ underreporter program. This computerized compliance program matches income shown on information returns with income that taxpayers report on their tax returns to determine whether taxpayers reported all their income. When discrepancies are found, IRS contacts taxpayers to resolve the issue and assess additional taxes, if required. Under a tax agency reconciliation system, document-matching would not be necessary because IRS would prepare returns and assess taxes on the basis of the information returns and taxpayer-supplied entity data it has in its computers. As a result, compliance resources that would have been spent on working potential underreporter cases that may have resulted from taxpayer-prepared returns would be saved. We estimate that about 858,000 of the 51 million tax agency reconciliation eligible taxpayers would have been pursued by IRS under its underreporter program at a cost of $17.61 per taxpayer. Therefore, under a tax agency reconciliation system, the underreporter program cost savings would be about $15 million. Thus, the net processing and compliance cost for such a tax agency reconciliation system would be about $145 million, as compared with about $182 million under the current system for an approximate savings of $37 million. Appendix II shows how we developed these estimates. As noted above, about one-third of the taxpayers who met the criteria we identified for eligibility to file under a tax agency reconciliation system used tax preparers to file their tax returns in 1992. A tax agency reconciliation filing system could eliminate the need for such assistance, with the consequence that tax preparers could lose a substantial part of their business. 
On the other hand, since many of the filers who are eligible for such a system file relatively simple returns, their need for assistance to do so may suggest that under a tax agency reconciliation system, they might also need assistance to send IRS the necessary taxpayer information or review the IRS-generated tax return. We had no readily available basis to estimate the extent to which this business substitution might occur. Nonetheless, the impact of a tax agency reconciliation system on tax preparers would need to be considered in deciding whether to adopt such a system. Although IRS’ business vision already contemplates improving the filing and processing of information returns through increased use of electronic filing, the introduction of a tax agency reconciliation system could put additional burdens on employers, some financial institutions, or others who would be obligated to file information returns more quickly or in electronic form. We had no readily available basis to estimate the costs or other difficulties with such an additional reporting burden. Nonetheless, as with the tax preparer industry, the impact on these third-party participants in the tax system would need to be considered during deliberations on the feasibility of an alternative filing system. A major impediment to a tax agency reconciliation type system is that IRS’ present time frame for processing information returns is too long for tax returns to be verified by taxpayers by April 15, which is the filing date for federal returns. However, IRS has several initiatives that could eventually allow it to process information returns sooner. Further, taxpayers in 11 states would need information from their federal returns before April 15 so that they could file their state tax returns on time. 
However, even if IRS could produce tax returns prior to April 15, some taxpayers may be reluctant to participate in this voluntary program because they distrust IRS to accurately prepare their returns or because they would not be able to get access to refunds early in the tax filing season. A major stumbling block to a tax agency reconciliation system is the amount of time it presently takes IRS to process information returns. Currently, many information returns, including forms W-2, are not processed until August, or 6 months after the due date for filing them. Currently, IRS receives over 1 billion information returns each year. About 200 million are forms W-2 that employers are required to submit to SSA by the end of February. According to an SSA official, about 35 percent of the forms W-2 are submitted on paper and the rest are submitted on magnetic media. After SSA processes the forms W-2, they are sent weekly beginning in March to IRS on computer tapes. The remaining 800 million information returns are primarily the Form 1099 series of information returns, such as those reporting interest income, which are due to taxpayers at the end of January and to IRS at the end of February. Most of the 800 million information returns are sent directly to IRS’ Martinsburg Computing Center on magnetic media. Payers send their paper Form 1099 information returns to IRS service centers for processing. About 7 percent of the information returns are paper. Although information returns are due by the end of February, certain factors tend to lengthen the time it takes to process them. These factors include the time it takes to process paper information returns, payer extensions for filing returns, and payee corrections to information returns. However, according to IRS and SSA officials, the most significant factor for processing delays is that about 10 percent of the information returns must be sent back to payers because they cannot be processed by either IRS or SSA. 
Various reasons contribute to this problem, such as incomplete data and incorrect data format. When information returns are sent back to a payer for replacement, IRS allows a minimum of 45 days to correct and send back the data. According to IRS data, over 80 percent of Form 1099 information returns and about 65 percent of the forms W-2 are processed and validated by the end of June. It is not until the end of July that IRS has usually processed 95 percent of the information returns and not until the end of August that 93 percent of the forms W-2 have usually been processed. Given the current April 15 filing due date, the key to making a workable tax agency reconciliation system would be receiving and processing information returns sooner. As part of its long-term business vision, IRS plans to have the capability to process information returns sooner so it can match information return data to tax returns to identify unreported income sooner. Its business vision does not include using information returns for a tax agency reconciliation system, but it does include more up-front matching of information returns to tax returns before refunds are issued. IRS currently has a couple of initiatives under way that should speed up information returns processing. These initiatives are not specifically directed to having all information returns processed before April 15. One initiative is the Service Center Recognition/Image Processing System, which is a multimillion dollar system designed to process Form 1040EZ income tax returns and information returns by electronically scanning the document. While only about 7 percent of the information returns are filed on paper, these paper information returns are usually not processed until sometime after April 15. This system, if proven successful, should allow IRS to process paper information returns soon after the February 28 filing date. 
Also, IRS and SSA are involved in a multiagency project called the Simplified Tax and Wage Reporting System (STAWRS), which deals, in part, with the processing of Forms W-2. STAWRS has projects under way dealing with the electronic transmission of Form W-2 data and with employer validation of employees' SSNs via telephone and computer. The success of these projects could help improve the accuracy of Form W-2 data and speed up Form W-2 processing. However, STAWRS officials did not have estimates on when these projects would become operational. As part of its business vision, IRS has an electronic filing strategy that focuses primarily on getting taxpayers to file their tax returns either electronically or by touch-tone telephone. A smaller component of this strategy is an initiative to target large-volume information returns, such as Forms 1099 on interest and dividends, for electronic filing beginning in fiscal year 1997. IRS already receives some information returns electronically. For 1995, IRS estimated that it received about 42.8 million electronically filed nonwage information returns, and it estimated that 48.9 million would be filed in 1998. The estimated volume of electronically filed information returns is small compared with the 800 million nonwage information returns that are filed annually. IRS officials said these estimates could change depending on the number of payers who are willing to file electronically. Electronic filing of information returns could be one way to get information returns processed in time to be used for a tax agency reconciliation system. In its 1987 study on return-free filing, IRS concluded that such a system was not feasible when the study was made. IRS found that the amount of time it took to process and correct information returns data was the primary administrative obstacle to return-free filing.
However, the study noted that long-term technological improvements in IRS’ tax processing could result in such a system being feasible in the future. The study further indicated that IRS would consider a tax agency reconciliation filing concept in connection with its ongoing tax system redesign efforts. Since its 1987 study, IRS has not reexamined the operational characteristics of a tax agency reconciliation filing system as a potential alternative for filing tax returns, even though technological improvements, such as electronic filing, may make it possible to speed up information returns processing. One of the benefits of electronic filing is that returns, whether tax returns or information returns, can be processed faster and with fewer errors because IRS computer programs are designed to detect errors before its computers will accept electronically filed returns. However, it is unknown how many payers would file information returns electronically and whether electronically filed information returns would shorten the time frame needed to make a tax agency reconciliation system workable. Federal income tax return information is not used exclusively for federal tax purposes. A majority of states require their taxpayers to use information from their federal tax returns to calculate state taxes. Table 1 shows the tax base for the 50 states and the District of Columbia and the estimated number of taxpayers potentially eligible for tax agency reconciliation filing in each state. As shown in table 1, 36 states and the District of Columbia have income tax systems that are linked to a federal tax base. These states typically require taxpayers to use information from their federal income tax return as a starting point for state tax computation. Taxpayers in such states are usually required to report either their federal adjusted gross income, federal taxable income, or federal tax liability on their state tax returns. 
Under a tax agency reconciliation system, 45.4 million, or 89 percent, of the potential 51 million filers would not need their federal returns to calculate state income taxes. For example, the 29.6 million taxpayers who reside in states that use federal adjusted gross income as the tax base could calculate their state income tax from the information returns they receive from employers and other payers because the total income shown on the information returns would equal federal adjusted gross income. Also, 7.3 million taxpayers reside in states that are not dependent on the federal tax return for state income tax calculations, and 8.5 million taxpayers reside in states that do not assess a state income tax. The 5.6 million potential return-free filers who reside in the 11 states that use either federal taxable income or federal tax liability as the state income tax base might be less likely to volunteer to participate in the system if they did not receive their federal tax returns in time to meet state filing deadlines, which are typically April 15. Unless IRS is able to get returns to taxpayers before this date, these taxpayers would have to compute their federal taxable income or tax liability themselves or file their state tax returns late. If they filed their state returns late, they could be subject to state penalties for filing late returns.
Representatives of the paid preparer community also told us that many of their clients do not trust IRS to prepare accurate returns and would continue to rely on paid preparers for return preparation services under a tax agency reconciliation system. Taxpayers may also be reluctant to participate in a tax agency reconciliation system if IRS could not issue refunds as early as it does when taxpayers file their own returns. Some taxpayers currently receive refunds by late January. According to IRS data, about 57 million, or 72 percent, of the 93 million taxpayers who received refunds in 1995 were issued them by the end of April. Under a tax agency reconciliation system, IRS may not be able to issue refunds this early. There may be no easy way to get taxpayers to change their attitude about trusting IRS to accurately assess their taxes. However, taxpayers who volunteer to participate in a tax agency reconciliation system would have copies of their information returns to verify that IRS used the correct income to calculate their taxes. Also, over 2 million taxpayers participated in IRS' tax year 1995 TeleFile program, in which they relied on IRS to accurately calculate their taxes. On the other hand, one of IRS' selling points for using TeleFile was that taxpayers would receive their refunds sooner than if they filed paper returns. According to IRS, about 23 million taxpayers who filed tax year 1995 Forms 1040EZ were eligible to participate in the TeleFile program in 1996, which was the first year the program was available nationally. As with other alternative filing options that IRS has introduced, such as electronic filing and TeleFile, getting taxpayers to volunteer to participate in a tax agency reconciliation system may depend upon how well the system works. If IRS could produce timely and accurate tax returns, taxpayer acceptance could be enhanced.
As many as 51 million taxpayers, primarily wage earners, would not have to prepare tax returns under a tax agency reconciliation type system. Instead, IRS could prepare their returns for them on the basis of taxpayer-supplied information, such as filing status and dependents, which, along with information returns, would be used to produce tax returns. We estimated that under a tax agency reconciliation system, taxpayers could save up to 155 million hours collectively on tax return preparation tasks and millions of dollars in tax return preparer fees. Such a system would also benefit IRS by reducing its returns processing and compliance costs. An unknown, however, is the extent to which taxpayers would voluntarily participate should such a system be adopted in the United States. Key considerations in this regard would be taxpayers' perceptions of how efficiently, accurately, and fairly IRS could administer such a system. In addition to addressing these perceptions, IRS would need to design the system to minimize, to the extent possible, its own administrative problems and any additional burden imposed on other stakeholders, including employers and affected financial institutions, in providing information returns electronically and on a more timely basis. A major impediment to establishing a tax agency reconciliation system is the length of time it takes IRS to process information returns. IRS' current processing time frame would not allow it to provide taxpayers' federal returns before April 15, which is the due date for filing tax returns. IRS' 1987 study of return-free filing also noted this impediment as the major reason why a return-free system could not be implemented at that time. Technological advances since that study, such as electronic filing, may make it possible to have a tax agency reconciliation system in the future if more information returns were filed electronically.
A tax agency reconciliation type filing system could make it easier and cheaper for taxpayers to fulfill their tax return filing responsibilities. Because of these factors and the technological advances made since IRS’ 1987 return-free filing study, we recommend that the Commissioner of Internal Revenue reexamine the feasibility and desirability of designing and implementing a tax agency reconciliation system. The reexamination should include a determination of methods to increase trust among taxpayers in IRS’ ability to administer such a system fairly and accurately. It should also assess the added burdens and costs that such a system would have on employers, affected financial institutions, and other stakeholders and develop ways of mitigating these burdens and costs. Because of the uncertainty about stakeholder receptivity to such a system, we also recommend that, if the reexamination results in IRS initially determining that an agency reconciliation system may be feasible and desirable, the Commissioner expand the reexamination to include a limited pilot test. Such a test would provide IRS with useful data for addressing stakeholder concerns and demonstrating its ability to administer such a system fairly and accurately. If all of the improvements necessary to fully implement a tax agency reconciliation system are not feasible in the short term, it may still be possible to test the concept in one or more states that have no income tax. We requested comments on a draft of this report from the Commissioner of Internal Revenue, the American Institute of Certified Public Accountants, and IRS’ Information Reporting Program Advisory Committee. In written comments on a draft of this report (see app. III), the IRS Deputy Commissioner was supportive of exploring any proposal that would reduce taxpayer burden and the volume of paper that needs to be processed. 
He indicated that a reexamination of the feasibility and desirability of designing and implementing a tax agency reconciliation system could become part of the Tax Settlement Reengineering project, an IRS initiative that uses a structured methodology to examine IRS' business processes and ways to reduce paper processing. However, he noted that it would be difficult to do a complete cost-benefit analysis of a tax agency reconciliation system because much of the information needed to do the analysis, such as private sector costs, may not be available. We recognize the difficulty of obtaining complete information on private sector costs, but we believe that IRS should be able to do an adequate evaluation of the feasibility of a tax agency reconciliation system because it has experience in evaluating other alternative filing systems, such as TeleFile. The Deputy Commissioner noted that our report identified the Service Center Recognition/Image Processing System and the Simplified Tax and Wage Reporting System as steps that IRS is taking to accelerate information returns processing. However, he stated that the draft report did not describe how close these activities are to accelerating information returns processing and therefore did not accurately portray the impact of this obstacle on a tax agency reconciliation system. During the course of our work, we asked for, but IRS did not provide, estimates of when these initiatives are projected to be able to accelerate information returns processing. The Deputy Commissioner's letter is also silent in this respect. Finally, the Deputy Commissioner commented that the costs of a tax agency reconciliation system would be more than the costs to process electronically filed returns. We agree that the costs associated with the type of tax agency reconciliation system described in the report would be more than the costs to process the same number of returns filed electronically.
However, we estimated that only 8 million of the 51 million taxpayers who could have been included in an alternative filing system in tax year 1992 filed electronically. If IRS could get most taxpayers to file electronically, there may be no need for other alternative filing systems. In its written comments, the American Institute of Certified Public Accountants stated that it did not believe that IRS has the ability to implement a tax agency reconciliation system in the foreseeable future because of (1) the current uncertainty of IRS budget and staffing levels, (2) the inability of IRS to process and of payers to file complete and accurate information returns on a timely basis, (3) the unknown effect on voluntary compliance that could occur if taxpayers fail to report income to IRS from sources not covered by information returns, and (4) the potential for IRS to make more errors processing taxpayers' changes to IRS-prepared returns under an alternative system than it currently makes processing tax returns. We believe that the issues raised by the Institute are valid. Budget constraints could hamper IRS' ability to adopt a tax agency reconciliation system, but this should not prevent it from reexamining the concept. Similarly, if information returns processing cannot be accelerated in time to issue tax returns well before April 15, IRS would not be able to implement the type of tax agency reconciliation system described in the report unless federal and state laws are changed. We expect that information return filing and processing would be the key element in IRS' examination of the system. With regard to whether taxpayers who volunteer for the system may not let IRS know when they have income not covered by information returns, we agree that the reporting of income not subject to information returns is potentially problematic.
However, that problem exists today for paper returns and TeleFile, and we see no evidence it would be worse under a tax agency reconciliation system. As part of its examination of a tax agency reconciliation system, IRS may be able to gauge the extent of this type of noncompliance by evaluating a sample of TeleFile participants to determine whether they failed to report noninformation return income. As to whether IRS would be more error-prone in processing tax changes under a tax agency reconciliation system than in processing tax returns, we assume that IRS would have controls and procedures that would minimize errors under any system it develops. The Institute noted that a tax agency reconciliation system is too dependent on future IRS and SSA improvements and that it would be too costly for IRS to undertake such a system until after the successful implementation of these future improvements. The Institute stated that paper processing may be more efficiently reduced by expanding opportunities for electronic filing and TeleFile, and it concluded that IRS is correctly focusing its attention on these initiatives. We agree that IRS should continue to reduce taxpayer burden and paper processing through electronic filing and TeleFile. Our recommendation is to have IRS reexamine a tax agency reconciliation system to determine its feasibility as a supplement to these voluntary alternative filing systems. In its written comments, the Information Reporting Program Advisory Committee raised issues similar to those raised by the American Institute of Certified Public Accountants on IRS' budget constraints, information returns filing and processing constraints, and potential decreases in voluntary taxpayer compliance levels. The Committee also stated that because many of the taxpayers who could be covered by a tax agency reconciliation system already use electronic filing and TeleFile, our estimates of cost savings are overstated.
Our cost estimates were based on the combined costs to process the Form 1040 series of returns, which included both paper and electronically filed returns. The costs of TeleFile returns were not considered because we used tax year 1992 return data, and TeleFile was not implemented nationwide until 1996 for tax year 1995 returns. According to IRS, about 2.8 million taxpayers used TeleFile in 1996. The Committee also pointed out that the report did not address the costs to the paid preparer community of a tax agency reconciliation system. These costs dealt with the loss of federal revenues due to a decrease in taxable income reported by paid preparers and the costs of unemployment insurance and public assistance payments if the paid preparer industry experiences a significant loss of jobs. These types of costs are difficult to quantify. However, our report acknowledges that paid preparers may see a decrease in business if most taxpayers volunteer to be covered by a tax agency reconciliation system. The report also states that the effect on paid preparers and payers would need to be considered before such a system is adopted. We would expect IRS to include an analysis of these costs in any examination of a tax agency reconciliation system. We will send copies of this report to the Ranking Minority Member of this Subcommittee, the Commissioner of Internal Revenue, and other interested parties. We also will make copies available to others upon request. The major contributors to this report are listed in appendix IV. If you have any questions, please call me at (202) 512-9044. This appendix describes some of the major characteristics of final withholding type filing systems found in other countries. It also discusses how final withholding works in other countries and how a United States final withholding system might work.
Although none of the countries with final withholding type filing systems are exactly alike, many share common characteristics, such as (1) withholding on wages, (2) exempting or limiting taxes on interest and dividend income, and (3) requiring each spouse in a marriage who elects final withholding to be taxed as an individual instead of jointly. Under a final withholding system, as long as the proper amount of taxes has been withheld from wages and other income sources, no tax return has to be filed. However, the countries using this system differ in when and how employers calculate the final withholding. For example, under the United Kingdom's cumulative final withholding system, the employer must calculate the employee's tax for a given pay period as well as the cumulative tax to date and then make any necessary adjustments. To determine how much tax employers need to withhold, the employee is to submit a form to the United Kingdom's tax agency showing basic factors affecting the taxpayer's tax status, such as the number of dependents and the amount of allowances the taxpayer is entitled to claim. From this information, the tax agency is to develop a code and give it to the employer, who then applies the code to tax tables to determine the amount of taxes to withhold. Tax agency auditors are to later verify whether employers withheld the correct amounts of taxes. More than three-fourths of the 22.4 million taxpayers eligible for final withholding in the United Kingdom did not have to file tax returns in 1994. Germany's final withholding system is similar to the United Kingdom's except that at the beginning of the calendar year the tax agency is to issue a certified wage card to each employee, listing basic factors affecting the employee's tax status. The employee is to present the wage card to the employer for use in determining the amount of taxes to withhold.
At the end of the year, the employer is to summarize the wage and withholding information, and if necessary, adjust the employee’s last paycheck so that the correct amount of taxes is withheld. Under Japan’s final withholding system, employees are to submit to the tax authorities, through their employer, statements providing exemption information, which include the names of dependents and other necessary particulars. Employers are to use the exemption information to withhold income tax on employment income according to tax tables based on variables, including the size of the income, number of exemptions, and periodic employment income. At the end of the year, the employer is to compare the taxes collected with the yearly tax amounts due and adjust the last paycheck so that the withholding equals the tax liability. Many countries with final withholding filing systems reduce the number of taxpayers required to file tax returns by either exempting, limiting, or taxing interest and dividend income at the source. Fourteen countries exempt all or a portion of interest, while another 14 countries tax interest with a flat amount at the source (e.g., tax on interest is withheld by the financial institution). Countries with final withholding filing systems may not allow married taxpayers to file joint returns. Twenty-two countries with a final withholding filing system specifically require married taxpayers to file separately. Other countries either require joint tax returns or give married couples that option. According to International Monetary Fund officials, administering a final withholding system with a “married filing joint” filing status would be very difficult if both spouses work. In that circumstance, each employer would need to take the income of the taxpayer’s spouse into consideration when calculating the final withholding amount. 
Under nearly all final withholding systems, including those of the United Kingdom, Germany, and Japan, taxpayers are required to file returns under some circumstances. For example, Chile requires employees to file tax returns if they have a second employer. In Luxembourg, where the total income of husband and wife is aggregated, a tax return must be prepared when both spouses work. Table I.1 describes the characteristics of the tax system for 34 countries with a final withholding filing system.

Table I.1: Tax System Information for 34 Countries With Final Withholding Systems [table not reproduced; for each country, the entries cover the required filing status for married taxpayers (e.g., married filing separate if the wife is employed, joint or separate filing), the treatment of interest income (e.g., a flat tax withheld at the source, with amounts above a threshold reconciled through a tax return), and the conditions under which no return is required (e.g., earned income from a single employer only, or income below a threshold)]

Before a final withholding system could be instituted in the United States, the law would have to be changed to require employers to calculate employees' tax liability and adjust employees' last paychecks so that total yearly withholdings would equal employees' tax liability.
Also, unlike many countries with final withholding systems, the United States tax system does not exempt or limit taxes on interest and dividend income, nor does it require married couples to file separately. These features are potential barriers to adopting a final withholding system in the United States. Without changes to address these features, taxpayers who could be covered by final withholding would be limited to those who had only wage income, which is the only income source generally subject to withholding. To determine how many taxpayers in the United States potentially could be covered by final withholding, we identified criteria for eligibility for a final withholding filing system. The criteria were identified to minimize tax law and administrative changes, limit the burden on employers and other payers, and maximize the number of taxpayers that could be included in the system. To meet these criteria:

- taxpayers would not be able to itemize deductions and, instead, would take the standard deduction;
- taxpayers could have only wage income, because other types of income are generally not subject to withholding;
- taxpayers could have only one employer, because taxpayers with more than one employer would increase the burden on employers, who would have to know the income received from other employers to withhold the correct amount of taxes;
- married couples in which both spouses had income were excluded, because employers would have to know the income of both spouses to withhold the correct amount of taxes; and
- taxpayers could not claim any credits, because employers would have to calculate the credits, which would impose additional recordkeeping burdens on them.

We estimated that about 18.5 million taxpayers in 1992 met these conditions and could be covered under this type of final withholding system.
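The eligibility criteria described above amount to a simple filter over taxpayer records. The following sketch shows one way such a filter might be expressed; the record fields are illustrative assumptions, not actual SOI data elements.

```python
# Hypothetical sketch of the final withholding eligibility screen.
# Field names are illustrative, not taken from IRS data files.
from dataclasses import dataclass


@dataclass
class TaxpayerRecord:
    itemizes_deductions: bool
    income_sources: frozenset        # e.g., {"wages"} or {"wages", "interest"}
    num_employers: int
    married_both_spouses_earn: bool
    claims_credits: bool


def eligible_for_final_withholding(t: TaxpayerRecord) -> bool:
    """Apply the five criteria from the text: standard deduction only,
    wage income only, a single employer, no dual-earner married couples,
    and no credits claimed."""
    return (not t.itemizes_deductions
            and t.income_sources == frozenset({"wages"})
            and t.num_employers == 1
            and not t.married_both_spouses_earn
            and not t.claims_credits)
```

Applying such a predicate to a representative sample of returns is, in effect, how a population estimate like the 18.5 million figure could be tallied.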
These 18.5 million taxpayers represented about 16 percent of the taxpayers who filed returns, and they accounted for about 4 percent of the total reported individual tax liability. One way a final withholding system might work would be to have IRS develop a new withholding form for taxpayers who volunteer for final withholding. Employees would submit the withholding form to their employers for withholding and tax calculation purposes. The new form would have to contain such taxpayer entity data as the taxpayer's name and address and the names and SSNs of the spouse and dependents. A new withholding form would have to be developed because the current withholding form, Form W-4, Employee Withholding Allowance Certificate, does not contain complete filing status information or information on the employee's spouse and dependents. Similar to current procedures, employers would use the information from the new withholding form to calculate the amount of income tax to be withheld each pay period. However, unlike current procedures, employers would enter their Employer Identification Numbers on the new withholding form and send it to IRS. IRS would receive the new withholding form and enter the information into its records. IRS could use this information for compliance purposes, such as verifying the taxpayer's and dependents' SSNs. For the last pay period of the year, the employer would have to calculate the employee's tax liability on the basis of the information provided on the employee withholding certificate. The employer would need to compare the tax liability with the amount withheld during the year and adjust the employee's last paycheck so that the total yearly withholdings equaled the tax liability. If during the year the withheld taxes exceeded the tax liability, the employee's net pay on the last paycheck would be higher than usual. However, if not enough taxes were withheld during the year, the employee's last paycheck would be smaller than usual.
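The year-end adjustment described above reduces to a single comparison: the employer trues up the final paycheck by the difference between the annual liability and the taxes already withheld. A minimal sketch, assuming the liability has already been computed from the withholding certificate and tax tables:

```python
# Minimal sketch of the employer's year-end final withholding
# adjustment described in the text. The annual liability is assumed
# to come from the employee's withholding certificate and the tax
# tables; it is an input here, not computed.
def final_paycheck_adjustment(withheld_to_date: float,
                              annual_liability: float) -> float:
    """Return the amount to withhold from the last paycheck.

    A positive result means additional tax is withheld (a smaller
    last paycheck); a negative result means the employee was
    over-withheld and the last paycheck is increased by that amount.
    """
    return annual_liability - withheld_to_date


# Example: $4,800 withheld during the year against a $4,500 liability
# means the last paycheck is increased by $300.
adjustment = final_paycheck_adjustment(4800.0, 4500.0)  # -300.0
```

The adjustment replaces the refund or balance-due reconciliation that a filed tax return would otherwise perform.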
Final withholding would reduce the time taxpayers spend preparing returns and possibly eliminate the cost of paying tax return preparers. Thus, such a system would reduce taxpayer burden. However, it would have a negative impact on paid preparers. On the other hand, IRS' returns processing costs would be reduced because it would not have to process tax returns. Final withholding would increase employers' burden and costs in complying with tax laws and regulations. For example, in addition to knowing about current withholding requirements and forms, employers would need to become familiar with separate withholding and reporting requirements for employees who volunteer for final withholding and those who do not. Unlike under the current employment tax requirements, employers would need to ensure that they accurately track withheld taxes so that they could make a year-end adjustment to make the withheld taxes equal to the taxpayer's tax liability. Otherwise, employers could be liable for penalties for incorrectly withholding taxes on employees. Employers and their representatives we talked to expressed concern over some aspects of final withholding. Some employers believed that final withholding would require revising the entire payroll process to accommodate year-end adjustments. Some employers told us that the current wage withholding system would need to be completely overhauled because it was not designed to calculate the final amount of taxes owed. Taxpayers would not have a federal tax return to use to file their state income tax returns. However, except for eligible taxpayers who reside in the eight states that use federal taxable income as their tax base, eligible taxpayers who are required to pay state taxes could calculate their state taxes from the income and withholding information contained on their withholding forms.
We estimated that about 2 million of the 18.5 million taxpayers who could be eligible for final withholding reside in the eight states whose tax systems are linked to federal taxable income. For these taxpayers to meet state tax requirements, they would have to determine their federal taxable income. However, in the absence of federal tax returns, officials from several of these states told us they would consider developing supplemental instructions and worksheets in their tax booklets that would allow taxpayers to calculate federal taxable income so that they could calculate their state taxes. This appendix describes how we derived the population and cost estimates for the tax agency reconciliation type filing system. We used IRS' 1992 Statistics of Income (SOI) data to estimate the number of taxpayers who would have met our eligibility criteria for the system and the amount of time they could save on tax return preparation tasks. We used IRS' Document 6746, Cost Estimate Reference for Service Center Returns Processing for Fiscal Year 1994, to develop cost estimates. For a hypothetical tax agency reconciliation system, we made the following estimates: (1) the number of taxpayers in tax year 1992 who could have been covered by the system, (2) the fiscal year 1994 costs to process tax returns for this population, (3) the costs to process taxpayer information forms, (4) the costs to produce computer-generated tax returns, (5) the costs to process remittances for taxpayers who would owe taxes, (6) the costs to handle taxpayer inquiries about their computer-generated tax returns, and (7) the potential compliance savings from the underreporter program. Taxpayers who could have been covered by this system included those who had income from wages, interest, dividends, pensions, and unemployment compensation and who claimed the standard deduction. We estimated that 51 million taxpayers would have met these eligibility criteria for a tax agency reconciliation system.
Table II.1 shows the number of taxpayers included in a tax agency reconciliation system by income type. Included in the 51 million taxpayers, but not shown in table II.1 because they are already included based on their type of income, are 9.9 million or 19 percent of the taxpayers who claimed the earned income tax credit (EIC). Table II.2 shows the number of tax returns by the type of form filed and IRS’ combined paper and electronic filing cost to process each form. Table II.3 shows the estimated costs associated with the tax agency reconciliation system. The following sections describe how each of these estimates was made.

Estimated processing costs for taxpayer information form: For our hypothetical tax agency reconciliation system, we assumed that IRS would need to develop a form to collect taxpayer information that would be needed to calculate a taxpayer’s tax liability. Some of the information IRS would need on each taxpayer would include name and address; SSN; name and SSN of spouse; name and SSN of dependents and qualifying children; relationship of dependents and qualifying children to taxpayer; and number of months dependents and qualifying children lived with taxpayer during the year. To determine IRS’ costs to process a form with these taxpayer information items, we used the costing methodology in IRS’ Document 6746. Table II.4 shows the estimated cost to process 1,000 taxpayer information forms:

Quality assurance ($12.34 X 2 hr.): $24.68
Overhead ($416.34 X 75 percent): $312.26
Employee benefits ($728.60 X 25.9 percent): $188.71

Fresno Service Center staff estimated that the processing cost for 1,000 forms would be $391.66 and would take 43.5 hours of labor to process. We then used these processing costs and hours and the Document 6746 quality assurance methodology to determine quality assurance costs.
Using Document 6746 methodology, we multiplied the number of hours required for processing taxpayer information forms (43.5 hours) by 4.5 percent to determine the quality assurance time for 1,000 forms, which was about 2 hours (i.e., 43.5 hours x 0.045 = 1.958, or 2.0 hours). To compute the cost of the quality assurance time, we multiplied the 2 hours by the $12.34 hourly quality assurance rate used in Document 6746 and determined this cost to be $24.68 ($12.34 x 2.0 hours). To estimate overhead costs, we used the overhead percentage found in Document 6746, which was 75 percent of direct costs (i.e., processing cost of $391.66 plus quality assurance cost of $24.68 for a total of $416.34). We estimated overhead cost to be $312.26 ($416.34 x .75), which when added to the combined processing and quality assurance costs was $728.60. To determine employee benefits, we used the 25.9 percent benefit rate found in Document 6746 and multiplied this rate by the direct and overhead costs (i.e., $728.60 x 0.259). The employee benefits ($188.71) were added to all other costs to give a total cost of $917.31 to process 1,000 forms. Thus, the cost per form would be about 92 cents.

Computer-generated tax return: IRS essentially produces computer-generated tax returns now in its document-matching program. Under this program, IRS computer-matches income reported on information returns with income reported on tax returns to determine whether taxpayers reported all their income or failed to file required tax returns. In this matching process, the computer internally creates the equivalent of a tax return from the information return data. When the computer identifies nonfilers, IRS service centers send a series of computer-generated notices to potential nonfilers. Certain nonfiler cases not resolved during the notice process are assigned to the Substitute for Return Program.
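The Document 6746 costing steps walked through earlier reduce to a short calculation. The following Python sketch is purely illustrative; the rates and base figures come from the text, and the small differences from the report’s totals reflect rounding intermediate amounts to the nearest cent.

```python
def cost_to_process_1000_forms(processing_cost=391.66, processing_hours=43.5):
    """Apply the Document 6746 costing steps described in the text."""
    qa_hours = round(processing_hours * 0.045)      # 43.5 x 0.045 = 1.958, about 2 hours
    qa_cost = 12.34 * qa_hours                      # quality assurance: about $24.68
    direct = processing_cost + qa_cost              # direct costs: about $416.34
    overhead = direct * 0.75                        # overhead at 75 percent: about $312.26
    benefits = (direct + overhead) * 0.259          # employee benefits at 25.9 percent: about $188.71
    total = direct + overhead + benefits            # about $917.31 per 1,000 forms
    return total, total / 1000
```

Called with the Fresno Service Center figures, the function returns roughly $917 per 1,000 forms, or about 92 cents per form.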
Under this program IRS uses information returns to prepare a tax return that “substitutes” for the return that the taxpayer should have filed voluntarily. IRS estimated that it costs about 65 cents to produce a substitute return and 32 cents to mail each one. On the basis of these data, we estimated it would cost about $49.5 million to produce and mail out 51 million computer-generated tax returns. Processing remittances: On the basis of the SOI data, we estimated about 7.7 million or 15 percent of the 51 million taxpayers would owe taxes when they received their computer-generated tax returns. The remaining 43.3 million either would receive a refund or owe no taxes. IRS Document 6746 data for fiscal year 1994 showed that it cost about 30 cents to process a remittance. Therefore, we estimated the cost to process 7.7 million remittances would be $2.3 million. Handling taxpayer adjustments to IRS prepared returns: On the basis of the results of IRS’ underreporter program, it is likely that some of the information returns that IRS receives will be in error and result in erroneous tax returns. For the tax year 1991 underreporter program, IRS created about 9.1 million potential underreporter cases for the 114.7 million returns filed that year. About 112 million of the returns had at least 1 of the 5 types of income included in our tax agency reconciliation model (i.e., wages, interest, dividends, pensions, and unemployment compensation). IRS found that about 2.3 million or 2 percent of the 112 million taxpayers potentially underreported 1 or more of the 5 tax agency reconciliation system income types. To determine how many of the 51 million taxpayers eligible for tax agency reconciliation filing would be potential underreporters, we assumed that the underreporter caseload was spread out evenly among all 112 million taxpayers who reported 1 of the 5 tax agency reconciliation system income types. 
Therefore, we estimated that 46 percent (i.e., 51 million divided by 112 million) or 1.1 million of the 2.3 million (2.3 million x 46 percent) potential underreporter cases could be eligible for tax agency reconciliation filing. IRS worked about 1.8 million of the 2.3 million potential underreporter cases. IRS found when it investigated the 1.8 million cases that about 900,000 or 50 percent resulted in no change to the taxpayers’ tax liabilities because taxpayers had reported either the income on their returns, which was not detected in the computer-match, or the information returns were in error. Using the 50 percent no-change rate, we estimated IRS would have found that about 550,000 of the 1.1 million tax agency reconciliation potential underreporters would not have underreported their income. IRS does not maintain data on the number of erroneous underreporter cases that are created because of erroneous information returns. However, IRS Fresno Service Center officials estimated for us that between 5 and 10 percent of the no-change underreporter cases worked were due to erroneous information returns. To be conservative, we assumed that 50 percent of the no-change underreporter cases would not be correct because of erroneous information returns. We applied this 50 percent erroneous information return rate to the 550,000 no-change underreporter cases and estimated that IRS would create about 275,000 flawed tax returns because of erroneous information returns. To be conservative, we assumed that taxpayers who received these flawed returns would correspond with IRS to resolve the erroneous conditions rather than resolve them by telephone. IRS’ Document 6746 data showed that it costs about $4.93 to handle a piece of correspondence from an individual taxpayer. Therefore, on the basis of this data, we estimated it would cost $1.4 million to handle the 275,000 taxpayer inquiries. 
Table II.5 shows the calculations we made to estimate the cost of handling taxpayer inquiries to process adjustments to IRS prepared tax returns.

Data used to calculate underreporter cases:
Number of taxpayers with wage, interest, dividend, pension, and unemployment income in tax year 1992: 112 million
Number of taxpayers eligible for tax agency reconciliation filing system: 51 million
Percent of taxpayers eligible for tax agency reconciliation filing to total taxpayers (51 million/112 million): 46 percent
Number of taxpayers who IRS identified as potential underreporters of wage, interest, dividend, pension, and unemployment income in tax year 1991: 2.3 million
Number of taxpayers eligible for tax agency reconciliation filing who were identified as potentially underreporting their income in 1991 (.46 X 2.3 million): 1.1 million
No-change rate for tax year 1991 underreporter cases with wage, interest, dividend, pension, and unemployment compensation that were worked by IRS: 50 percent
Number of tax agency reconciliation underreporter cases with no change (1.1 million x 50 percent): 550,000
Percent of no-change cases that were due to erroneous information returns: 50 percent
Number of tax agency reconciliation system no-change cases that were due to erroneous information returns (550,000 x 50 percent): 275,000
Average cost to handle a piece of taxpayer correspondence: $4.93
Cost to process taxpayer inquiries for adjustments to IRS prepared returns that were due to erroneous information returns ($4.93 x 275,000): $1.4 million

Handling telephone calls from taxpayers accepting IRS prepared returns: We assumed that the estimated 50,725,000 taxpayers who would agree with their IRS prepared tax returns would telephone IRS with their acceptance. IRS estimated that for fiscal year 1994, it cost about $1.18 for each taxpayer service call. Using these data, we estimated that the cost to IRS to handle 50,725,000 telephone calls would be about $59.9 million.
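The cost estimates above chain together several unit costs and rounded counts. As a purely illustrative restatement (all figures are the report’s), the arithmetic is:

```python
eligible = 51_000_000                          # taxpayers covered by the system
produce_and_mail = eligible * (0.65 + 0.32)    # $0.97 per return: about $49.5 million
remittance_cost = 7_700_000 * 0.30             # 15 percent owe taxes, 30 cents each: about $2.3 million

tar_underreporters = 1_100_000                 # 46 percent of the 2.3 million potential cases, rounded
no_change = tar_underreporters * 0.50          # 550,000 no-change cases
flawed_returns = no_change * 0.50              # 275,000 returns flawed by erroneous information returns
inquiry_cost = flawed_returns * 4.93           # correspondence at $4.93 each: about $1.4 million

accepters = eligible - 275_000                 # 50,725,000 taxpayers accepting their returns
call_cost = accepters * 1.18                   # service calls at $1.18 each: about $59.9 million
```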
Savings from the Underreporter Program: Since the 51 million computer-generated returns are created from information returns, the 51 million taxpayers covered by return-free filing would not be subject to the underreporter program. Therefore, IRS would not incur underreporter costs associated with investigating taxpayers who are part of the tax agency reconciliation system. To determine the underreporter cost savings that could result under a tax agency reconciliation system, we used the results of IRS’ tax year 1991 underreporter program. As discussed above, IRS worked about 1.8 million or 78 percent of the 2.3 million potential underreporter cases that involved the 5 types of tax agency reconciliation filing income (i.e., wages, interest, dividends, pensions, and unemployment compensation). We also assumed that since the 51 million taxpayers eligible for tax agency reconciliation filing represented 46 percent of the taxpayers with the 5 income types, the same percentage would apply to the potential underreporter population. Therefore, the tax agency reconciliation underreporter population would be 1.1 million taxpayers (2.3 million x 46 percent). Since underreporter costs are only associated with cases that IRS worked, and it handled 78 percent of its cases in 1991, we estimated that IRS would have worked 858,000 of the 1.1 million tax agency reconciliation eligible cases. IRS estimated that it cost about $17.61 to work and close a tax year 1991 underreporter case. Thus, the cost savings of not having to work these underreporter cases would be about $15.1 million (858,000 x $17.61). Table II.6 shows how we calculated this cost-savings estimate.

Data used to calculate underreporter cases:
Number of taxpayers with wage, interest, dividend, pension, and unemployment income in tax year 1992: 112 million
Number of taxpayers eligible for tax agency reconciliation filing system: 51 million
Ratio of taxpayers eligible for tax agency reconciliation filing to total taxpayers (51 million/112 million): 46 percent
Number of taxpayers who IRS identified as potential underreporters of wage, interest, dividend, pension, and unemployment income in tax year 1991: 2.3 million
Number of taxpayers eligible for tax agency reconciliation filing who were identified as potentially underreporting their income in 1991 (.46 X 2.3 million): 1.1 million
Number of tax year 1991 underreporter cases with wage, interest, dividend, pension, and unemployment compensation that were worked by IRS: 1.8 million
Percent of underreporter cases worked of total cases created (1.8 million/2.3 million): 78 percent
Estimated number of underreporter cases worked by IRS that also qualify for tax agency reconciliation filing: 858,000
Average cost to work an underreporter case: $17.61
Savings from not working tax agency reconciliation underreporter cases ($17.61 X 858,000): $15.1 million

IRS has developed estimates of the average amount of time it takes taxpayers to complete and file various types of tax returns and schedules. IRS breaks the return preparation time into four tasks: (1) recordkeeping; (2) learning about the law or the form; (3) preparing the form; and (4) copying, assembling, and sending the form to IRS. Table II.7 shows the average amount of time to complete these tasks for the three types of individual income tax returns (forms 1040EZ, 1040A, and 1040) and for the Schedule EIC, Earned Income Credit (Qualifying Child Information).

Form type      Learning about the law or the form   Preparing the form   Copying, assembling, and sending the form to IRS   Total hours needed to complete form type
1040EZ         49 min.                              1 hr. 20 min.        40 min.                                            2.9 hr.
1040A          2 hr. 14 min.                        2 hr. 51 min.        35 min.                                            6.7 hr.
1040           2 hr. 53 min.                        4 hr. 41 min.        53 min.                                            11.6 hr.
Schedule EIC   2 min.                               4 min.               5 min.                                             .2 hr.

Schedule EIC is filed with forms 1040A and 1040 only. Table II.8 shows that the 51 million taxpayers eligible for a tax agency reconciliation filing system spent an estimated 316.3 million hours completing and filing returns.

Form type                 Number of taxpayers (in millions)   Amount of time spent to prepare form (per taxpayer)   Total time (in millions of hours)
1040EZ                    19.0                                2.9 hr.                                               55.1
1040A with Schedule EIC   8.7                                 6.9 hr.                                               60.0
1040A                     14.4                                6.7 hr.                                               96.5
1040 with Schedule EIC    1.2                                 11.8 hr.                                              14.2
1040                      7.8                                 11.6 hr.                                              90.5
Total                                                                                                               316.3

On the basis of IRS’ 1992 SOI data, we estimated that about 16.6 million of the 51 million taxpayers had their returns completed by paid preparers. We estimated that these taxpayers would have spent 54.1 million hours preparing their returns. Thus, we estimated that the 51 million taxpayers would have spent 262.2 million hours (316.3 million minus 54.1 million) preparing tax returns. To determine the number of hours taxpayers would spend on return preparation tasks associated with a tax agency reconciliation system, we analyzed the potential population of tax agency reconciliation filers by the types of income they reported and their filing status. We estimated that 30 million of the 51 million taxpayers had income from wages, interest, and unemployment compensation; their filing status was either single or married filing joint returns; and they had no dependents. Taxpayers with these characteristics would have return preparation tasks associated with taxpayers who could file Form 1040EZ. The remaining 21 million taxpayers reported income that included either dividends or pensions or had dependents. Taxpayers with these income characteristics, regardless of filing status, would have return preparation tasks associated with taxpayers who could file Form 1040A. Using these data, we estimated the time required for preparing tax returns under a tax agency reconciliation system. Recordkeeping: We assumed the 30 million taxpayers who had Form 1040EZ characteristics and the 21 million taxpayers who had Form 1040A characteristics would have the same recordkeeping time IRS estimated for these forms, as shown in table II.7. We estimated that the 51 million taxpayers would spend 24.6 million hours on recordkeeping tasks. Learning about the law or form: We made the same assumptions for this task as we did for the recordkeeping task.
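Several of the figures above can be cross-checked with simple arithmetic. This Python sketch is illustrative only; all inputs are the report’s. The form-type pairings follow from the per-form totals in table II.7 (e.g., 6.9 hr. corresponds to Form 1040A plus Schedule EIC, 6.7 + .2 hours), and the implied taxpayer counts follow from dividing each total by the per-taxpayer time.

```python
savings = 858_000 * 17.61                     # underreporter cases not worked: about $15.1 million

table_ii_8 = {
    # form type: (hours per taxpayer, total hours in millions)
    "1040EZ": (2.9, 55.1),
    "1040A with Schedule EIC": (6.9, 60.0),
    "1040A": (6.7, 96.5),
    "1040 with Schedule EIC": (11.8, 14.2),
    "1040": (11.6, 90.5),
}
total_hours = sum(total for _, total in table_ii_8.values())   # 316.3 million hours
taxpayers = {form: round(total / hours, 1)                     # millions of taxpayers,
             for form, (hours, total) in table_ii_8.items()}   # e.g. 55.1 / 2.9 = 19.0
self_prepared = total_hours - 54.1                             # 262.2 million hours after removing
                                                               # hours of paid-preparer filers
```

The implied counts (19.0, 8.7, 14.4, 1.2, and 7.8 million) sum to about 51 million, consistent with the eligible population.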
We estimated that the 51 million taxpayers would spend an estimated 71.4 million hours on this task. Preparing the form: To estimate the average time that would be spent on preparing the taxpayer information form, we used IRS’ time estimates for completing Schedule EIC. Schedule EIC contains information on the identity of qualifying children for taxpayers claiming the earned income tax credit, which is similar to the information that would be contained on the taxpayer information form. IRS estimated that it takes taxpayers 4 minutes to complete Schedule EIC. We assumed that taxpayers could take twice as long to complete the taxpayer information form because taxpayers may have to enter twice as much data on the taxpayer information form. Therefore, we estimated that the 51 million taxpayers would spend an estimated 6.8 million hours preparing the form. Copying, assembling, and sending the form to IRS: For this task, we also used IRS’ average time estimate for Schedule EIC, which was 5 minutes. We assumed that because the taxpayer information form is a one-page form like the Schedule EIC, the amount of time would be the same. Therefore, we estimated that the 51 million taxpayers would spend about 4.3 million hours on this task. In total, we estimated that taxpayers eligible for a tax agency reconciliation system would spend about 107.1 million hours on preparing tax returns, which is 155.1 million hours (262.2 million hours minus 107.1 million hours) less than the time estimated for preparing tax returns under the current return filing system.

Kathleen Seymour, Evaluator-in-Charge
Jack Erlan, Senior Evaluator
Sharon Caporale, Evaluator
David Elder, Evaluator
Tre Forlano, Evaluator
Eduardo Luna, Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary.
VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

GAO reviewed the possible benefits, impediments, and costs of establishing a tax agency reconciliation filing system, focusing on: (1) the estimated number of taxpayers that would not have to prepare returns in such a system; (2) the system's operational characteristics; (3) potential advantages and disadvantages to taxpayers and the Internal Revenue Service (IRS); and (4) major impediments to and costs in establishing this type of system under existing federal tax laws.
GAO found that: (1) as many as 51 million taxpayers, or 45 percent of all taxpayers who filed 1992 tax returns, would not have to prepare returns if IRS established a voluntary tax agency reconciliation system; (2) the general operational concept for such a system would be for IRS to produce and mail tax returns based on taxpayer-supplied information about income, filing status, and dependents, and then have taxpayers review the returns and notify IRS if they agreed with the information; (3) the reconciliation system could reduce taxpayers' time and cost to prepare returns and reduce IRS processing and compliance costs, but could adversely affect such parties as tax preparers; (4) a major operational impediment to a reconciliation system is that IRS does not currently process information returns in sufficient time to send taxpayers their tax returns before the return filing due date; (5) IRS has two initiatives under way to speed up information returns processing; (6) a 1987 IRS study found that a tax agency reconciliation system was not then feasible, but indicated that such technological advances as electronic filing of tax data may make such a system feasible in the future; and (7) taxpayers may be reluctant to rely on IRS to prepare their tax returns, may not trust IRS to accurately calculate their taxes, and may not get their refunds as early in the tax filing season as they currently do.
Congressional oversight is the review, monitoring, and supervision of federal agencies, programs, and policy implementation. This oversight provides the legislative branch with an opportunity to inspect, examine, review, and check the executive branch and its agencies. Congressional oversight includes two different features: that which is ongoing throughout the course of a year and that which is done at a specific time in the year in response to the issuance of the President’s budget. For the latter, House and Senate committees with jurisdiction over federal programs are required to submit a views and estimates report, containing the committee’s comments or recommendations on budgetary matters within its jurisdiction, to their respective budget committees each year within 6 weeks of the submission of the President’s budget. For example, the House Transportation and Infrastructure Committee’s fiscal year 2006 views and estimates report identified a number of aviation-related issues and recommended increased funding over the President’s proposed budget for facilities and equipment to pay for capital improvements designed to increase capacity and reduce aviation gridlock and for airport safety upgrades, including explosive detection systems for airport baggage systems. Ongoing oversight and the specific views and estimates oversight reports can draw information from documents and reports issued by federal departments over the course of the year. Pursuant to the Government Performance and Results Act of 1993 (GPRA) and other statutes, federal agencies produce performance, budget, and financial information for internal management purposes and for reporting to Congress, which can also be useful to congressional committees to enhance their oversight efforts. GPRA required federal agencies to develop strategic plans with long-term, outcome-oriented goals and objectives, annual goals linked to achieving the long-term goals, and annual reports on the results achieved.
The Chief Financial Officers (CFO) Act of 1990, as expanded by the Government Management and Reform Act of 1994 (GMRA), requires annual audited agencywide financial statements for 24 major federal departments. In the case of FAA, the agency has made available much of the information and analytic resources that Congress needs to conduct its oversight role. As part of DOT, FAA addresses some of the requirements of GPRA through its inclusion in DOT’s Performance and Accountability Report. However, FAA also produces its own strategic plan, unit-specific business plans, and performance reports that identify agency priorities, goals, and strategies, as well as progress toward these goals and the success of the strategies employed. Collectively, these documents help Congress determine whether FAA’s goals are aligned with congressional goals and whether FAA is achieving them. Linking performance information to FAA’s budgetary resources, as FAA is beginning to do in its performance-based budget, can also provide Congress the opportunity to oversee the results planned or achieved with budgeted resources and indicate FAA’s priorities for funding. Used together, these agency documents could assist committees in identifying and tracking progress on the issues related to reauthorization and oversight. FAA manages performance through a series of integrated performance documents. FAA’s principal performance reports are: the strategic plan, called the Flight Plan; unit-specific business plans; the annual Performance and Accountability Report; and quarterly performance reports. The Flight Plan includes the agency’s mission, goals, and strategies. In addition, each of FAA’s lines of business has a unit-specific business plan that outlines how its actions will support the goals and measures identified in the Flight Plan. FAA monitors and reports on the Flight Plan’s key performance targets through quarterly and annual performance reports.
FAA’s current 5-year strategic plan, or Flight Plan, is designed to outline the agency’s mission, goals, and strategies to achieve these goals through 2009, with the overall aim of seeking “to provide the safest, most efficient aerospace system in the world.” Among other things, GPRA requires agencies to consult with Congress and solicit the input of others as they develop these plans, a good opportunity for congressional committees and staff to influence FAA’s future. According to FAA senior executives, the Flight Plan is the primary document that identifies the agency’s priorities and performance expectations and is the driver of decision making at all levels. As such, the Flight Plan is key for internal agency and congressional oversight purposes. Committees can refer to the plan to determine whether national priorities are appropriately recognized and to raise questions about whether the strategies laid out are likely to lead to success. The Flight Plan identifies four strategic goals (see table 1), each of which is supported by objectives, strategies, initiatives, and performance targets the agency is responsible for achieving. FAA’s Flight Plan can be accessed via its Web site at http://www.faa.gov/about/plans_reports/. Committees could use the strategic plan to identify oversight questions. For example:

Do these goals take into account legislative priorities?
Are the strategies that support each goal consistent with legislative decisions?
How effective are the strategies in achieving these goals?
How were the specific initiatives and performance targets for each objective and strategy developed?
What key factors external to FAA and beyond its control exist, and how will FAA mitigate or leverage them as appropriate, if they affect the achievement of the strategic plan goals?
Does the plan include strategies for working with stakeholders (e.g., airlines, local governments, or airport authorities)?
The agency’s Flight Plan is supported by unit-specific performance plans, called business plans. Each line of business and staff office produces annual business plans that demonstrate strategic alignment with the agency Flight Plan and define core business activities. The business plans are important tools for oversight because they provide a detailed description of the activities and responsibilities of each business line in supporting the Flight Plan. Specifically, the business plans define the Flight Plan’s performance targets; identify the specific initiatives that support the performance targets and the type of support required of each line of business (e.g., lead responsibility or support responsibility); outline the key strategic activities in support of those initiatives; and define strategic activity targets to help gauge progress toward achieving each strategic initiative. The business plans can be found on FAA’s external Web site, at http://www.faa.gov/about/plans_reports/business_plan2005/. For example, the Air Traffic Organization’s (ATO) fiscal year 2005 business plan details six strategic initiatives it is employing to help the agency meet its goal to reduce General Aviation (GA) fatal accidents. Each strategic initiative, which indicates whether ATO is the lead business line or supports other business lines, includes related strategic activities and activity targets that enable ATO to further define and measure the degree to which its performance contributes to overall agency performance. Table 2 shows an example of one of ATO’s strategic initiatives, activities, and activity targets for supporting a Flight Plan goal. Committees can use the business plans to identify oversight questions and additional reports that could be made available to them. For example: How will the information from these reports affect the strategies for reducing accidents? Do the activities being implemented match congressional priorities?
FAA annually publishes a detailed account of agency performance, including its audited annual financial statements, in its Annual Performance and Accountability Report (PAR). While this report is not required, FAA believes it is essential to clearly and fairly present and discuss FAA’s finances and performance. GPRA requires agencies to measure performance toward the achievement of their goals and report annually on their progress in program performance reports. If a goal was not met, the report must provide an explanation and present the actions needed to meet any unmet goals in the future. These reports provide important information to agency managers, policy makers, and the public on what each agency accomplished with the resources it was given. FAA’s PAR provides Congress with annual and historical trend information for its key performance goals. For example, under the strategic goal Increased Safety, FAA has a performance target tied to its goal to reduce the number of operational errors. Figure 1 shows the trend in the actual number of operational errors between fiscal year 2002 and fiscal year 2005. FAA exceeded its target number of operational errors in fiscal year 2003 by 38 and again in fiscal year 2004 by 8. Based on this, potential questions for oversight could be: What are the primary causes of operational errors? What changes were put into place between fiscal year 2003 and 2004 to decrease operational errors? How was the target for 2005 set and what efforts will be put into place to meet this target? The financial statements, supplementary information, and notes to the financial statements included in the PAR present historical information, showing the financial activity of the agency for the last 2 fiscal years and the financial position as of the end of each of those years. 
The five principal financial statements include: consolidated balance sheets, consolidated statements of net cost, consolidated statements of changes in net position, consolidated statements of budgetary resources, and consolidated statements of financing. The notes to the financial statements present more detailed information about transactions or conditions reflected in these statements. Often the Management’s Discussion and Analysis section of the PAR will address the kinds of operating conditions or changes that financial statement analysis discloses. The statement of budgetary resources, which interrelates with the other financial statements, includes key information that is also included in the agency’s budget. This information is subjected to audit scrutiny, providing some assurance of the reliability of related budgetary information. The individual statements and examples of how they can be used for congressional oversight are discussed in appendix III. The independent auditor’s report included in the PAR tells readers whether or not, or to what extent, the information provided in FAA’s financial statements and related notes is, in the opinion of the auditor, fairly stated. This report also includes the auditor’s statements on whether FAA had effective internal control over financial reporting and over compliance with laws and regulations, which would indicate whether financial management issues need more attention. They also report on any identified significant matters of noncompliance with selected provisions of applicable laws and regulations. In effect, the audit report is a report card on how well the agency is managed from a financial perspective. The auditor’s unqualified opinions on FAA’s financial statements for fiscal years 2002 through 2005 suggest that those statements are sufficiently reliable to be used as a tool for public and congressional oversight. 
However, the auditor’s reports for each of those years disclosed that FAA’s financial management systems did not substantially comply with federal financial management systems requirements under the Federal Financial Management Improvement Act of 1996 (FFMIA), an issue that may warrant additional oversight. For fiscal years 2004 and 2005, the auditor noted, among other things, that in connection with FAA’s conversion to Delphi as its core financial system, several key financial systems that feed or support Delphi exhibited weaknesses regarding function, reporting, or internal control. In addition, the auditor reported that in 2005 FAA, also in conjunction with the implementation of Delphi, had not processed all of its transactions or reconciled all of its key accounts in a timely manner. The auditor had reported similar problems for fiscal year 2004. While adjustments to the recorded balances were made during the preparation of the year-end audited financial statements, these weaknesses could indicate that the agency’s financial information during the year may not be fully reliable. Committee staff could use information from FAA’s independent auditor to facilitate an understanding of financial management and compliance issues, addressing questions such as:

Can users rely on the information provided in FAA’s financial statements?
Did FAA have effective internal control over financial reporting and compliance with laws and regulations?
Did FAA’s independent auditor report on any identified significant matter of noncompliance with applicable laws and regulations?
Did FAA’s financial management improve or deteriorate over the fiscal year?

The answers to the above questions are also key to assessing the reliability of cost accounting information, which is discussed later.
Cost accounting information generated from FAA’s financial reporting systems is essential to managing on-going agency operations and provides useful information to Congress about the cost of specific programs, activities, or outputs. FAA’s annual Performance and Accountability Report can be accessed via FAA’s Web site at http://www.faa.gov/about/plans_reports/. In addition to annual performance reporting, FAA monitors and reports quarterly on performance towards the strategic goals through the tracking of 31 key performance measures. FAA management conducts monthly, day-long meetings with executives from each line of business. At these meetings, the designated leaders for each of the four strategic goals present information related to the performance targets for their goal. Each of the 31 performance targets is displayed using the traffic light graphics colors of red, yellow, and green. When a target is either yellow or red, the goal leader will discuss the steps needed to get to green—which indicates that the performance measure is met. Committees could use these reports to raise similar questions about the steps needed to improve performance and achieve the performance target. FAA reports performance for these 31 measures on its external Web site quarterly, at http://www.faa.gov/about/plans_reports/Performance/. For example, under the strategic goal Increased Safety, FAA has a performance target tied to its goal to reduce the number of GA fatal accidents. FAA’s target for fiscal year 2005 was not to exceed 343 GA fatal accidents. However, according to its final quarterly performance report for fiscal year 2005 published on the FAA Web site, the agency failed to meet its target, with a total of 350 GA fatal accidents, 7 fatal accidents above the target. Figure 2 shows the quarterly report for FAA’s measure on GA fatal accidents. Committees could use this performance information to identify oversight questions. 
For example: Why was FAA unable to meet its target for fiscal year 2005? What has the agency been doing to improve on its performance for this target? Does FAA measure the number of nonfatal GA accidents? If so, how does it use those data? In addition, questions could be raised about the measure itself. For example, why does the measure track the number of GA fatal accidents rather than the rate of GA fatal accidents? The annual federal budget is developed using a year-round administrative process of budget preparation and review. By the first Monday in February, the President submits a budget request to Congress for the fiscal year starting on the following October 1. However, preparation of that particular budget request began about 10 months before it was submitted to Congress. For example, for the fiscal year 2006 budget request, transmitted to Congress in February 2005, the budget process began in the spring of 2004. Thus federal agencies deal concurrently with three fiscal years: (1) the current year, that is, the fiscal year in progress; (2) the coming fiscal year beginning October 1, for which they are seeking funds; and (3) the following fiscal year, for which they are preparing information and requests. In the spring and summer, agencies work with the Office of Management and Budget (OMB) to identify major issues for the upcoming budget request, develop and analyze options for the upcoming reviews of agency spending and program requests, and plan for the analysis of issues that will need decisions in the future. In September and October agencies submit their budget requests and other initial materials to OMB, typically on the first Monday after Labor Day of the year prior to the start of the year that the budget request covers. From October to December OMB reviews and briefs the President and senior advisors on the proposed budget policies and recommends a set of proposals after reviewing all agency requests. 
Budget decisions are passed back to agencies in late November and may be appealed. Final budget decisions are transmitted to Congress in the President’s budget request. At the same time an agency is working to formulate a new budget, it is executing its approved budget by spending the money Congress has appropriated to carry out the objectives of its program legislation. During the budget execution phase, agencies sometimes find they need more funding than appropriated because of unanticipated circumstances. Under such circumstances, agencies may request and Congress may enact a supplemental appropriation. FAA manages and reports budget decisions in several documents that could be used to enhance oversight. The three principal budget documents are the annual budget, the budget-in-brief, and the performance-based budget justification. FAA’s annual budget presents actual receipts and spending levels for the fiscal year just completed, current year estimated receipts and spending, and estimated receipts and spending for the upcoming year as proposed by the President. The budget-in-brief summarizes the justification for FAA’s estimated budget by strategic goal. Finally, FAA’s performance-based budget justification provides a more detailed outline of its planned budget according to the Flight Plan’s strategic goals and describes the expected performance improvements. The fiscal year 2006 budget reports the total funding for all FAA programs and provides program and financing information by budget account. FAA’s budget has four components: operations; facilities and equipment; grants-in-aid for airports; and research, engineering, and development. There are two sources of FAA funding: the airport and airway trust fund, which contains ticket tax and other earmarked receipts, and general fund appropriations. 
In fiscal year 2006, the trust fund provides all funding for facilities and equipment; the airport improvement grants; and research, engineering, and development, as well as partial funding for operations. The general fund is also used for operations and other, smaller accounts. Many different analyses can be done with budget data to identify oversight questions. For example, as shown in figure 3, general fund financing of operations and maintenance increased in fiscal year 2000 from its pre-2000 level. Further, figure 4 shows that trust fund outlays have outpaced receipts since fiscal year 2002, resulting in a decline in the trust fund balance. Based on these analyses, some oversight questions could be: What steps are being taken to understand the cost drivers of the operations and maintenance portion of the budget? What is the desired balance between trust fund and general fund financing for FAA operations? FAA’s budget-in-brief is a publicly available summary of FAA’s budget justification. The budget-in-brief summarizes the FAA’s annual budget request by appropriation and by goal area. It provides committees with a quick comparison of resource allocation by goal and program activity for the prior year, current year, and the budget year. For example, the budget-in-brief states that safety is FAA’s primary goal and proposes spending 71 percent of the fiscal year 2006 request for the safety-related goals shown in table 3. For the goal of reducing GA fatal accidents, FAA is proposing a decrease from fiscal year 2005 in resources for facilities and equipment and for grants-in-aid for airports, and in the number of full-time equivalent employees devoted to this goal. Based on this, potential questions for oversight could be: What changes were made in these areas to permit a reduction in funding while still making progress toward the goal of reducing GA fatalities? How is the decrease going to affect more ambitious targets for GA fatal accident reductions in performance plans? 
Was funding shifted from reducing GA fatal accidents to a different safety-related activity? If so, which activity and why? As table 3 shows, at the same time FAA is proposing decreases in certain types of spending for reducing GA fatal accidents, FAA is proposing budget increases for reducing commercial fatal accident rates and achieving zero commercial space accidents. Another potential oversight question could therefore be: why is FAA proposing increases in these other areas—where FAA is meeting its performance targets—while proposing decreases for reducing GA fatal accidents, a goal for which FAA is not meeting its performance target? FAA’s budget-in-brief can be accessed on its external Web site at http://www.faa.gov/about/budget/. FAA’s performance-based budget, first prepared for fiscal year 2005 and submitted to the appropriations committees, is a prominent source of both performance and budgetary information on FAA and could also be useful for oversight. It highlights FAA’s identified resource needs and what the agency deems to be the most important performance goals for that particular year. One goal of agency performance budgets is to show the relationship between resources and incremental improvements in performance. Congressional oversight could focus on whether planned performance improvements were achieved with the resources provided or, if not, raise questions about why they were not achieved. For example, FAA’s fiscal year 2005 performance-based budget shows a request for $10.2 million to reduce GA fatalities through the implementation of the Flight Services Automation System (FSAS) and Operational and Supportability Implementation System (OASIS). According to the budget, FSAS and OASIS will enable flight specialists to more efficiently provide weather and flight information, thereby aiding in the reduction of accidents through increased pilot awareness of weather conditions along the flight route. 
Committees could use information from the performance-based budget to oversee spending on and installation of the systems. For example: Was the installation completed within the originally estimated funding level? What percentage of GA fatal accidents results from the pilots’ insufficient knowledge of weather conditions? Are the GA fatality rates decreasing in areas where the installation has occurred? FAA also produces some long-term planning and budget documents that could be helpful for oversight. Intended to integrate and coordinate longer-term perspectives and needs of organizations affecting airspace usage, these documents are the National Plan of Integrated Airport Systems, the Operational Evolution Plan, the Next Generation Air Transportation System Integrated Plan, and the Capital Investment Plan. The first three plans were cited by FAA officials as key documents presenting FAA’s long-term direction. The National Plan of Integrated Airport Systems for 2005 to 2009 identifies 3,344 airports that are significant to national air transportation and, therefore, eligible to receive grants under the FAA’s Airport Improvement Program. The plan and grant program support the Flight Plan’s goals of increased safety and greater capacity. The plan describes the condition and performance of the airport system according to six performance areas: safety, capacity, pavement condition, financial performance, surface accessibility, and noise. In addition, the plan provides cost estimates for needed improvements to airports by airport type—large, medium, or small hub primary; nonhub primary; non-primary commercial service; relievers; or general aviation—and by purpose of development—safety, security, reconstruction, standards, environment, airfield capacity, terminal buildings, ground access, and new airports. The projects are not prioritized, but inform the grant decisions for the Airport Improvement Program. 
The National Plan of Integrated Airport Systems for 2005 to 2009 can be accessed on FAA’s Web site at http://www.faa.gov/airports_airtraffic/airports/planning_capacity/npias/. Based on this plan, some oversight questions could be: How are the projects in the National Plan of Integrated Airport Systems selected for airport improvement grants? To what extent have the grant-funded improvements to airports achieved performance improvements for the Flight Plan goals of increased safety and greater capacity? The Operational Evolution Plan, created in collaboration with the aviation community, the Department of Defense, the National Weather Service and the National Aeronautics and Space Administration, is a rolling 10-year tactical implementation plan designed to increase the capacity and efficiency of the national airspace system by approximately 30 percent within its initial 10-year horizon. The plan identifies four specific areas for improvement: terminal area, en route, and airport congestion; and air traffic management flow efficiency. It also identifies milestones for expected improvements at each of the airports included in the plan. The Operational Evolution Plan can be accessed on FAA’s Web site at http://www.faa.gov/programs/oep/. Based on this plan, some oversight questions could be: Are the milestones for expected improvements realistic and are they being met? As airport improvements are completed what has been the impact on congestion? Are the changes as great as anticipated? The Next Generation Air Transportation System Integrated Plan is a multiorganization plan designed to transform the nation’s air transportation system to meet expected needs in 2025. This plan outlines eight transformation strategies that will be researched, developed, implemented and maintained by teams composed of federal, state, and local governments; quasi-government research institutions; universities; and the private sector. 
For each strategy there is a description of the research area and milestones for completion. Based on this plan, some oversight questions could be: How do the strategic goals and performance targets in the Flight Plan and unit-specific business plans relate to these transformation strategies? How were these transformation strategies identified? FAA also reports on long-term capital financing options in the Capital Investment Plan (CIP), which is a rolling 5-year financial plan that allocates planned funding to NAS projects. The Secretary of Transportation transmits the CIP to Congress each year at the time of the President’s annual budget submission. It includes estimated expenditures for each line item in the facilities and equipment budget for the current fiscal year and for the following 4 years. However, the CIP includes only projects that are likely to receive funding rather than all initiatives originally considered. According to the CIP, a project’s planned funding is based on its support for the agency’s strategic goals and performance targets. As such, the CIP is an important oversight tool because it not only details estimated expenditures, but also provides the agency’s rationale for spending federal dollars on specific projects—or a group of related projects—and explains how such spending will enhance the agency’s ability to meet its strategic goals, and ultimately its mission. Based on this plan, potential questions for oversight could include: Are the projects clearly linked to agency goals and priorities? What other projects could meet these goals and priorities? Why were they rejected? Financial accountability goes beyond an agency’s obtaining an unqualified opinion on its annual financial statements. The key to financial accountability is obtaining accurate and useful information on a timely and ongoing basis to support day-to-day managerial decisions and oversight. 
As a critical part of its new Delphi financial management system installation, FAA’s cost accounting system (CAS) draws upon accounting information in Delphi to provide financial information that can be used to monitor ongoing operations as well as to plan for the future. CAS has been principally implemented in the ATO and Commercial Space Transportation, which together comprise over 80 percent of FAA’s budget. FAA’s other two lines of business, Aviation Safety and Airports, are expected to implement CAS in fiscal year 2006. CAS takes direct cost data from DOT’s financial management system and allocates those costs from the organization that incurred the costs to the organization, product, or service that benefited from the costs. The system allows analysis of costs aggregated within a program, activity, location, or strategic goal. Allocated costs can also be used in an analysis of comparative operating efficiency for different operating periods or different locations. An example is a ratio of costs to a nonfinancial activity measure, such as cost per day, per employee, or per flight. Apparent abnormalities in trends or at particular locations may then be investigated. For example, at FAA the direct cost of an air traffic controller at a terminal would be allocated to airport operations, in proportion to takeoffs and landings, which are a major “driver” of those costs. Similarly, the indirect cost of a maintenance technician would be allocated to the lines of business that benefited from those costs using an appropriate allocation base. A financial scoreboard in use at FAA regularly tracks trends in these unit costs, overhead rates, and other performance measures. Tracking these trends is key to identifying operating inefficiencies and, when projected to anticipated operating volumes, can help determine future financing needs. 
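The driver-based allocation and unit-cost ratios described above can be illustrated with a short sketch. All location names, cost pools, and dollar amounts below are hypothetical illustrations of the general technique, not FAA data from CAS; the sketch only shows how a shared cost is spread over a driver such as takeoffs and landings and how the resulting unit costs can be compared across locations.

```python
# Illustrative sketch of driver-based cost allocation and unit-cost ratios.
# All names and dollar amounts are hypothetical, not actual FAA data.

def allocate(pool, driver):
    """Spread a shared cost pool across units in proportion to a cost driver."""
    total = sum(driver.values())
    return {unit: pool * volume / total for unit, volume in driver.items()}

# Hypothetical location-specific direct costs and activity volumes.
direct_cost = {"Airport A": 900_000, "Airport B": 500_000, "Airport C": 200_000}
operations  = {"Airport A": 60_000,  "Airport B": 30_000,  "Airport C": 10_000}

# A shared maintenance cost allocated by takeoffs and landings (the driver).
overhead = allocate(400_000, operations)

# Full cost and cost per operation at each location -- the kind of ratio
# that can flag an apparently abnormal location for further investigation.
full_cost = {a: direct_cost[a] + overhead[a] for a in direct_cost}
unit_cost = {a: full_cost[a] / operations[a] for a in direct_cost}
```

In this hypothetical data, Airport C’s cost per operation ($24.00) stands out against Airports A and B ($19.00 and $20.67), illustrating how comparing unit costs across locations can surface the apparent abnormalities the text describes.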
According to FAA, CAS provided labor and overhead cost data which were used in the preparation of a competitive sourcing study for ATO flight service stations. The cost data were used as a basis to estimate the future cost of those existing in-house flight services. Comparison of those projected in-house costs to the costs of procuring the services from bidders in the private sector resulted in contracting out ATO Flight Service Stations in fiscal year 2005 at a projected contract savings of about $2.2 billion through fiscal year 2015. FAA has also reported that CAS data led to cancellation of a $27 million airport weather program and to savings of $7 million from modification of an airport radar surveillance program. CAS can break down the full costs for the individual activities undertaken to provide each of ATO’s services—En Route, Oceanic, Flight Services, and Terminal Services—by location, program and function. Using this kind of information, a separate fiscal year 2004 performance report prepared by ATO displayed unit costs of certain activities and services as well as some overall ATO revenue and cost trend analyses and other performance measures. The report cited a reduction of ATO’s total unit cost per flight by $17, or 4.21 percent. This type of report is a tool for ongoing congressional oversight, addressing key operating issues identified by ATO management. Committees could use information from FAA’s cost accounting system to better understand costs and performance of individual programs, activities, or outputs, addressing questions such as: What is the total cost of ATO services per flight? How do this year’s costs per flight compare to last year’s? How does the per flight cost of traffic controllers compare among airports? CAS can be used to link costs to strategic performance areas and to combine air traffic safety data with financial information. 
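The unit-cost trend ATO reported implies a baseline that can be backed out from the report’s own two figures, a $17 reduction equal to 4.21 percent. The baseline and current values computed below are inferences from that arithmetic, not published FAA numbers; the sketch simply shows the percent-change calculation committees could apply to cost-per-flight data year over year.

```python
# Back out the prior-year cost per flight implied by a reported reduction
# of $17, or 4.21 percent. The baseline is inferred, not a published figure.

reduction = 17.0   # reported dollar reduction in unit cost per flight
pct = 4.21         # reported percentage reduction

baseline = reduction / (pct / 100)   # implied prior-year cost per flight
current = baseline - reduction       # implied current-year cost per flight

def pct_change(old, new):
    """Percentage decrease from old to new."""
    return (old - new) / old * 100
```

The implied baseline is roughly $404 per flight, falling to about $387; tracking the same ratio each year is one way to pursue the cost-per-flight questions above.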
FAA has also used cost finding techniques for selected programs during the fiscal year 2006 budget cycle to estimate the marginal cost of performance, i.e., the incremental results that might be achieved at different levels of funding. Through its legislative support agencies—GAO, the Congressional Research Service, and the Congressional Budget Office—and the Department of Transportation’s Inspector General, congressional committees also have access to considerable resources for oversight. See appendix II for a summary of additional information resources. GAO, as the investigative arm of Congress, examines the use of public funds; evaluates federal programs and activities; and provides analyses, options, and other assistance to help Congress make effective oversight, policy, and funding decisions. Several documents that GAO produces on an ongoing basis or as part of a body of work may prove useful to congressional committees when setting an oversight agenda.

GAO Strategic Plan (2004-2009)
GAO’s strategic plan, which has been updated every 2 years since 2000, describes the trends and issues that are likely to affect congressional decision makers over the 6-year period of the plan. It also provides GAO’s plans for analyses and other activities to help support Congress’s information needs. One of GAO’s strategic objectives is to support congressional and federal efforts to obtain and maintain a safe, secure, and effective national physical infrastructure. Several performance goals under this objective involve transportation-related issues, including assessing efforts to improve safety and security in the nation’s transportation system and assessing the impact of transportation policies and practices. As such, oversight committees can look to GAO for information on these issues and more. 
High-Risk Series: An Update
Since 1990, GAO has periodically reported—generally at the start of each new Congress—on government operations it identifies as having a high risk of fraud, waste, abuse, and mismanagement. The list has increasingly grown to include programs or agencies that need urgent attention or transformation, such as the Department of Homeland Security. In the January 2005 update, GAO presented the status of areas previously identified as high-risk. These included two involving FAA—FAA Financial Management and FAA Air Traffic Control modernization. We determined that FAA’s progress in improving financial management overall, a high-risk area since 1999, had been sufficient to remove it from the list. However, while FAA had made progress in addressing root causes of problems with its Air Traffic Control modernization, originally designated as high-risk in 1995, we maintained the high-risk designation. Therefore, the status of FAA’s Air Traffic Control modernization may be an area for oversight by the Transportation and Infrastructure Committee.

21st Century Challenges: Reexamining the Base of the Federal Government
In February 2005, GAO issued a report on 21st century challenges facing the nation—including the federal government’s long-term fiscal imbalance and changing demographics—that suggests the need to reexamine the base of the federal government. The report is intended to help Congress address these challenges by providing a series of illustrative questions, both generic and specific to 12 examination areas, that could help support a fundamental and broad-based reexamination initiative. One of the 12 examination areas we identified is transportation, in which the report describes FAA’s challenge in addressing the declining revenues in the Aviation Trust Fund and how that could affect funding for the agency. 
Committees could ask the related illustrative question: Should the federal government continue to provide public financing to stimulate private financing in areas such as aviation, where a mix of private and public beneficiaries exists? In addition, through our review of federal programs and activities, we have a large body of work on aviation issues, FAA management, programs, and performance. Further, committees can also request additional evaluations to address issues of further interest. Recent examples of these reports include the following: National Airspace System: Initiatives to Reduce Flight Delays and Enhance Capacity Are Ongoing but Challenges Remain; Airport and Airway Trust Fund: Preliminary Observations on Past, Present, and Future; Air Traffic Control: FAA Needs to Ensure Better Coordination When Approving Air Traffic Control Systems; Air Traffic Control: FAA’s Acquisition Management Has Improved, but Policies and Oversight Need Strengthening to Help Ensure Results; Aviation Safety: FAA Needs to Strengthen the Management of Its Designee Programs; and National Airspace System: FAA Has Made Progress but Continues to Face Challenges in Acquiring Major Traffic Control Systems. DOT’s OIG works within DOT to promote effectiveness and prevent or stop waste, fraud, and abuse in departmental programs through audits and investigations. The OIG also consults with Congress about programs in progress and proposed laws and regulations. The OIG publishes semiannual reports, which summarize its recent audits and investigations. In addition, the OIG annually reports on the top management challenges facing DOT. DOT’s Top Management Challenges report can be found at: http://www.oig.dot.gov/item.jsp?id=1701. Three challenges identified in the most recent management challenges report by the OIG relate wholly to FAA. Mitigating flight delays and relieving congestion—actions needed to meet demand. 
The OIG report states that the growth in aviation operations has brought an increase in the number of aviation delays, with the incidence, rate, and length of delays in the summer of 2005 approaching those of summer 2000, generally regarded as the worst summer for aviation delays. The report states that DOT will need to develop a toolbox of relief measures to use, including new construction, technological improvements, procedural changes, administrative controls, and market-based solutions. The report also states that new runways provide the greatest increases in capacity, and that DOT and FAA will need to ensure that navigation equipment and airspace modifications are in place when the eight new runway projects, planned to be completed by 2008, open. Finally, FAA will need to continue to consider the use of market-based solutions to mitigate congestion, such as schedule caps and congestion pricing. Reauthorizing aviation programs—establishing requirements and controlling costs are prerequisites for examining FAA financing options. The OIG report states that a major focus of the FAA over the next year will be preparing to reauthorize a wide range of aviation programs and exploring alternative financing mechanisms. Challenges facing FAA include (1) controlling costs of major acquisitions by delivering new systems that work, are on time, and are within budget, and by making decisions on the scope of billion-dollar projects that have been delayed for years; (2) getting control of support service contracts, reducing associated costs, and following through on the implementation of new procedures; (3) establishing requirements for the next generation air traffic management system; (4) addressing the expected surge in controller attrition and negotiating an affordable and equitable bargaining agreement; and (5) completing a cost-accounting system to reduce costs and improve operations. 
Aviation safety—developing effective oversight programs for air carrier operations, repair station maintenance, and operational errors. The OIG report states that the FAA maintains an impressive safety record, but still faces challenges with air carrier and repair station oversight as a result of financial uncertainty, competition from low-cost carriers, and rebounding air traffic. Further, the report states that the FAA experienced an increase in the number of reported operational errors—when planes come too close together in the air—over the past year, and identified additional locations where operational errors had gone unreported. Effective communication among agency officials, Members of Congress, and congressional staff is needed to ensure that the information agencies provide meets committee needs. While considerable information resources are available, they may not be presented in a manner that is useful to committees. We have previously reported, in a review of interactions between the Congress and other executive branch agencies, that although agencies collect and produce a great deal of useful information, much of it did not reach the interested congressional committees, and the information that did reach the committees was difficult to digest, too highly aggregated, or received too late to be useful. While FAA provides a great deal of information on its Web site, enhancing access to agency information using technology can improve the timeliness and usefulness of agency information to the Congress. For example, information alerts and summaries from the agency could be effective information sharing tools. Further, regular meetings between committee staff and agency officials could identify the committee’s principal oversight objectives, provide a forum to discuss the issues, and develop the best approaches to meet them. Providing relevant agency information using technology solutions can improve committee access and minimize the effort required of agency staff. 
House Transportation and Infrastructure Committee staff indicated that FAA has a large quantity of information available and effective communication between the staff and the agency, but it is also interested in using technology to gain additional, timely access to agency data when conducting oversight. From our discussions with committee and agency staff, improving access through technology solutions could meet the needs of both groups. Access to information could be improved by a For Congress page on FAA’s Web site; a Frequently Asked Questions section on the For Congress page; a Web site subscription service notifying committee staff when relevant information has been updated; and moderated access rights to selected FAA documents. Several applications allowing Web-based access to information could benefit both the committee seeking information as well as the agency that provides information. For example, as a result of our discussions with committee and agency staff, FAA has initiated a For Congress page on its Web site. The page provides a single point of access for information committee staff identified in our discussions as relevant for oversight, as shown in figure 5. In addition, following a recommendation contained in our draft report, FAA added a subscription e-mail service to notify congressional users about new information available, such as new press releases and speeches by agency officials. We had pointed out that a subscription service could enhance the timeliness with which Congress receives information for oversight. For example, a subscription service notifying committees when notices of proposed rulemaking and other regulatory or policy guidance documents are published would give committees relevant information in a timely manner. The For Congress Web site could be further improved by including a Frequently Asked Questions (FAQ) section to provide information often requested by committees. 
According to a manager within FAA, the agency provides a great deal of budget information to Congress in response to questions for the record (QFRs) that are submitted by the appropriations committees of both chambers. However, the agency response is shared only with the requesting committee, even though it could be useful to all committees involved in oversight. In addition, many of these QFRs, as well as other requests for information, are handled in an ad hoc manner by individual FAA officials. When similar requests for information arrive, FAA officials often have to create an entirely new response. An FAA official said the agency had a general FAQ section, available at the bottom of all FAA Web pages, but it does not include the QFRs or other questions regarding FAA planning, budgeting, or performance. An FAQ section on the For Congress Web page could minimize agency efforts by allowing it to post requested information once, rather than tying up valuable time and resources by repeatedly responding to similar questions. In addition, sharing agency responses to congressional information requests could enable quick access to information likely to assist in other congressional efforts. Other uses of technology, such as granting moderated access rights to selected FAA documents, could also enhance committee access to information. Moderated access would give committee staff increased access to FAA information, beyond what is available on the agency’s public Web site. To provide moderated access, individual committee staff would be issued accounts or use passwords to obtain access to information restricted to congressional users. The content allowed through the moderated access would be negotiated between the agency and the committee. One way for committees to identify documents that are available would be to provide increased search capabilities on the FAA Web site. 
Increasing the Web site search capability would allow committees to identify what information exists, even if the entire document content was not immediately available. Using this knowledge of what information exists, committees could better identify exactly which of the information they would like to have made available through moderated access. We have previously reported, in a review of interactions between Congress and other executive branch agencies, that communication between committees and agency staff is often one-way, with little opportunity for direct discussion. According to Transportation and Infrastructure Committee staff, they generally contact the agency on an ad hoc basis, when they have a specific question. Transportation and Infrastructure Committee staff and experts we interviewed said constant communication with agencies within the committee’s jurisdiction, both formal and informal, could contribute to successful oversight. Developing a routine schedule of meetings could create a degree of certainty for both parties that issues important to each will be discussed. The timing, frequency, attendees, and agenda items could be negotiated in advance by both parties. Meetings could serve several purposes—they could be used to identify the committee’s principal oversight objectives, provide a forum to discuss the issues, and develop the best approaches to meet them. Agency officials we spoke with also supported regular meetings with committees. An FAA official said establishing an effective way to regularly communicate with Transportation and Infrastructure Committee staff would better enable FAA to directly inform the committee about emerging issues, whereas now the committee often relies on third-party analysis and information.
They understood that such meetings were not only opportunities for the committee to improve its oversight capacity, but also were opportunities for the agency to identify issues that may have received less attention and to help put the large amount of performance, budget, and financial information in a broader context so that committees can better understand the agency’s operations. The potential benefits of regular committee and agency staff meetings were evident during the constructive discussions coordinated by GAO for this report. In order to conduct effective oversight of federal agencies and programs, congressional committees need access to timely and useful information. The types of information we identified as available for FAA management could also be used for oversight. Moreover, these types of information are produced routinely by all federal agencies and could be used by committees of jurisdiction to regularly monitor agency performance. However, as government grows more complex and agencies produce more information, it becomes harder for Congress to access, analyze, and summarize this information to develop its policy positions and legislative enactments. New ways must be continually found to use emerging technology and approaches to make agency information transparent and readily available. But despite the availability of information, and in FAA’s case, its public accessibility, more can be done to make this information readily accessible to congressional committees. In particular, improving access to information via technology solutions like those described in this report could allow congressional committees to access information as needed and minimize the number of duplicative information requests agencies are asked to respond to.
In addition, establishing a schedule of routine meetings will provide congressional committees and agency officials with the opportunity to discuss in-depth the issues and challenges facing all federal agencies, including FAA. Establishing a collaborative approach to oversight will allow more consistent, rather than ad hoc, committee oversight. Importantly, these findings constitute lessons learned that may be transferable to other agencies. We recommend that the Secretary of Transportation direct the Administrator of FAA to take the following actions to further enhance committee access to FAA information:
- Continue to work with committee staff to further refine the For Congress Web site by improving the flow of information and taking advantage of emerging technologies.
- Include a Frequently Asked Questions page on the For Congress site, allowing oversight committees to quickly find answers to commonly requested items relevant to Congress.
- Add moderated access on the For Congress Web site to allow access to information that should be made available to congressional committees, yet may not be appropriate for the general public.
- Consider offering regular meetings between the Members of the committee and key staff with senior FAA executives to address matters of mutual concern.

We provided a draft of this report to the Secretary of the Department of Transportation for review and comment. We received comments from FAA officials, including the Deputy Assistant Administrator for Financial Services, who indicated that they were pleased to serve as our case study and that they would consider the report’s recommendations as they continue to strive for excellence in fulfilling the Congress’ information needs.
The officials said that they endeavor to ensure Congress is fully informed of FAA’s planned and ongoing programs and activities, relying on a staff of dedicated professionals who know and understand the needs of Congress to maintain a steady flow of useful information to Congress. The officials also said that they make extensive use of technology to enhance the information available to Congress. They noted that a considerable amount of information is available to Members of Congress and their staff in a section of FAA’s Web site dedicated to serving the information needs of Congress—as our report notes, an improvement developed as a result of discussions between agency and congressional staff during our review. In addition, they indicated they had created a subscription e-mail service to enable committee staff to be notified when information is updated on their Web site, such as with new press releases and speeches by agency officials. As noted earlier, this action was recommended in our draft report; consequently, since FAA has taken these steps, we have eliminated the recommendation from the final report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Transportation and will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-6543 if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix IV.
The objectives of this report were to identify (1) information FAA produces that could enhance congressional oversight; (2) other available information resources that could enhance congressional oversight; and (3) how committee access to FAA’s information could be improved to enhance timeliness and usefulness. To identify the information and delivery mechanisms that would enhance the committee’s ability to oversee FAA programs and management, we met with staff from the U.S. House of Representatives Committee on Transportation and Infrastructure and its Subcommittee on Aviation. To identify information produced by FAA that could enhance oversight, we met with FAA senior officials from numerous offices, including several lines of business—Airports; Air Traffic Organization; and Aviation Safety—and staff offices—Aviation Policy, Planning and Environment; Financial Services; Government and Industry Affairs; Human Resources and Management. In addition, we met with officials from the Chief Information Office/Office of Information Services and the Office of Inspector General for the Department of Transportation. To identify information resources external to FAA that could enhance congressional oversight, we met with officials from other government entities such as the Congressional Research Service, the General Services Administration’s FirstGov initiative, and the Office of Management and Budget. In addition, we met with technology representatives from LexisNexis. Finally, we attended meetings with representatives from the Mercatus Center, CATO Institute, and the Heritage Foundation, hosted by the House Committee on Transportation and Infrastructure. In addition, we reviewed FAA performance, budget, and financial documents and FAA’s Web site. We also reviewed reports and evaluations produced by analytical agencies and organizations and prior GAO work in this area. Written comments from FAA are included in appendix II.
We conducted our work from September 2004 through November 2005 in accordance with generally accepted government auditing standards. FAA’s annual financial statements can be used to analyze the agency’s operating results and its financial position. Most of this analysis involves looking at how various individual reported amounts interrelate or represent the agency as a whole, and how those amounts or relationships change from period to period. The historical information presented can establish a baseline for estimates of future operations and funding needs. Agency financial information can be valuable for
- facilitating an understanding of an agency’s operations;
- providing a common database for the development, analysis, and debate of budgets;
- supporting an historical perspective from which to evaluate future plans, budgets, and spending proposals;
- assessing agency accountability for actual results when compared to plans; and
- evaluating program costs.

Further information regarding federal financial statements can be found in a guide to the annual financial report of the U.S. Government, published recently by GAO. This guide can be helpful to Congress and taxpayers in evaluating both governmentwide financial reports and those of individual agencies. FAA’s balance sheet shows an end-of-the-year view of its overall financial position: its assets (what it owns), its liabilities (what it owes), and the difference between the two (its net position). A wide variety of analyses can be applied to information presented in FAA’s consolidated balance sheets for fiscal years 2003 and 2004, which are presented in figure 6. Committee staff could use information from FAA’s balance sheet to facilitate a better understanding of the agency’s financial position, addressing questions such as:
- What are FAA’s largest asset and liability categories?
- What is the makeup of FAA’s assets and liabilities?
- What future funding may be required to replace deteriorating operating assets and to satisfy long-term liabilities?
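The relationships behind these questions are simple arithmetic: net position is total assets less total liabilities, and the makeup of assets is each category's share of the total. A sketch with hypothetical dollar amounts (not FAA's reported figures):

```python
def net_position(total_assets, total_liabilities):
    """Net position = what the agency owns less what it owes."""
    return total_assets - total_liabilities

def category_shares(categories):
    """Each category's percentage of the total (values in $ billions)."""
    total = sum(categories.values())
    return {name: round(value / total * 100, 1) for name, value in categories.items()}

# Hypothetical balance sheet amounts, in $ billions.
assets = {"property, plant, and equipment": 14.0, "investments": 10.0, "other": 4.0}
liabilities_total = 6.0

print(net_position(sum(assets.values()), liabilities_total))  # 22.0
print(category_shares(assets))
```

Applied to two successive balance sheets, the same share calculation also surfaces the year-over-year shifts in asset makeup that the questions above probe.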
For example, as shown in figure 7, FAA’s two largest asset categories are property, plant, and equipment, valued at about $14.5 billion, and investments, valued at about $10.3 billion. For additional information about the makeup of these assets, the balance sheet refers readers to the related notes. Referring to the related note 6, one can learn that the acquisition value (cost) of personal property (e.g., equipment) increased by $1.3 billion, or 10 percent, from fiscal year 2003 to fiscal year 2004, and that the sizeable increase in the reported cost of property, plant, and equipment includes new acquisitions of National Airspace System equipment. The balance sheet and notes also show that FAA has significant amounts invested in the Airport and Airway Trust Fund but that the balance of these investments fell during fiscal year 2004. A possible inquiry to FAA might address the relationship between the investment balance and additions to property, plant, and equipment. Also disclosed in note 6, the accumulated depreciation of each asset class is one potential indicator of the relative deterioration of those assets. Accumulated depreciation is ultimately limited to the original acquisition value of an asset, and substantially depreciated assets may soon require funding for their replacement. The balance sheet also indicates a significant percentage increase in accounts receivable that are not intragovernmental transactions among federal entities. Though less significant than some of the other amounts shown in the balance sheet, such an increase might warrant a follow-up discussion with FAA regarding its cause and whether this indicates a new trend that will require funding from additional appropriations in the future. FAA’s statement of net cost is intended to show how much it costs taxpayers to operate FAA.
Net cost is calculated by subtracting any earned revenues from gross costs, which include program costs as well as administrative costs, resulting in FAA’s costs to taxpayers. As shown in figure 8, FAA’s statement of net cost presents cost information for each of its four major lines of business—air traffic organization, regulation and certification, airports, and commercial space transportation—and two categories that are not lines of business, including agency overhead. Committee staff could use information from FAA’s statement of net cost to enhance their understanding of possible future cost trends, addressing questions such as:
- How much did FAA’s net cost increase or decrease from the prior fiscal year?
- Which of FAA’s programs experienced the largest increase and which experienced the largest decrease in net cost from the prior fiscal year?
- Which of FAA’s programs accounted for most of its net cost?

For example, FAA’s statements of net cost for fiscal years 2003 and 2004 show that, other than a nearly $200 million (6.8 percent) increase in net costs related to the airports program, operating results were substantially consistent for those two years, indicating that future operating costs of FAA’s other business lines may be stable. Based on the airports program’s increase, a reader may decide to perform further analysis using FAA’s statements of net cost from prior fiscal years. As shown in figure 9, further analysis of the airports program over time indicates that net costs for the program have doubled over the last four fiscal years. This may prompt questions to determine the causes of the increase, whether this growth was expected, and, going forward, how much the airports program should continue to grow. FAA provides additional information about the distribution of net costs in note 12 of its financial statements, which is summarized in figure 10.
This information shows that FAA’s most costly line of business was air traffic organization, which accounted for about two-thirds of its net costs. The net cost information provided in note 12 also shows that 72 percent of FAA’s net costs were used to support its strategic goal of safety. Using the information about net costs disclosed by FAA, a reader can consider whether FAA’s current cost distribution appropriately reflects its strategic goals and congressional priorities, or whether resources should be redirected. FAA’s statement of changes in net position shows how it financed its operations for the fiscal year. It shows the agency’s net position at the beginning of the fiscal year, the major inflows and outflows of funds that caused the net position to change during the year, and the ending net position. FAA’s statements of changes in net position for fiscal years 2003 and 2004 are displayed in figure 11. Committee staff could use information from the statement of changes in net position to facilitate a better understanding of FAA’s financial position and direction, addressing questions such as:
- What were FAA’s primary financing sources and how much did they increase or decrease?
- To what extent did FAA’s excise tax revenue cover its net costs?
- Did FAA’s net position improve or deteriorate?

For example, FAA’s statement of changes in net position shows that FAA is primarily financed through excise tax revenue and appropriations. However, fiscal year 2004 appropriations used decreased by about 20 percent from the previous year, while excise taxes and associated revenue rose by about 3 percent, conditions that, if analyzed in greater detail, might reveal important information about the agency’s future aggregate spending or income trends. For example, the decrease from fiscal year 2003 to 2004 in appropriations used approximated the amount associated with FAA’s 2003 transferred operations, leading a reader to infer that the two are related.
However, analyzing the trend of this information going forward may tell a different story about the agency’s direction. If the trend indicated by FAA’s statement of changes in net position for fiscal year 2004 continues, FAA may be able to meet more of its costs through service fees and excise taxes rather than appropriated funds. Also, the percentage composition of financing sources can be compared to that of other agencies or programs. The statement of budgetary resources presents the amount of budgetary resources available during the fiscal year and the status of those resources at the end of the year. This statement provides basic information about budget authority made available from appropriations, fee collection, and, when applicable, borrowing authority. The relationship of obligations to outlays is also presented for the fiscal year. FAA’s statements of budgetary resources for fiscal years 2003 and 2004 are displayed in figure 12. Committee staff could use information from FAA’s statement of budgetary resources to obtain an overview of the agency’s financial position and direction, addressing questions such as:
- Were there increases or decreases in budget authority, unobligated budgetary resources, total budgetary resources, obligations incurred, and/or disbursements?
- To what extent were current fiscal year budgetary resources used?

For example, FAA’s statements of budgetary resources for fiscal years 2003 and 2004 show that budget authority, budgetary resources, obligations incurred, and disbursements all increased in fiscal year 2004, indicating a possible expansion in FAA’s overall activities for the year. However, FAA’s budgetary resources increased at a faster pace than outlays and obligations, which might indicate a change in FAA’s budgetary needs that should be analyzed further.
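The extent to which budgetary resources were used can be expressed as an obligation rate: obligations incurred divided by total budgetary resources. The amounts below are hypothetical; the report states only that FAA's resources grew faster than its obligations, which would show up as a falling rate.

```python
def obligation_rate(obligations_incurred, total_budgetary_resources):
    """Percentage of available budgetary resources obligated during the year."""
    return round(obligations_incurred / total_budgetary_resources * 100, 1)

# Hypothetical amounts in $ billions: resources grow faster than obligations,
# so the obligation rate falls from one year to the next.
print(obligation_rate(14.0, 15.0))  # 93.3
print(obligation_rate(14.4, 16.0))  # 90.0
```

A committee comparing the rate across years, or across the fund types discussed next, gets a quick read on whether available resources are actually being put to use.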
FAA provides additional information about the use of its budgetary resources in the required supplementary information section of its PAR, which includes a schedule of budgetary resources by major fund type. As shown in figure 13, an analysis of this schedule shows that the operations fund uses the most budgetary resources, followed by the grants-in-aid to airports fund and the facilities and equipment fund. In addition, readers may compare the fiscal year 2004 schedule of budgetary resources by major fund type to schedules for prior years. A comparison of the fiscal year 2003 and 2004 schedules included in the 2004 PAR shows that budgetary resources for facilities and equipment grew by 3.8 percent, compared to 8.1 percent growth for grants and 5.0 percent growth for operations. This type of analysis allows readers to consider whether FAA’s current use of budgetary resources is efficient and reflects congressional priorities. The statement of financing reconciles the resources used to finance an agency’s operations for each fiscal year using budgetary accounting with the net cost of operations determined using the accrual basis of accounting. It explains the differences between an agency’s obligations of budget authority as reported in budget documents and the statement of budgetary resources, and the net cost of its operations as shown in the statement of net cost, indicating the various categories of transactions that are considered when preparing one of those statements but not the other. The statement illustrates the link between budgetary accounting (primarily cash basis), which records obligations when goods and services are ordered, and financial (accrual basis) accounting, which records expenses when goods are consumed and services are received in fulfillment of the agency’s objectives. FAA’s fiscal year 2003 and 2004 statements of financing are shown in figure 14.
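The reconciliation the statement of financing performs can be reduced to a sketch: capitalized asset purchases consume budgetary resources when ordered but enter net cost only as depreciation as the assets are used up. The single adjustment and the dollar amounts below are hypothetical simplifications of the full statement.

```python
def accrual_net_cost(budgetary_obligations, capitalized_acquisitions, depreciation):
    """Reconcile budgetary obligations to accrual-basis net cost (simplified).

    Asset purchases are removed (they are not yet an expense), and
    depreciation on existing assets is added back in.
    """
    return budgetary_obligations - capitalized_acquisitions + depreciation

# Hypothetical amounts in $ billions: $10.0 obligated, of which $2.5
# bought capital assets; $1.8 of depreciation recognized this year.
print(accrual_net_cost(10.0, 2.5, 1.8))  # 9.3
```

In the actual statement, many more reconciling categories appear, but each follows the same pattern: items that consume budgetary resources without being current-year expenses are subtracted, and expenses that consume no current-year resources are added.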
Committee staff could use information from the statement of financing to facilitate an understanding of FAA’s financial position and direction, addressing questions such as:
- How much of FAA’s net costs were due to the depreciation of its assets?
- How much did FAA spend on capitalized fixed assets?

For example, FAA’s statements of financing for fiscal years 2003 and 2004 show an increase of 29 percent in resources used to acquire assets, transactions that affect budgetary resources but are not shown on the statement of net cost until the assets are used up or, in the case of property, plant, and equipment, depreciated. As a result, additional oversight may be warranted for the increase in resources being used to finance the acquisition of assets. For further information, contact Bernice Steinhardt at (202) 512-6543 or steinhardtb@gao.gov. In addition to the contact named above, Linda Calbom, Director; Christine Bonham, Assistant Director; Elizabeth Curda, Assistant Director; Jack Warner, Assistant Director; Kevin J. Conway, Fred Evans, Benjamin Licht, and Chelsa Gurkin made significant contributions to this report.

Pursuant to various statutes, federal agencies develop an abundance of performance, budget, and financial information that could be useful for Congress’ review and monitoring of agencies. However, agencies’ understanding of Congress’ information needs is often limited, and agencies may not be providing timely information in a format that aids congressional understanding of trends and issues. Thus, Members and their staff may not be aware of or avail themselves of certain information. To describe the information available and how it might be used to support congressional oversight, the Federal Aviation Administration was selected as a case study, in part due to the large quantity of information already available.
GAO was asked to identify: (1) information FAA produces that could enhance congressional oversight, (2) other technology and information resources that could enhance congressional oversight, and (3) how committee access to FAA's information could be improved to enhance its timeliness and usefulness. The Federal Aviation Administration (FAA) has made available much of the information and analytic resources that Congress needs to carry out its oversight function. For example, FAA has a strategic plan with long-term, outcome-oriented goals and objectives. Its annual Performance and Accountability Report includes the agency's progress in achieving its goals, and allows Congress to monitor performance trends. This report also provides financial information useful for analyzing its operating results and financial position. FAA's budget documents combined with performance data could provide Congress information to use in determining whether resources are achieving the planned performance improvements. Used together, this information could assist Members of Congress and congressional staff in their oversight responsibilities. Through its legislative support agencies--GAO, Congressional Research Service and the Congressional Budget Office--and the Department of Transportation's (DOT) Inspector General (IG), congressional committee staff also have access to considerable resources for oversight. For example, GAO's 2005 High Risk Series Update includes FAA's Air Traffic Control Modernization program and discusses progress the agency has made in addressing its problems. DOT's IG annually reports on the top management challenges facing FAA, such as safety and capacity challenges. Effective communication is needed to ensure that information agencies provide meets congressional needs. While considerable information resources are available, they may not be available in a manner that is useful to committees. 
We have reported that although agencies collect and produce a great deal of information, much of it did not reach the interested committees, and the information that did reach them was difficult to digest, highly aggregated, or received too late to be useful. In the case of FAA, House Transportation and Infrastructure Committee staff said FAA has a large quantity of information available and effective communication between the staff and the agency, but the committee is interested in using technology to gain additional agency data. While FAA provides a great deal of information on its Web site, it could take additional advantage of technology to improve the timeliness and usefulness of information to the Congress. For example, a Frequently Asked Questions section could provide quick access to information often requested by committees. As a result of our discussions with committee and agency staff, FAA has initiated two suggested technology enhancements: a For Congress page on its Web site, providing a single point of access for information relevant for oversight, and a Web site subscription service notifying committee staff when relevant information has been updated on its Web site. Further, regular meetings between congressional committees and agency officials could identify the committee’s oversight objectives, provide a forum to discuss the issues, and develop approaches to meet them. Importantly, these findings constitute lessons learned that may be transferable to other agencies.
In 1995, the Congress passed the ICC Termination Act, which abolished the Interstate Commerce Commission (ICC) and created the Board. The act transferred many of ICC’s core rail functions to the Board, including the responsibility to review and approve railroad mergers. The Board has exclusive jurisdiction to review proposed rail mergers, and if approved by the Board, such mergers are exempt from other laws (including federal antitrust laws that would otherwise apply to the transaction) as necessary to carry out the transaction. The Board also conducts oversight of mergers that have been approved. However, there is no statutory requirement for merger oversight. ICC had approximately 400 employees in 1995, its last year of operation. For fiscal year 2001, the Board received an appropriation to support 143 employees. In October 2000, the Board proposed modifications to its regulations governing major rail consolidations. According to the notice of proposed rulemaking, the Board recognized that current merger regulations are outdated and inappropriate for addressing future major rail mergers that, if approved, would likely result in the creation of two North American transcontinental railroads. In June 2001, the Board adopted final regulations governing proposed major rail consolidations. The final regulations recognize the Board’s concerns about what the appropriate rail merger policy should be in light of a declining number of Class I railroads, the elimination of excess capacity in the industry, and the serious service problems that have accompanied recent rail mergers. The final rules substantially increase the burden on applicants to demonstrate that a merger is in the public interest, in part by providing for enhanced competition and protecting service. The rules also establish a formal annual oversight period of not less than 5 years following a merger’s approval. 
The Board is responsible for approving railroad mergers that it finds consistent with the public interest. When necessary and feasible, conditions are imposed by the Board to mitigate any potential harm to competition. Oversight is designed to ensure that merger conditions have been implemented and that they are meeting their intended purpose. In determining, under the ICC Termination Act of 1995, whether proposed mergers are consistent with the public interest, the Board is required to consider a number of factors that relate to competition. These include the effect of a proposed transaction on the adequacy of transportation to the public; the effect on the public interest of including, or failing to include, other rail carriers in the area involved in the proposed transaction; and the impact of the proposed transaction on competition among rail carriers in the affected region or in the national rail system. The act also establishes a 15-month time limit for the Board to complete its review of accepted applications for mergers between Class I railroads and reach a final decision. Since the Board was created, two applications for merger between Class I railroads have been submitted—the acquisition of Conrail by CSX and Norfolk Southern, and the merger of Canadian National and Illinois Central—both of which were approved. The Board also approved Union Pacific’s acquisition of Southern Pacific, an application that had originally been submitted to ICC. During the merger review process, the Board considers comments and evidence submitted by all interested parties, which, together with the application, form the record upon which the Board bases its decision. The applicants as well as interested parties may submit information on the potential public benefits and potential harm of a proposed merger. Public benefits can include such things as gains in a railroad’s efficiency, cost savings, and enhanced opportunities for single-line service.
Potential harm can result from, among other things, reductions in competition and harm to a competing carrier’s ability to provide essential services—that is, services for which there is a public need but for which adequate alternative transportation is not available. Whenever necessary and feasible, the Board imposes conditions on mergers that it approves so as to mitigate potential harm associated with a merger, including harm to competition. In determining whether to approve a merger and to impose conditions on its approval, the Board’s concern has focused on the preservation of competition and essential services—not on the survival of particular carriers or enhancing competition. Board officials told us that, while the Board’s efforts to preserve competition have primarily focused on maintaining competitive options for those shippers that could face a reduction in service from two railroads to service by only one railroad, competition that is the result of having two “nearby” railroads has also been preserved. Conditions can include such things as trackage rights, switching arrangements, access to another railroad’s facilities or terminal areas, or divestiture of lines. For example, in the UP/SP merger, the Board granted about 4,000 miles of trackage rights to the Burlington Northern and Santa Fe Railway (BNSF) to address competition-related issues for those rail corridors and shippers that could have potentially faced a reduction in service from two railroads (UP and SP) to service by only one railroad (UP). (See fig. 1.) The Board may also impose privately negotiated settlement agreements as conditions to mergers. The Board will normally impose conditions only when a merger would produce effects harmful to the public interest (such as a significant reduction in competition) and the condition will ameliorate or eliminate these harmful effects.
In addition, a condition must be operationally feasible, produce net public benefits, and be tailored to address the adverse effects of a transaction. If a merger is approved, the Board has broad discretion to impose oversight conditions, as well as flexibility in how it conducts oversight. Such oversight conditions establish the Board’s intent to monitor a merger’s implementation and to conduct annual oversight proceedings (called formal oversight in this report). An oversight condition may also establish a time period during which the Board will monitor the effects of a merger. Although oversight conditions are not necessary for the Board to retain jurisdiction over a merger—particularly with regard to carrying out conditions the Board has imposed—oversight conditions ensure that the Board’s retained jurisdiction will be meaningfully exercised and give parties an added opportunity to demonstrate any specific anticompetitive effects of a merger. According to the Board, oversight also (1) permits the Board to target potential problem areas for the subsequent imposition of additional conditions if this proves warranted in light of experience, (2) puts applicants on notice that they consummate the transaction subject to reasonable future conditions to mitigate harm in limited areas, and (3) helps to ensure cooperation by the merging carriers in addressing problems and disputes that may arise following merger approval. As such, oversight provides an additional check that Board-approved mergers are in the public interest. When an oversight period ends, the Board has stated that it continues to retain jurisdiction and can reopen a merger proceeding, if necessary, to address concerns pertaining to competition and other problems that might develop. Board officials described postmerger oversight as a process consisting mainly of an annual oversight proceeding.
This proceeding is an examination of the implementation of merger conditions and whether conditions have effectively met their intended purpose. Oversight is generally conducted each year for 5 years after a merger has been approved. As part of the oversight proceeding, public comments and supporting information are formally submitted into the record by shippers, carriers, and other interested parties. Periodic progress reports, which provide, among other things, details on the implementation of conditions, are also submitted by merging railroads as required. Board officials told us that reporting requirements are frequently used as part of oversight and that such reporting has served to replace the industry and merger monitoring once conducted by ICC’s field staff. As an adjudicatory body, the Board relies on parties affected by a merger to identify whether a proposed transaction has harmed competition and, if so, to what extent; the Board does not independently collect this type of information. Board officials noted that it has been standard practice in merger oversight to require relevant railroads, such as UP and BNSF in UP/SP oversight, to make available under seal to interested parties the railroads’ confidential 100 percent traffic tapes—tapes that include information such as shipments moved and freight revenue generated—so that parties other than the merging carriers would also have the opportunity to submit postmerger rate analyses to the Board. As part of the oversight process, the Board may consider information obtained from monitoring industry operations, such as service levels, as well as any studies conducted, whether specific to that merger or industrywide. In conducting formal oversight, the Board may modify existing conditions if they are not achieving their intended purpose or may impose additional reporting requirements if necessary. 
The Board also has the authority to initiate a new proceeding to determine if additional conditions should be imposed to address unforeseen merger-related issues. Board officials noted that the agency engages in other activities associated with oversight. Included are such things as informal monitoring of merging railroads’ operations and service performance and responding to certain filings, such as petitions to clarify or modify a merger condition based on competition-related issues or other claims of merger harm. Although the Board retains some form of oversight jurisdiction for all rail mergers, the use of formal merger oversight has become standard only since the mid-1990s. Board officials told us that before 1995, formal postapproval oversight of mergers was rare and was instituted only in unusual situations when strong concerns about competition were present. These officials pointed to only two cases when a period of formal oversight was imposed prior to 1995: once in 1984 in a rail/barge merger between CSX Corporation and American Commercial Lines, Inc., and in 1992 as part of the merger of Wisconsin Central Transportation Corporation and Fox Valley & Western, Ltd. Neither case involved the merger of two or more Class I railroads. In both cases, however, oversight conditions were imposed in response to concerns raised about potential harm to competition. In recent years, in light of the complexity of transactions and the service and competitive issues that have arisen, the Board has expanded its use of formal oversight of railroad mergers. ICC did not impose specific oversight conditions on its approval of the 1995 Burlington Northern and Santa Fe Railway merger because, according to Board officials, there were few concerns raised in that merger about service issues or potential harm to competition. 
Since August 1995, when the BNSF merger was approved, the Board has imposed oversight on all three Class I railroad mergers that it has approved: the 1996 UP/SP merger, the 1998 Conrail acquisition by CSX and Norfolk Southern, and the 1999 Canadian National/Illinois Central merger. For two of the three transactions (UP/SP and Conrail), the oversight period was set for 5 years. In the third merger—Canadian National and Illinois Central—a 5-year oversight period was established with continuation to be reviewed annually. All three oversight periods are ongoing. The Board has significant discretion and flexibility to adapt its oversight as circumstances warrant. For example, in conducting oversight in recent years, the Board has, when necessary, incorporated additional monitoring elements, such as added reporting requirements, to supplement its oversight activities. The UP/SP merger provides a good illustration of service monitoring. As the result of a service crisis that developed during the implementation of this merger, the Board required both UP/SP and BNSF to provide weekly and monthly reports to its Office of Compliance and Enforcement—information which, according to Board officials, had never been available before. These reports included statistics on such things as average train speed, cars on line, and terminal dwell time—the time loaded railcars spend in a terminal awaiting continued movement. This information allowed the Board to monitor the operations and service levels of both railroads. Similar reporting requirements were imposed on both CSX and Norfolk Southern in the Conrail merger. In this instance, the Board, anticipating possible transitional service problems during the integration process, required the weekly and monthly reports both to monitor the merger’s implementation and to identify potential service problems. 
Board officials told us that as a result of the lessons learned in the UP/SP merger, oversight has expanded to incorporate monitoring of operational and service issues—in part to serve as an early warning of problems that might occur during the merger integration process. Future mergers will also be subject to operational monitoring. The merger rules adopted by the Board in June 2001 state that the Board will continue to conduct significant postapproval operational monitoring of mergers to ensure that service levels after a merger are reasonable and adequate. In general, the Board has found few competition-related problems when conducting oversight of recent mergers but has acted to modify some conditions designed to address such problems when it felt such action was necessary. Even though many of the shipper and railroad trade associations told us that the oversight process is valuable, some shippers and small railroads are dissatisfied with aspects of the Board’s oversight. In addition, some larger carriers are concerned that shippers are using the oversight process to address issues not related to mergers. The Board’s recently adopted merger rules could affect oversight by changing the focus of merger approval toward enhancing rather than preserving competition. A review of oversight decisions in recent merger cases shows that the Board has found few problems related to competition. Board officials also told us they believe that, to date, the conditions originally imposed on mergers have met their intended purpose and have mitigated any potential harm to competition. In determining whether to modify a condition, the Board reviews the evidence presented, considers the nature and extent of the alleged harm, and assesses what action may be warranted. In general, the Board has not found it necessary to modify or add conditions during oversight of recent mergers. However, the Board has found such action to be appropriate in some cases. 
For example, in December 1998, the Board added a condition and modified a condition in the UP/SP merger. The added condition addressed traffic congestion in the Houston/Gulf Coast area; the modified condition changed the location where BNSF railcars are transferred to another railroad. Similarly, in 1998 and 1999, the Board modified four conditions in the Conrail transaction. These modifications were designed to preserve competition by, among other things, introducing a second carrier and requiring carriers to negotiate an acceptable transfer point to interchange railcars bound for an Indiana power plant. Providing specific evidence of harm to competition is critical in obtaining additional Board relief. According to the Board’s decisions, shippers and others have sometimes alleged harm to competition during oversight without presenting specific evidence of such harm. For example, as part of the UP/SP merger, the Board granted over 2,100 miles of trackage rights to BNSF on the Central Corridor to preserve competition for those shippers that could have been reduced from service by two carriers (UP and SP) to service by only one (the merged UP/SP) and for those exclusively served shippers who benefited from having another railroad nearby. Some organizations have asserted that, despite the trackage rights, postmerger competition has not been adequate on this corridor. However, in its UP/SP oversight decisions, the Board has concluded that postmerger competition on this corridor has been adequate, in part because no shippers came forward with specific evidence of harm. In another instance, in the Conrail merger, the Board granted trackage rights to Norfolk Southern to access a power plant in Indiana. In order to use the trackage rights, Norfolk Southern negotiated a fee with CSX. 
The power plant owner believed that the negotiated fee was too high to allow adequate competition between the railroads and requested a lower fee so that Norfolk Southern could compete for its business. In denying this request, the Board stated that the evidence of harm presented was not sufficient, in part because both CSX and Norfolk Southern demonstrated that the negotiated fee would amount to only a minimal cost increase ($0.004 per ton) over the amount the Board had previously found to be reasonable. A review of merger oversight documents shows the Board has acted to address competition-related postmerger issues when it believed such action was necessary. For example, during oversight of the Conrail acquisition, the Board reduced fees for trackage rights and switching charged to Canadian Pacific to permit competition between CSX and Canadian Pacific Railway in the Albany, New York, to New York City corridor. Although the Board had initially set these fees in a postmerger decision, the Board later determined that the fees were too high to allow Canadian Pacific to use CSX tracks to provide meaningful competition between the carriers. Consequently, the Board acted to reduce the fees to promote competition. The Board also acted during the Conrail oversight period to void provisions in two contracts between CSX Intermodal, Inc., a rail shipper, and Norfolk Southern that required Norfolk Southern to be the primary carrier of CSX Intermodal goods between northern New Jersey and Chicago during the contract period. Voiding these provisions allowed CSX immediately to compete with Norfolk Southern for these shipments. Shipper and railroad trade associations and railroad companies with whom we spoke believe postmerger oversight is a valuable process. Officials from the National Grain and Feed Association and the National Industrial Transportation League told us that the Board has always been willing to listen to their concerns. 
Officials from Norfolk Southern and BNSF said the merger oversight process provides shippers and railroads with an opportunity to submit merger-related questions, problems, and concerns. Railroad and railroad association officials stated that the Board acts to protect the interests of the public and the shipping community by allowing railroads and shippers to work together during oversight to resolve actual and potential merger-related problems. Officials from one trade association said that without an oversight process, their members might be faced with a less desirable alternative. For example, officials from the American Chemistry Council told us that the only other option for shippers would be to use the Board’s time-consuming and expensive complaint process. Officials from the American Chemistry Council, as well as officials from UP and BNSF, said a 5-year oversight period has been a benefit to both railroads and shippers. However, an American Chemistry Council official said some mergers may need oversight for a longer or shorter period than 5 years and that it is unclear what type of oversight will occur after the 5-year oversight period for the UP/SP merger expires in 2002. Despite seeing oversight as a valuable process, some shipper and small railroad associations are dissatisfied with aspects of the Board’s oversight procedures. A number of reasons were cited. The Board has been viewed as unresponsive to concerns of shippers and small railroads. For example, an official representing the Edison Electric Institute told us that it had expressed concern to the Board in 2000 about the degree of competition for the transport of Utah and Colorado coal in the Central Corridor, but that the Board declined to answer questions about this issue. 
An official from the American Chemistry Council expressed similar frustration that the Board did not adopt any part of a plan developed by shippers and others to address the Houston/Gulf Coast service crisis that occurred during the implementation of the UP/SP merger. This plan had broad support from both private sector and state government officials. Dissatisfaction was also expressed about the time and resources required for preparing and submitting comments during the postmerger oversight period, especially for small shippers. For example, officials from the Edison Electric Institute and the American Chemistry Council told us that small shippers might not have the time or the money to invest in the formal oversight process. Finally, officials from several shipper associations and the American Short Line and Regional Railroad Association (an association representing smaller railroads) said their members are discouraged from participating in the oversight process, in part because of the reasons cited above. Although generally satisfied with the Board’s oversight process, officials at some Class I railroads have cited certain drawbacks to it. For example, officials at Norfolk Southern, CSX Transportation, and UP said some shippers use the formal oversight process as a mechanism to raise non-merger-related issues, which they claim have protracted the oversight process. Railroad officials told us that inviting comments by interested parties allows them to reintroduce issues that were initially denied during the merger approval process. They noted that, as a result, they must invest their time to address non-merger-related issues. Officials with Norfolk Southern said that if the Board allows parties to reintroduce issues already decided, this could delay implementation of a merger. Board officials told us that oversight is an open process and anyone can submit comments. 
The basis for the Board’s decisions is the merger and postmerger oversight record, and Board officials said they encourage parties such as shippers, railroads, and others to submit information into the record so that the Board can act with as much information as possible. However, Board officials acknowledged that parties sometimes reargue issues during oversight that were not decided in their favor in the merger decision. For example, in its November 2000 oversight decision in the Canadian National/Illinois Central merger, the Board refused to require that Canadian National sell its share of the Detroit River Tunnel as requested by various parties. The parties were concerned that Canadian National would competitively disadvantage the Detroit River Tunnel by not allowing needed capital investments to be made and favoring another nearby tunnel it owned. The Board found that this issue was not directly related to the merger and was a matter being privately negotiated between the parties. Finally, Board officials have said the oversight process has evolved over time and the Board has incorporated additional reporting and other requirements to provide more information on actual and potential problems experienced during merger implementation. Moreover, the Board has focused on preserving, not enhancing, competition and does not seek to restructure the competitive balance of the railroad industry during postmerger oversight. Both shipper association and railroad officials with whom we spoke recognized that the Board has a limited number of staff to conduct formal oversight. 
According to officials from the American Short Line and Regional Railroad Association, the Board’s perceived slowness in handling oversight issues may be attributable to the significant amount of information that needs to be processed during the annual oversight proceeding—information that is generally handled by a core team of 15 employees (who, Board officials noted, also work on agency matters other than mergers). Board officials acknowledged that their resources are limited. However, they said oversight offers an open, no-fee process in which any interested party may participate. They also said the Board has issued in a timely manner its decisions in the annual oversight proceedings, as well as in matters involving specific material issues during oversight. The rail consolidation rules issued in June 2001 could change how the Board conducts oversight by providing for merger applications to include plans to enhance competition and to ensure reasonable service and by holding applicants accountable if they do not act reasonably to achieve promised merger benefits. Shifting the focus of merger review towards enhancing competition and ensuring reasonable service, as well as including some degree of accountability for postmerger benefits, could require the Board to expend additional time and resources reviewing these issues. For example, the final rules would call upon merger applicants to enhance competition so as to offset any negative effects resulting from a merger, such as potential harm to competition and disruptions of service. This could affect the way the Board uses and oversees conditions during the merger approval and oversight processes. Similarly, to require railroads to calculate the net public benefits to be gained through a proposed merger and to hold them accountable for acting reasonably to achieve these benefits, such as improved service, the Board will monitor, as part of the general oversight proceeding, the realization of claimed merger benefits. 
These activities would enlarge the current focus of assessing whether conditions are working as intended. In the event that public benefits fail to materialize after a merger is approved, the Board said it would consider the applicant’s proposals for additional measures. It is not likely that the final merger rules will resolve all concerns expressed by shipper and railroad organizations about oversight. The final rules will not change the basic process established for oversight. While the final rules may address concerns of shippers and railroads about service levels by requiring merger applicants to develop service assurance plans, they will not address more general concerns that the Board is not responsive to their issues. Furthermore, the final rules will not likely address concerns about the time and resources necessary to participate in postmerger oversight. Rather, the amount of time and resources required could increase, given that during oversight the Board will assess enhancement of competition, service issues, and accountability for proposed merger benefits as well as whether conditions are working as intended. In addition, issues may continue to be introduced that are not directly related to the merger under review. Board officials said they do not consider participation in oversight to be an expensive or burdensome process. However, they acknowledged that the new merger rules would require applicants to provide more detailed information on competition, service, and benefits as part of the merger application and that the amount of time and resources required during oversight could increase. Finally, the final rules may also not address all of the shippers’ concerns about the extent of competition in the rail industry resulting from mergers. While provisions regarding the enhancement of competition may address some competition-related issues, it is not clear how these provisions will be implemented. 
Both shipper and railroad officials told us that enhanced competition had not been defined in the proposed rules and, therefore, they were not clear how the provisions might affect specific situations involving competition. The final rules acknowledge that the Board cannot predict in advance the type and quantity of competitive enhancements that would be appropriate in a particular merger proposal. Lastly, the new merger rules make clear that the Board will not use its authority to impose conditions during merger approval to provide a broad program of open access. We analyzed the effects of the 1996 UP/SP merger on rail rates in two selected geographic markets that have high concentrations of shippers that faced a reduction from service by two railroads to service by only one railroad (called 2-to-1 shippers). We found that the merger reduced rail rates for four of the six commodities we reviewed. However, in one instance, the merger placed upward pressure on rates, even though other factors caused overall rate decreases. For the remaining commodity, rates were relatively unchanged. Our analysis illustrates that the Board could make more informed decisions during oversight about whether merger conditions are protecting against harm to competition, as measured by the merger’s effect on rates, if it had information that separated rate changes specifically resulting from a merger from rate changes caused by other factors. A merger reduces the number of rail carriers and can potentially enhance the market power of remaining carriers. This enhanced market power could be used to profitably increase rail rates if no action were taken to preserve competition. Board officials told us that rate trends are a good indicator of postmerger competition. In 1996, UP acquired SP in a transaction that raised significant competition-related issues. 
This merger encompassed a number of geographic areas where the loss of competition from SP could have reduced the number of carriers from 2 to 1. Most of these areas were in Texas and Louisiana, but some were in the Central Corridor between California and Colorado. (See fig. 1.) In granting trackage rights to BNSF in this merger, the Board sought to replace the competition for potential 2-to-1 shippers in these geographic areas. To understand how the UP/SP merger affected rail rates, we looked at rail rates in two geographic areas—Reno, Nevada, and Salt Lake City, Utah— both in the Central Corridor. We selected these areas because they had high concentrations of potential 2-to-1 shippers and, according to BNSF and UP/SP officials, were less affected by the service crisis that developed during implementation of the UP/SP merger. They also provided relatively clear examples of where BNSF service substituted for SP service. The primary commodities shipped to and from Reno and Salt Lake City were nonmetallic minerals (such as barites) and chemicals (such as sulfuric acid or sodium). (See table 1.) Farm products (such as corn and wheat) accounted for about 13 percent of the traffic shipped to Salt Lake City. We also included coal in our analysis of Salt Lake City rail rates, since it accounted for the highest percentage of carloads shipped to and from that area. However, BNSF officials told us that, in general, they have not yet used the trackage rights they were granted to transport coal to or from the Salt Lake City area. In its decision approving the UP/SP merger, the Board noted that BNSF was granted access to only a small portion of coal traffic on the Central Corridor, mostly in the northwestern section of Utah. As the table shows, the potential 2-to-1 shippers served by BNSF, as a percentage of total shippers in these geographic areas, ranged from 10 to 22 percent. 
This is consistent with comments made by Board officials that BNSF received trackage rights to serve about 20 percent of the postmerger UP/SP traffic on the Central Corridor. Our analysis found that by itself the merger would have served to reduce rates for four of the six commodities shipped to or from the geographic areas we chose. (See table 2.) Specifically, the merger would have reduced rates for coal shipments to and from the Salt Lake City area (by 8 percent and 10 percent, respectively), chemical shipments from the Salt Lake City area (by 6 percent), and farm products to the Salt Lake City area (by 5 percent). However, the rates for shipments of chemicals to the Reno area would have increased by 21 percent because of the merger, while rates for shipments of nonmetallic minerals originating in the Reno area would have been relatively unchanged by the merger (i.e., the merger-related change was not statistically significant). The effect of a merger on rail rates depends on the cost savings the merger might generate relative to the exercise of any enhanced market power by the railroad carriers. Since the Board acted to preserve the level of competition by granting trackage rights to BNSF to serve potential 2-to-1 shippers in these geographic areas, the rate decreases from the merger likely reflect cost savings from the consolidation. Another way in which the merger could result in lower rates is if BNSF provided more effective competition to UP in the postmerger period than SP did in the premerger period. While the effects of a merger can put downward (or upward) pressure on rates, an analysis focused on overall rate changes alone could lead to an inaccurate conclusion about whether conditions imposed on a merger to mitigate potential harm to competition have been effective. 
The results of our analysis indicate that, in addition to merger effects, other factors, such as the volume of shipments, had an equal or greater influence on overall rate changes for the specific movements we examined. In some cases, the effects of these other factors were strong enough to offset or even reverse the downward pressure of the merger on rates. (See table 2.) For example, for shipments of chemicals from the Salt Lake City area and for shipments of coal to and from the Salt Lake City area, while the merger alone would have decreased rates, the rates nevertheless increased overall. On the other hand, while rates decreased overall for chemicals shipments to the Reno area, the merger by itself put an upward pressure on rates. Finally, we found that postmerger rates for potential 2-to-1 shippers (served by BNSF) in the Reno and Salt Lake City areas decreased for one of the commodities we looked at but were essentially unchanged in three other instances. (See table 3.) The rate changes for potential 2-to-1 shippers (served by BNSF) shipping chemicals from the Salt Lake City area were about 16 percentage points less than similar rates for shippers shipping similar products but served solely by UP. However, rail rate changes for potential 2-to-1 shippers (served by BNSF) who shipped farm products to the Salt Lake City area, nonmetallic minerals from the Reno area, and chemicals to the Reno area were all higher than for shippers served exclusively by UP, but this difference was not statistically significant, meaning that the rates were essentially unchanged. These results are not wholly unexpected, since the levels of rail competition for the two kinds of shippers—potential 2-to-1 and non-2-to-1—differ and rail rates are set using differential pricing. 
Under differential pricing, shippers with less effective transportation alternatives generally pay a proportionately greater share of a railroad’s fixed costs than shippers with more effective transportation alternatives. There are limitations in the analysis and data we used. The results presented are only for the two geographic markets we reviewed and cannot be generalized to other geographic locations or for rate changes from the UP/SP merger as a whole. In addition, although econometric models of the factors that determine rail rates have been used to analyze a variety of policy-related issues in rail transportation and have been useful, such a model can be sensitive to how it is specified. We tested the model’s key results to ensure that our findings were reliable and are confident that the results are reasonable for the commodities in the geographic areas we examined. Finally, the Carload Waybill Sample data used in our model also have limitations. For example, these data do not necessarily reflect discounts or other rate adjustments that might be made retroactively by carriers to shippers exceeding certain volume requirements. Our analysis provides an example of how rates subject to merger conditions could be analyzed. Although the results in this study are not directly comparable to those in other studies of rates that are based on broader geographic areas, our analysis suggests that overall rate changes do not identify the specific impact of mergers on rates. In general, the Board has been presented with rate studies that have focused on overall rate changes, not on the portion of changes caused by a merger. For example, rate studies prepared by UP during merger oversight indicate that, overall, rates decreased immediately after the merger and have continued to decrease at 2-to-1 points and for traffic moving in the Houston-Memphis and Houston-New Orleans corridors. 
Similarly, both CSX and Norfolk Southern have conducted studies of rail rates in the Buffalo, New York, area since their acquisition of Conrail in 1999. Again, these studies have focused on the overall direction of rate changes and have shown that rail rates in the Buffalo area have generally decreased. Neither the UP nor the CSX/Norfolk Southern rate studies identified the specific effects of mergers on rates—effects that could have potentially been different from the overall rate trends. According to Board officials, in general, the parties in merger oversight proceedings have focused on determining the overall magnitude and direction of rate changes without trying to relate such changes to specific causes, and the Board’s own December 2000 staff study of nationwide changes in rail rates took this approach. Board officials said they have attempted to take into account, in the context of postmerger oversight, such non-merger-related factors as the recent significant rise in diesel fuel prices but have not been presented with an econometric approach to analyze rail rates in the context of merger oversight. They said that they had questions and concerns about the precision and reliability of the analysis we conducted. However, the Board is amenable to seeing this general approach developed in the context of a public merger oversight record where it would be subject to scrutiny and refinement by relevant parties. Board officials noted that presenting and rebutting econometric studies, because of their sophisticated nature, could increase the burden of participating in the merger oversight process. It is important to note that the Board, in approving the UP/SP merger, was provided with various empirical rate studies by the applicants and interested parties that included econometric analyses. In addition, econometric evidence has played an important role in merger-related cases that have been reviewed by courts and other government agencies. 
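The kind of decomposition an econometric approach performs can be sketched in a few lines. The example below is purely illustrative and uses synthetic data: the log-linear specification, the variable names, and all coefficients are assumptions for demonstration, not the model or data used in our analysis or in any party's filings before the Board.

```python
# Illustrative sketch only: a log-linear regression that separates a
# merger's effect on rail rates from another factor (shipment volume).
# The specification, coefficients, and synthetic data are assumptions
# for demonstration, not the model or data used in the actual analysis.
import numpy as np

def merger_rate_effect(log_rate, merger_dummy, log_volume):
    """OLS of log(rate) on a merger indicator and log(volume); returns the
    estimated merger effect as a percentage change in rates, holding
    volume constant."""
    X = np.column_stack([np.ones_like(merger_dummy), merger_dummy, log_volume])
    beta, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
    return (np.exp(beta[1]) - 1.0) * 100.0

# Synthetic movements: the merger alone lowers rates about 8 percent,
# while postmerger volume growth pushes observed rates the other way.
rng = np.random.default_rng(42)
n = 400
merger = np.repeat([0.0, 1.0], n // 2)            # 0 = premerger, 1 = postmerger
log_vol = rng.normal(5.0, 1.0, n) + 0.5 * merger  # volume grows after the merger
log_rate = 3.0 - 0.08 * merger + 0.06 * log_vol + rng.normal(0.0, 0.02, n)

# A naive before/after comparison mixes the merger effect with volume growth.
naive = (np.exp(log_rate[merger == 1].mean()
                - log_rate[merger == 0].mean()) - 1.0) * 100.0
effect = merger_rate_effect(log_rate, merger, log_vol)
print(f"overall rate change: {naive:.1f}%; merger effect alone: {effect:.1f}%")
```

In this synthetic case the overall before/after comparison understates the merger's downward effect on rates because volume growth offsets part of it, which is the pattern described above for several of the movements we examined: overall rate changes alone do not reveal the portion attributable to the merger.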
As an adjudicatory agency, the Board relies on affected parties to identify alleged harm when it exercises oversight to ensure that conditions imposed in railroad mergers are working and that competition has not been harmed. Therefore, it is necessary for shippers, railroads, or others not only to identify instances when they have been, or might be, harmed, but also to present evidence to the Board demonstrating this harm. For the Board to make sound decisions about the extent to which mergers affect rate changes, the Board should have information that separately identifies the factors that affect rates and the specific impact of these factors. Without such information, the Board’s ability to evaluate whether merger conditions have been effective in protecting against potential harm to competition may be limited. To better assist the Board in the oversight of railroad mergers and in ensuring that conditions imposed in such mergers protect against potential harm to competition, we recommend that the Board, when appropriate, require railroads and others to provide information to the Board that separately identifies the factors affecting postmerger changes in rail rates and the specific impact of these factors on rate changes. In particular, the Board, when appropriate, should require railroads and others to provide information that identifies the effects of mergers on changes to rail rates, particularly in those geographic areas subject to potential reductions in competition. This information should be considered in deliberations on the need to modify conditions, add reporting requirements, or initiate proceedings to determine if additional conditions are required to address competition-related issues. We provided a draft of this report to the Surface Transportation Board and the Department of Transportation for their review and comment. The Board did not express an overall opinion on the draft report, but rather supplied suggested revisions to it. 
Most importantly, while the Board is amenable to seeing an econometric approach developed in the context of a public oversight record, it commented that such an approach could increase the burden of the parties participating in the merger oversight process. This increased burden might occur because of the effort entailed to develop, present, and rebut econometric studies. We agree that an increased burden might occur and incorporated this view into our report. Allowing parties to critique the usefulness of our recommendation and the effort involved in implementing it should provide the Board with the information it needs on implementation. The Board offered extensive clarifying, presentational, and technical comments which, with few exceptions, we incorporated into our report. The Department of Transportation did not express an overall opinion on the draft report. Its comments were limited to noting that several Class I railroads were under common control. We incorporated this change into our report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies of the report to congressional committees with responsibilities for transportation issues; the Secretary of Transportation; the Acting Administrator of the Federal Railroad Administration; the Chairman of the Surface Transportation Board; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will also be available on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834. Key contributors to this report were Stephen Brown, Helen Desaulniers, Leonard Ellis, John Karikari, Tina Kinney, Richard Jorgenson, Mehrzad Nadji, Melissa Pickworth, James Ratzenberger, and Phyllis Scheinberg. 
Approved August 16, 1995. Route miles: 35,400. Region: Western United States and Canada. Purchase price: $1.3 billion, plus assumed liabilities. Largely end-to-end. However, in approving this merger, ICC found that of the approximately 29 locations that were served by both railroads, only a few would have potentially sustained harm from reduced competition given the presence of other railroads and of extensive truck competition at many of the locations. Conditions were attached to preserve competition where necessary.

Approved August 6, 1996. Route miles: 38,654. Region: Western United States. Purchase price: $3.3 billion in cash and stock, plus assumed liabilities. Significant parallel components. In approving this merger, the Board granted about 4,000 miles of trackage rights to BNSF and other railroads to protect potential 2-to-1 shippers and others from loss of competition.

No Class I merger transactions.

Route miles: about 21,800. Region: Eastern United States and Canada. Purchase price: $9.9 billion, plus assumed liabilities and fees. Largely end-to-end. Although CSX Corporation and Norfolk Southern Corporation jointly acquired Conrail and then divided most of the assets between them, Conrail continues to operate certain shared assets areas for the joint benefit of CSX and Norfolk Southern. These shared assets areas are located in North Jersey (generally from northern New Jersey to Trenton, New Jersey), South Jersey/Philadelphia (generally from Trenton, New Jersey, to Philadelphia and southern New Jersey), and Detroit. Both CSX and Norfolk Southern have the right to operate their own trains, with their own crews and equipment and at their own expense, over any track included in the shared assets areas. Various other areas formerly operated by Conrail are subject to special arrangements that provide for a sharing of routes or facilities to a certain extent. For example, the Monongahela Area in Pennsylvania and West Virginia, although conveyed to Norfolk Southern, is available to CSX on an equal-access basis for 25 years, subject to renewal.
Approved May 21, 1999. Route miles: 18,670. Region: Midwestern United States and Canada. Purchase price: $1.8 billion, plus the value of 10.1 million common shares of Canadian National stock. End-to-end.

No Class I merger transactions.

No Class I merger transactions proposed through June 2001.

Our review focused primarily on the Board’s oversight of Class I railroad mergers that occurred since its creation in January 1996. These mergers included (1) the Union Pacific Railroad Company (UP) with the Southern Pacific Transportation Company (SP), (2) the Canadian National Railway Company with the Illinois Central Railroad, and (3) the acquisition of the Consolidated Rail Corporation (Conrail) by CSX Transportation, Inc., and the Norfolk Southern Corporation. However, to aid in showing how merger oversight has changed over time, we also included information on the Burlington Northern Railroad Company merger with the Atchison, Topeka and Santa Fe Railway Company, which was approved by ICC in August 1995. To address the role of the Board in approving and overseeing railroad mergers and to determine how merger oversight is conducted, we reviewed relevant laws and regulations and analyzed documents prepared by the Board addressing its merger authority and functions. We also discussed with the Board’s staff how merger oversight is conducted and how such oversight has changed over time. In addition, we discussed with the Board’s staff the activities conducted as part of formal oversight—that is, activities included in an annual general oversight proceeding—as well as informal oversight activities (such as monitoring of railroad performance data) associated with mergers. To address how the Board acts to mitigate potential merger-related harm to competition, we reviewed documents contained in its merger dockets, including merger approval and oversight decisions and progress reports filed by merged railroads.
We discussed with Board officials how oversight of conditions is conducted and the factors considered by the Board in determining if conditions imposed have been effective in mitigating potential harm to competition. We also discussed oversight issues with various trade associations representing shipper and railroad interests as well as with officials from Class I railroads. (The organizations we contacted are listed at the end of this appendix.) The shipper trade associations represented major commodities shipped by rail. Finally, to identify how merger oversight might change in the future, we reviewed the Board’s notice of proposed rulemaking on major rail consolidations published in October 2000 and the final regulations issued in June 2001. We discussed with the Board how the final merger rules differed from the proposed rules. To address how the UP/SP merger affected rail rates in selected geographic areas, we obtained data from the Board’s Carload Waybill Sample for the years 1994 through 1999. The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually. We used these data to obtain information on rail rates charged by different railroads for specific commodities in specific markets subject to potential reduction in competition in the UP/SP merger. We focused on this merger because it was identified by the Board as having significant competition-related issues, especially in the number of shippers potentially going from service by two railroads to service by only one railroad (called 2-to-1 shippers). Using documents submitted by the Union Pacific Railroad, as well as discussions with officials from both the Union Pacific Railroad and the Burlington Northern and Santa Fe Railway, we identified those locations and corridors containing the majority of potential 2-to-1 shippers.
Using economic areas defined by the Department of Commerce’s Bureau of Economic Analysis, our analysis focused on those economic areas containing the majority of these potential 2-to-1 shippers. We used the Carload Waybill Sample instead of more specific data on rates for individual shippers because of the lack of sufficient premerger rate data from SP’s operations. Although it is possible to get rates for 2-to-1 shippers from the Carload Waybill Sample, the sample is not designed for use in analyzing rates for specific shippers. However, the sample can be used to analyze rail rates within and between geographic areas. For these reasons, we used economic areas containing a majority of potential 2-to-1 points in conjunction with the Carload Waybill Sample to conduct our analysis. The rate data obtained from the Carload Waybill Sample were then used in an econometric model that analyzed the effects of the UP/SP merger on changes to rail rates for various commodity shipments to and from the economic areas with the majority of potential 2-to-1 shippers. A detailed description and discussion of this model can be found in appendix III. Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, the Board provides for railroads to mask the revenues associated with these movements prior to making this information available to the public. We obtained a version of the Carload Waybill Sample that did not mask revenues associated with railroad movements made under contract. Therefore, the rate analysis presented in this report presents a truer picture of rail rates than analyses that are based solely on publicly available information. There are also limitations associated with data from the Carload Waybill Sample. 
For example, according to Board officials, revenues derived from this sample are not adjusted for such things as year-end discounts and refunds that may be provided by railroads to shippers that exceed certain volume requirements. However, both Board and railroad officials agreed that, given the lack of sufficient premerger SP data, the Carload Waybill Sample was the best data source available for conducting our analysis. We performed our work from July 2000 through June 2001 in accordance with generally accepted government auditing standards.

Burlington Northern and Santa Fe Railway Co.
CSX Transportation, Inc.
Norfolk Southern Corporation
Union Pacific Railroad Co.

This appendix describes and discusses our analysis of the effects of the 1996 UP/SP merger on rail rates in selected geographic areas where the merger had the potential for harm to competition because 2-to-1 shippers could have lost one of the two railroad carriers upon which they had relied. In particular, we discuss (1) the econometric model we developed to analyze separately the effects of the merger and of other factors on rail rates, (2) the construction of the data used for the analysis, and (3) our analysis, including a comparison of overall changes in rates, based on mean-difference analysis, with the results of the econometric model. We developed an econometric model to examine both the specific impact of the 1996 UP/SP merger and the impact of other factors on rates in selected geographic areas where competition could have been potentially reduced. In developing the model, we focused on the trackage rights granted to BNSF by the Board, and applied existing empirical literature on how rail rates are determined. The UP/SP merger covered areas where the services provided by UP overlapped those provided by SP. As a result, some rail shippers could have been reduced from being directly served by both SP and UP to being directly served by UP only.
In order to preserve competition in those potential 2-to-1 situations and for those shippers exclusively served by UP or SP who benefited from having another independent railroad nearby, the Board granted trackage rights to BNSF in order to replace the competition that would be lost when SP was absorbed by UP. As in previous studies, we used an econometric model to identify the factors affecting rail rates following the UP/SP merger, with rail rates as the dependent variable in the model. Rail Rates: We measured rail rates—the freight rate charged by a railroad to haul a commodity from an origin to a destination—by revenue per ton-mile, adjusted for inflation. We used data from 1994 and 1995 for the premerger period, and data from 1997 through 1999 for the postmerger period. We excluded 1996 data, since the UP/SP merger was approved in August 1996. We also excluded shipments with rail transportation charges less than $20,000 (in 1996 dollars) in order to focus on the major movements. The level of each observation was shipments at the 7-digit Standard Transportation Commodity Code—a classification system used to group similar types of commodities such as grains—between an origin and a destination. The factors that explained the rail rates were generally those related to market structure and regulatory conditions, as well as cost and demand factors. Market Structure and Regulatory Conditions: We included the variable MERGER to capture the effect of the merger on rates. The extent of rail competition is also expected to affect rail rates. To capture this influence, we used a variable, RAILROAD-BNSF, that reflects the difference between rates charged to shippers with competitive options (SP and UP before the merger, and BNSF and UP afterward) and rates charged to shippers served solely by one railroad both before and after the merger.
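The rate-variable construction and sample filters described above can be sketched as follows. This is a minimal illustration, not the study's actual processing code: the field names, deflator values, and example shipments are all hypothetical stand-ins for a tabular extract of the Carload Waybill Sample.

```python
# Hypothetical price deflators normalized so that 1996 = 1.00.
DEFLATOR = {1994: 0.96, 1995: 0.98, 1996: 1.00, 1997: 1.02, 1998: 1.03, 1999: 1.05}

def real_rate_per_ton_mile(shipment):
    """Revenue per ton-mile, deflated to constant (1996) dollars."""
    nominal = shipment["revenue"] / (shipment["tons"] * shipment["miles"])
    return nominal / DEFLATOR[shipment["year"]]

def in_sample(shipment):
    """Apply the sample filters: drop 1996 (the merger-approval year) and
    shipments with real transportation charges under $20,000 (1996 dollars)."""
    real_revenue = shipment["revenue"] / DEFLATOR[shipment["year"]]
    return shipment["year"] != 1996 and real_revenue >= 20_000

shipments = [
    {"year": 1995, "revenue": 50_000, "tons": 1_000, "miles": 500},
    {"year": 1996, "revenue": 60_000, "tons": 1_200, "miles": 500},  # merger year: excluded
    {"year": 1998, "revenue": 15_000, "tons": 400, "miles": 300},    # under $20,000: excluded
    {"year": 1999, "revenue": 84_000, "tons": 1_000, "miles": 800},
]
kept = [s for s in shipments if in_sample(s)]
```

In the actual analysis, each retained observation is further keyed by 7-digit commodity code and origin-destination pair before entering the model.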
Cost and Demand Factors: These factors are generally captured by the shipment and shipper characteristics of the traffic. As in previous studies, we use the following variables to measure the influence of cost and demand factors: variable cost per ton-mile (COST), the weight of shipments (TON), the length of haul (DISTANCE), the annual tonnage shipped between an origin-destination pair (DENSITY), and OWNERSHIP of railcars. In addition to the explanatory factors mentioned above, we included the following factors: First, we introduced a variable for contract rates (CONTRACT) to account for possible differences between contract rates and noncontract rates. Second, we included a variable to account for the possible effects of the service crisis that arose after the merger and lasted through 1998 (CRISIS). Third, following previous studies, we included the squared terms for the variables TON (TON_SQ) and DISTANCE (DISTANCE_SQ), to account for possible nonlinear relationships between these variables and rates. We also included dummy variables for the major commodity groups (COMMODITY) where appropriate. We selected geographic markets that had high concentrations of potential 2-to-1 shippers because of the possibility for harm to competition in those areas. Using the Carload Waybill Sample, we performed several data-processing tasks that included matching similar sets of traffic before and after the merger, and selecting the primary commodities that were shipped, based on carloads, for analysis. All the data used for the study were constructed from the Carload Waybill Sample, which is a sample of railroad waybills (in general, documents prepared from bills of lading that authorize railroads to move shipments and collect freight charges) that are submitted annually by the railroads. However, there are limitations in using the Carload Waybill Sample for rate analysis. Among these limitations is that no specific information is provided about the identity of the shippers.
This makes it difficult to identify potential 2-to-1 traffic by shipper name. Also, data for rates for shipments moved under contract between railroads and shippers (called contract rates), which are masked or disguised in the Carload Waybill Sample, may be incomplete. We selected the Reno, Nevada, and Salt Lake City, Utah, business economic areas, which are in the Central Corridor and which had high concentrations of potential 2-to-1 shippers. Both SP and UP served these two areas prior to the merger; BNSF service was not available in the area at that time. Also, according to BNSF officials, the Central Corridor was relatively less affected by the service crisis that emerged after the UP/SP merger. In addition, UP fully integrated its computer and information systems with SP in the Central Corridor much earlier than in the other regions, making rate and other data there more reliable. However, there are limitations in using the Central Corridor to illustrate the possible effects of the UP/SP merger on rates. According to the Board, BNSF generally had problems ramping up its trackage-rights service in the Central Corridor. Also, the Reno and Salt Lake City areas are not typical rail hubs, because the traffic to and from these areas is not high volume, compared with other areas, such as the Houston-Gulf Coast area. Despite these limitations, the two selected areas provide an opportunity to illustrate the impact of the UP/SP merger on rates in predominantly potential 2-to-1 situations. We performed several tasks to organize the Carload Waybill Sample for our analysis. We identified traffic by origin and destination, and at the 7-digit Standard Transportation Commodity Code level separately for periods before the merger and periods after the merger. We then matched similar sets of railroad traffic existing before and after the merger.
The matching involved shipments that we could determine, on a commodity and origin-and-destination basis, were made in both periods. To help identify traffic associated with BNSF’s trackage rights, we also identified the railroad carrier(s) associated with the shipments that we matched for both periods. There were two Class I railroads serving the two geographic areas before the merger (SP and UP). After the UP/SP merger, all the traffic belonging to SP and UP came under the merged UP’s sole control, except for potential 2-to-1 shippers and shippers that could take advantage of such provisions as build-in/build-out and new facilities conditions. As a result of the trackage rights imposed by the Board as part of the merger conditions, BNSF obtained access to the potential 2-to-1 traffic, regardless of whether the traffic had been carried by SP or UP prior to the merger. Our matching process was intended to identify this potential 2-to-1 traffic. The matching was done in the following sequence:

1. SP premerger traffic was matched to BNSF postmerger traffic—this is BNSF trackage rights over SP (BNSF-SP).
2. UP premerger traffic was matched to BNSF postmerger traffic that was still unmatched—this is BNSF trackage rights over UP (BNSF-UP).
3. SP premerger traffic that was still unmatched was matched to UP postmerger traffic—this is UP traffic over SP (UP-SP).
4. UP premerger traffic that was still unmatched was matched to UP postmerger traffic that was still unmatched—this is UP traffic over UP (UP-UP).

The BNSF-SP and BNSF-UP traffic (henceforth BNSF) consists of only potential 2-to-1 traffic that was served by SP or UP before the merger but served by BNSF in the postmerger period. The UP-SP and UP-UP traffic (henceforth UP) includes potential 2-to-1 traffic as well as non-2-to-1 traffic.
However, according to UP officials, the latter traffic substantially comprises shippers that are served solely by one railroad because they could be served in the premerger period only by UP or SP, but not both, and in the postmerger period, only by UP. The two broad types of shippers identified reflect different levels of rail competition. The potential 2-to-1 traffic (served by BNSF) is considered more competitive than the traffic served solely by UP because direct rail competition was preserved or maintained for the potential 2-to-1 shippers, while the traffic solely served by UP had only indirect competition, which was preserved through build-in/build-out and new facilities conditions. Finally, because our study focuses on potential 2-to-1 shippers, we included only the commodity groups for which BNSF had a presence. Although BNSF officials told us they had not aggressively exercised their trackage rights for coal shipments in the Salt Lake City area, we included these shipments because coal is a major commodity shipped to and from the Salt Lake City area. Summary statistics of the commodities shipped to and from the Salt Lake City and Reno economic areas are provided in tables 4 and 5. The commodities include coal, chemicals, primary metals, farm products (such as corn and wheat), petroleum/coal, food, nonmetallic minerals, lumber/wood, and stone/clay/glass/concrete. Each of these commodities accounted for at least 10 percent of the traffic to or from an area. The share of BNSF’s potential 2-to-1 shippers among all shippers was mostly between 10 and 25 percent. (See table 4.) Also, the rail rates and the direct costs for the total traffic were very similar to the rates for the matched traffic. (See table 5.) The econometric model we developed was estimated using a technique appropriate for data drawn from stratified samples. We also discuss the results of our study in terms of the effects on rail rates attributable to the merger and the effects of other factors.
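The four-step matching sequence described earlier can be sketched as set operations on movement keys. This is an illustrative sketch only: a "movement" is keyed here by a hypothetical (commodity, origin, destination) tuple, and the example records are invented, not actual Waybill Sample data.

```python
def match_traffic(pre, post):
    """pre/post: dicts mapping railroad name -> set of movement keys.
    Returns the four matched traffic groups described in the text."""
    sp_pre, up_pre = set(pre["SP"]), set(pre["UP"])
    bnsf_post, up_post = set(post["BNSF"]), set(post["UP"])

    # 1. SP premerger matched to BNSF postmerger (BNSF trackage rights over SP).
    bnsf_sp = sp_pre & bnsf_post
    # 2. UP premerger matched to still-unmatched BNSF postmerger (BNSF over UP).
    bnsf_up = up_pre & (bnsf_post - bnsf_sp)
    # 3. Still-unmatched SP premerger matched to UP postmerger (UP over SP).
    up_sp = (sp_pre - bnsf_sp) & up_post
    # 4. Still-unmatched UP premerger matched to still-unmatched UP postmerger.
    up_up = (up_pre - bnsf_up) & (up_post - up_sp)
    return {"BNSF-SP": bnsf_sp, "BNSF-UP": bnsf_up, "UP-SP": up_sp, "UP-UP": up_up}

pre = {"SP": {("coal", "SLC", "Reno"), ("chem", "SLC", "LA")},
       "UP": {("grain", "SLC", "Oakland"), ("coal", "Provo", "Reno")}}
post = {"BNSF": {("coal", "SLC", "Reno")},
        "UP": {("chem", "SLC", "LA"), ("grain", "SLC", "Oakland")}}
groups = match_traffic(pre, post)
```

The BNSF-SP and BNSF-UP groups together form the potential 2-to-1 (BNSF) traffic; the UP-SP and UP-UP groups form the UP traffic.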
We used a reduced-form rate model of shipping a commodity between an origin and a destination because such a model is useful for analyzing the impact of a regulatory policy, such as a merger, on rates. The service crisis of 1997 and 1998 could potentially make the estimation results less reliable because the rates may not be at the market-clearing level. However, we included a CRISIS variable to account for this possible structural shift. In the reduced-form model we used, the term “ln” denotes the natural logarithm, and “i” indexes a commodity group. The β‘s are parameters to be estimated, and ε is the random-error term. A complete list of the variables used to estimate the regression model is presented in table 6. We could not directly incorporate certain factors into the model primarily because of data limitations. We estimated the regression model using the SAS SURVEYREG procedure, since the data are from stratified samples. This procedure is appropriate for dealing with a stratified sample because it adjusts both the coefficients and the standard errors of the estimates to account for the sampling design. The econometric model was run for different samples—shipments of the primary commodities to or from an economic area, and for subsamples of individual commodities and shippers. We tried different specifications of our basic model to check the robustness of our key model results. We found that the results were not highly sensitive to model specification. While we used a reduced-form specification, it is still possible that some of the explanatory variables on the right-hand side of the equation may be endogenous. Since there are no available instruments in a reduced-form model, we could not perform the usual test for endogeneity. Rather, we checked the robustness of our results by excluding possible endogenous variables.
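Based on the variable definitions above, the reduced-form specification can be written roughly as follows. This is an illustrative reconstruction: the exact functional form of each regressor (which variables enter in logs versus levels) is an assumption here and follows the study's tables.

```latex
\ln(\mathit{RATE}_i) = \beta_0 + \beta_1\,\mathit{MERGER}
  + \beta_2\,\mathit{RAILROAD}\mbox{-}\mathit{BNSF}
  + \beta_3\,\ln(\mathit{COST}) + \beta_4\,\mathit{TON} + \beta_5\,\mathit{TON\_SQ}
  + \beta_6\,\mathit{DISTANCE} + \beta_7\,\mathit{DISTANCE\_SQ}
  + \beta_8\,\mathit{DENSITY} + \beta_9\,\mathit{OWNERSHIP}
  + \beta_{10}\,\mathit{CONTRACT} + \beta_{11}\,\mathit{CRISIS}
  + \sum_k \gamma_k\,\mathit{COMMODITY}_k + \varepsilon_i
```

Because the dependent variable is in logs, an estimated coefficient β on a dummy variable such as MERGER implies an approximate proportional rate effect of exp(β) − 1.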
In particular, when DENSITY was excluded from the model, our findings regarding the effects of mergers on rates and the effects of the other factors on rates were essentially unchanged. It is also likely that COST is related to the variables TON, DISTANCE, and OWNERSHIP, which could produce unreliable results. In other specifications of the model, we eliminated the COST variable, but our key findings were robust to such specifications. Summaries of the effects of the merger on rates, based on the econometric results, are presented in table 7. The rates for shipments to and from the Reno and Salt Lake City areas generally would have declined for all the shippers as a result of the merger, especially in the Salt Lake City area. Although the effects of the merger on rates depend on both the potential cost savings from the merger and the exercise of any enhanced market power by the railroads, the UP/SP merger is generally expected to lower rates in those areas where the Board imposed trackage rights. We also compared the effects of the merger on rates charged to potential 2-to-1 shippers served by BNSF to rates charged to shippers served solely by UP in the same general locations. In particular, the results show that the rates charged to the potential 2-to-1 shippers served by BNSF were lower than the rates charged to the shippers served solely by UP for shipments of chemicals from the Salt Lake City area. The rate differentials for the Reno area were positive, but none was statistically significant. The result that rates for the potential 2-to-1 shippers served by BNSF were generally lower than rates charged to shippers served solely by UP is consistent with demand-based differential pricing, which reflects the differing transportation alternatives available to shippers. We found that the effects of other factors on rail rates during the period are generally consistent with what has been found in previous studies. 
(See results in tables 8 through 11 for all commodities.) We used the econometric results for all the commodities because most of these effects are not commodity-specific and can be better captured across commodities. The impact of COST on rates was positive and significant for traffic in each of the selected areas, meaning that rates were lower (or higher) as costs decreased (or increased). TON had mixed results, meaning that larger shipment volumes sometimes resulted in higher or lower rates. DISTANCE generally decreased rates. DENSITY, which captures the volume of traffic on the route used for a particular shipment, unambiguously decreased rates. This effect is consistent with decreasing costs in railroad operations, since increased shipment levels over a rail route spread fixed costs over larger volumes and reduce rates. OWNERSHIP had mixed results. CONTRACT rates were generally lower. Finally, the impact of CRISIS on rates was generally inconclusive. This is not unexpected, since most shipments are under contract and the crisis affected primarily the services that were provided rather than the rates. To compare the changes in rates due to the merger that we obtained from the econometric analysis to the overall changes in rates, we separated the overall changes in rates into changes due to the merger and changes due to other factors, such as costs and volume of shipments. The overall changes in rates were estimated using a difference-in-means analysis that compares the rates in the postmerger period with rates in the premerger period. We found that the overall changes in rates could be in the opposite direction from the rate changes due to the merger. For instance, for coal shipments from the Salt Lake City area, the overall changes in rates were about 10 percent higher, while the rate changes due to the merger alone would have been about 10 percent lower.
On the other hand, for shipments of chemicals to the Reno area, the overall changes in rates were about 6 percent lower, while the rate changes due to the merger alone would have been about 21 percent higher. These illustrations indicate that a complete analysis of merger-related rate changes could benefit from the application of an analytical approach that identifies and determines the separate effects of the various factors, including those associated with a merger, affecting rail rates.

Railroads have been a primary mode of freight transportation for many years, especially for bulk commodities such as coal and grain. Over the last 25 years, the freight railroad industry has undergone substantial consolidation largely to reduce costs and increase efficiency and competitiveness. Some companies that rely on rail shipments are concerned that the mergers have reduced railroad competition and led to higher rail rates and poorer service. This report reviews (1) the role the Surface Transportation Board plays in reviewing proposed railroad mergers and overseeing mergers that have been approved and how post-merger oversight is conducted, (2) how the Board mitigates potential harm to competition, and (3) how the Union Pacific/Southern Pacific merger affected rail rates in selected geographic areas. GAO found that the Board reviews railroad merger proposals and approves those that are consistent with the public interest, ensures that any potential merger-related harm to competition is mitigated to preserve competition, and oversees mergers that have been approved. The Board imposes conditions on mergers to mitigate potential harm to competition. The Board also focuses on the overall direction and magnitude of rate changes when analyzing rail rates as part of merger oversight. It does not isolate the effects of mergers on rates from other effects.
When GAO used this approach to analyze how the Union Pacific/Southern Pacific merger affected rail rates, it found that the merger reduced rates in four of six commodities studied. However, for two of the commodities, the merger put upward pressure on rates, even though other factors caused overall rates to decrease. By focusing on overall rate decreases, the Board will be unable to determine whether the decrease is due to the merger or other factors. |
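The coal illustration above can be checked with simple decomposition arithmetic. In a log-linear rate model, proportional effects combine multiplicatively, so the contribution of non-merger factors is implied by the overall change and the merger effect. The figures below are the rounded percentages quoted in the text, used purely for illustration.

```python
import math

# Overall change = merger effect x other-factor effects (multiplicatively),
# i.e., in logs: ln(1 + overall) = ln(1 + merger) + ln(1 + other).
overall, merger = 0.10, -0.10  # coal from Salt Lake City: +10% overall, -10% merger

other = math.exp(math.log(1 + overall) - math.log(1 + merger)) - 1
# Other factors must have raised coal rates by roughly 22 percent to offset
# the merger's downward pressure and still yield a +10% overall change.
```

This is why an overall rate decrease (or increase) by itself cannot reveal whether the merger raised or lowered rates.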
This section discusses the federal oversight of food safety, past reviews of the federal food safety oversight system, and the status of federal efforts to address criteria for removing oversight of food safety from our High-Risk List. Of the 16 federal agencies that collectively administer at least 30 federal laws governing food safety and quality, FDA and FSIS have primary responsibility for food safety oversight. Table 1 summarizes the food safety responsibilities of all 16 agencies. As we said earlier, for more than 4 decades, we have reported on the fragmented nature of federal food safety oversight. For example, in our past work, we described how FDA is generally responsible for ensuring that eggs in their shells (referred to as shell eggs) are safe, wholesome, and properly labeled; FSIS is responsible for the safety of eggs processed into egg products; USDA’s Agricultural Marketing Service (AMS) sets quality and grade standards for shell eggs, such as Grade A; USDA’s Animal and Plant Health Inspection Service (APHIS) manages the program that helps ensure laying hens are free from Salmonella at birth; and FDA oversees the safety of the feed that hens eat. In addition, we reported that FDA has primary responsibility for regulating manufacturers of frozen cheese pizzas, FSIS has primary responsibility for regulating manufacturers of frozen pizzas with meat, and multiple additional federal agencies play roles in regulating the components of either type of pizza. Similarly, we have noted that FSIS inspects manufacturers of packaged open-face meat or poultry sandwiches (i.e., those with one slice of bread), but FDA inspects manufacturers of packaged closed-face meat or poultry sandwiches (i.e., those with two slices of bread). 
However, establishments producing closed-faced meat or poultry sandwiches intended for export to Canada can be inspected for Hazard Analysis and Critical Control Point (HACCP) compliance by FSIS under a voluntary inspection program, and samples collected by FSIS will be tested for certain pathogens by AMS. In an August 1998 report, the National Academies concluded that the fragmented federal food safety oversight system was not well-equipped to meet emerging challenges. In response to the academies’ report, the President established a Council on Food Safety later that year and charged it with developing a comprehensive strategic plan for federal food safety activities, among other things. The council’s Food Safety Strategic Plan, released on January 19, 2001, recognized the need for a comprehensive food safety statute and concluded that the organizational structure of the food safety system makes it more difficult to achieve future improvements in efficiency, efficacy, and allocation of resources based on risk. In October 2001, we recommended that USDA, HHS, and the Assistant to the President for Science and Technology, as joint chairs of the President’s Council on Food Safety, reconvene the council, which had disbanded earlier that year, to facilitate interagency coordination on food safety regulation and programs. In our prior work, we have also identified options for reducing fragmentation and overlap in food safety oversight, including alternative organizational structures. These options include establishing a single food safety agency, a food safety inspection agency, a data collection and risk analysis center, and a coordination mechanism led by a central chair. We also suggested that Congress might wish to assess the need for comprehensive, uniform, risk-based food safety legislation or to amend FDA’s and USDA’s existing authorities. (For descriptions of selected options, see app. II.) 
When we added the federal oversight of food safety to our list of high-risk areas in January 2007, we found that a challenge for the 21st century was to find a way for federal agencies with food safety responsibilities to integrate the myriad food safety programs and strategically manage their portfolios to promote the safety and integrity of the nation’s food supply. We noted that we had detailed problems with the fragmented federal food safety oversight system and had found that the system had caused inconsistent oversight, ineffective coordination, and inefficient use of resources. We stated that Congress and the executive branch could and should create the environment needed to look across the activities of individual programs within specific agencies and toward the goals that the federal government is trying to achieve. To that end, in the January 2007 High-Risk Update, we reported that we had recommended that a mechanism be put in place to facilitate interagency coordination on food safety regulations and programs. We also suggested that Congress and the executive branch work together to develop a government-wide performance plan for food safety. A number of actions have been taken since we added federal oversight of food safety to our High-Risk List in 2007. In March 2009, the President established the Food Safety Working Group (FSWG) to coordinate federal efforts and develop goals to make food safer. In January 2011, the FDA Food Safety Modernization Act (FSMA) was enacted, representing the largest expansion and overhaul of U.S. food safety authorities since the 1930s. Also in January 2011, the statutory framework for performance management in the federal government, originally set out in the Government Performance and Results Act of 1993 (GPRA), was updated by the GPRA Modernization Act of 2010 (GPRAMA). 
GPRAMA adds new requirements for addressing crosscutting efforts in federal strategic and performance planning that help drive collaboration and address fragmentation. For example, GPRAMA requires agencies' strategic plans and performance plans to contain a description of how the agencies are working with other agencies to achieve their goals and objectives. GPRAMA requirements apply at the departmental or agency level, not to organizational components. In March 2011, we recommended that OMB, in consultation with the federal agencies having food safety responsibilities, develop an annually updated government-wide performance plan for food safety. We stated that a performance plan offers a framework to help ensure agencies' goals are complementary and mutually reinforcing and to help provide a comprehensive picture of the federal government's performance on food safety. Furthermore, we stated that such a plan could assist decision makers in balancing trade-offs and comparing performance when resource allocation and restructuring decisions are made. In December 2014, because OMB had not taken action to develop a government-wide performance plan for food safety and the FSWG was no longer meeting, we suggested matters for Congress to consider, including (1) directing OMB to develop a government-wide performance plan for food safety that includes results-oriented goals and performance measures and a discussion of strategies and resources and (2) formalizing the FSWG through statute to help ensure sustained leadership across food safety agencies over time. Congress has not taken action. We found that FDA and FSIS were involved in numerous mechanisms to facilitate interagency coordination on food safety; however, the mechanisms focused on specific issues and none provided for broad-based, centralized collaboration. As of September 2016, federal oversight of food safety remained on our High-Risk List. 
Table 2 shows nine selected collaborative mechanisms involving FDA and FSIS, as reported in December 2014. We have identified five criteria, all of which must be fully met for an area to be removed from our High-Risk List. In our February 2015 High-Risk Update, we found that for federal oversight of food safety, three of the criteria had been partially met, and two had not been met (see table 3). Our assessment of whether the criteria were met focused largely on efforts Congress and the executive branch had made toward developing a government-wide performance plan for food safety and establishing a centralized mechanism for broad-based collaboration, such as the FSWG. In our February 2015 High-Risk Update, we noted that, with the enactment of GPRAMA in January 2011, Congress and the executive branch demonstrated leadership commitment to improving collaboration across the federal government. We also noted that HHS and USDA had taken steps toward our December 2014 recommendation to implement GPRAMA’s crosscutting requirements for their food safety efforts but could more fully address crosscutting food safety efforts in their individual strategic and performance planning documents and thereby provide building blocks toward implementing our March 2011 recommendation that OMB develop a government-wide performance plan on food safety. However, as of February 2015, OMB had not taken action on our recommendation to develop such a plan. In addition, we noted that the President had demonstrated leadership commitment and progress by establishing the FSWG to coordinate federal efforts and develop goals to make food safer. However, as of February 2015, the working group was no longer meeting, and nothing had taken its place. Federal food safety agencies also have the capacity to participate in a centralized, collaborative mechanism on food safety—like the FSWG—but congressional action would be required to formalize such a mechanism through statute. 
HHS and USDA have taken some actions since 2014 to address fragmentation in the federal food safety oversight system, and OMB has focused on implementing FSMA, but USDA’s and OMB’s actions have not fully addressed our two recommendations for government-wide planning. Since 2014, HHS and USDA have continued and expanded collaboration on specific food safety issues, and HHS has updated its strategic plan to address interagency coordination on food safety. OMB has focused its efforts on working with agencies to facilitate implementation of FSMA. The facilitation, collaboration, and updates are positive steps, but USDA’s and OMB’s actions do not fully address our two recommendations for government-wide planning. The two agencies with primary responsibility for food safety within HHS and USDA—FDA and FSIS, respectively—continue to use the nine collaborative mechanisms that we reported on in December 2014, all of which focus on specific issues. For example, FDA and FSIS continue to collaborate with CDC through the Interagency Food Safety Analytics Collaboration to improve estimates of the most common sources of foodborne illnesses. According to CDC’s website, the three agencies teamed up to create this collaboration. Its goal is to improve coordination of federal food safety analytic efforts and address crosscutting priorities for food safety data collection, analysis, and use. FSIS and FDA also serve as the co-lead organizations for the food safety topic area under Healthy People 2020, a national health promotion and disease prevention initiative that provides 10-year national objectives for improving the health of all Americans and includes 42 topic areas. The food safety topic area has six objectives related to the goal of reducing foodborne illnesses in the United States, such as reducing infections caused by key pathogens transmitted commonly through food and increasing the proportion of consumers who follow key food safety practices. 
According to USDA officials, Healthy People 2020 informs their agency goals and their work with CDC and FDA. In addition, over the past 2 years, FDA and FSIS have developed one new collaborative mechanism, according to FDA and FSIS officials. The mechanism, called the Interagency Collaboration on Genomics and Food Safety (Gen-FS), also includes CDC and the National Institutes of Health (NIH). Gen-FS focuses on sequencing the complete DNA of pathogens for surveillance, detection and investigation of outbreaks, and antibiotic resistance for pathogens causing intestinal illnesses transmitted by food and other routes, according to FSIS officials. The Gen-FS steering committee meets monthly to discuss harmonization of training, laboratory methodologies, and data access and analysis, according to FDA officials. Furthermore, FDA officials said that implementing FSMA has been the agency’s major food safety focus over the past 2 years, and FDA is partnering with nongovernmental stakeholders, state and local governments, and federal agencies to ensure FSMA’s successful implementation. Under FSMA, FDA is responsible for more than 50 regulations, guidelines, and studies. This includes seven foundational rules. Table 4 provides additional information on the foundational rules. For example, FDA issued the final FSMA rule on produce, one of the foundational FSMA rules, in November 2015. The rule establishes science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. To develop the rule, which went into effect on January 26, 2016, FDA officials said they worked directly with farmers, which required a significant amount of collaboration with USDA and the states. 
In addition, these officials said they worked with the Environmental Protection Agency (EPA) on water quality and safety aspects of the produce rule, with the Department of Homeland Security on the intentional adulteration rule, and with the Department of Transportation on the sanitary transportation rule. Furthermore, in May 2016, we found that FDA had taken numerous steps to ensure meaningful and timely input from nonfederal officials during development of the FSMA-mandated rules on produce, human food, and animal food but did not fully meet its tribal consultation responsibilities. OMB staff told us that their main food safety-related focus since 2014 has been on meeting with agencies to oversee FSMA implementation. OMB staff stated that they meet with FDA and FSIS officials via conference calls on a regular basis to discuss the implementation of FSMA, as well as the agencies’ budgets, regulations, and food safety issues more broadly. These meetings occur at times separately and at times with both FDA and FSIS officials present, according to OMB staff. These staff also said that they work on an agency-specific basis, helping agencies develop agency-specific performance plans, talking to agencies about how to improve performance, and working with agencies to collaborate on FSMA implementation. In December 2014, we found that HHS and USDA did not fully address crosscutting food safety efforts in their individual strategic and performance planning documents and that doing so could help provide a comprehensive picture of the federal government’s performance on food safety. We recommended that both HHS and USDA more fully describe how they are working with other agencies to achieve food-safety-related goals in their strategic and performance planning documents, as required by GPRAMA, and the agencies agreed with our recommendation. 
Since then, in taking steps to update its strategic and performance planning documents to better address crosscutting food safety efforts, HHS implemented our recommendation. Specifically, in February 2015, HHS updated its strategic plan to more fully describe how it is working with other agencies to achieve its food-safety-related goals and objectives. Among other things, HHS described its collaboration with USDA, EPA, and others through collaborative mechanisms such as the National Antimicrobial Resistance Monitoring System, the Partnership for Food Protection (PFP), and the Food Emergency Response Network. However, USDA has not fully implemented our recommendation, although it has taken some steps toward doing so. For example, FSIS included more information on crosscutting food safety efforts in its fiscal year 2017-2021 strategic plan and in its draft fiscal year 2017 annual plan than it did in its prior strategic and annual plans. In its fiscal year 2017-2021 strategic plan, it included a list of collaborations, and the draft fiscal year 2017 annual plan includes a section on enhancing collaboration with partners. In addition, FSIS officials told us that FSIS is partnering with CDC, FDA, and NIH on the HHS agency priority goal to reduce foodborne illness caused by Listeria. The priority goal includes (1) sequencing the complete DNA of Listeria strains to improve the detection and investigation of Listeria outbreaks and (2) FDA and FSIS jointly reporting on their activities to reduce Listeria at various points across the food supply chain. USDA plans to include information on interagency collaboration in its next strategic plan, according to USDA officials. As noted above, HHS's and USDA's efforts since 2014 are positive steps toward government-wide planning, but OMB has not addressed our recommendation for a government-wide plan for the federal food safety oversight system. 
Without an annually updated government-wide performance plan for food safety that includes results-oriented goals, performance measures, and a discussion of strategies and resources, which we recommended to OMB in March 2011, Congress, program managers, and other decision makers are hampered in their ability to identify agencies and programs addressing similar missions and to set priorities, allocate resources, and restructure federal efforts, as needed, to achieve long-term goals. Also, without such a plan, federal food safety efforts are not clear and transparent to the public. OMB staff told us that they were not aware of any current plans to develop a government-wide performance plan for food safety. OMB staff said that OMB works on an agency-specific basis, providing input on agencies' performance plans and offering suggestions on how to improve performance. However, agencies' individual performance plans alone do not provide the integrated perspective on federal food safety performance necessary to guide congressional and executive-branch decision making and inform the public about what federal agencies are doing to ensure food safety. A government-wide performance plan would provide a coordinated action plan for food safety and a plan for monitoring and measuring agencies' activities. We continue to believe that a government-wide plan is important for federal food safety oversight efforts. Food safety and government performance experts identified the development and implementation of a national strategy for food safety as a first step toward improving the federal food safety oversight system. Experts identified examples of negative effects that continue to occur as a result of fragmentation in the federal food safety oversight system. 
These experts agreed that there is a compelling need to develop a national strategy to provide a framework for strengthening that system and addressing fragmentation and described five key elements that should be included in such a strategy. Developing a national strategy for food safety oversight could also provide a framework for addressing our March 2011 recommendation for a government-wide plan, our December 2014 matters for Congress to consider for leadership and planning, and criteria for removing federal food safety oversight from the High-Risk List. During the 2-day meeting we hosted with the assistance of the National Academies, food safety and government performance experts cited examples of the negative effects that continue to occur as a result of fragmentation in the federal food safety oversight system. These examples further illustrate negative effects we have highlighted in our past work, including our 2015 High-Risk Update. For example, experts noted that FDA and FSIS have different statutory authorities. One expert noted that the two agencies’ statutory authorities result in two fundamentally different approaches to inspections. FDA’s authority requires a risk-based approach, in which inspection rates vary depending on the level of risk associated with a food product. FSIS’s authority, in contrast, directs the agency to examine the carcasses and parts of covered animal species and all processed food products before they enter the food supply. Because of these differences, an expert raised questions about the proper allocation of resources based on risk. Commenting on the food safety system more broadly, several experts noted that the allocation of resources is not necessarily connected to the risk of foodborne illness. 
For example, one expert noted that at the federal level, FSIS and FDA receive close to the same amount of funding for food safety oversight but that FSIS is responsible for the safety of 20 percent of the food supply, and FDA is responsible for ensuring the safety of 80 percent of it. Furthermore, because FSIS must meet continuous inspection requirements, it may be allocating too many resources to inspecting low-risk food processing facilities that produce foods that do not pose substantial threats to public health, according to another expert. For example, the expert highlighted the differences in resource allocation by comparing inspection rates at facilities producing cheese and pepperoni pizzas. A production line at the facility producing cheese pizza, which is regulated by FDA, may be inspected once every 5 years. On the other hand, a production line producing pepperoni pizza, which is regulated by FSIS, is inspected daily. The expert said the risk of foodborne pathogens related to both types of pizza is low because the pizzas are cooked. While raw meat is a high-risk food, meat that is thoroughly cooked, such as pepperoni on pizza, does not pose the same level of risk because the process of cooking eliminates existing pathogens. The 19 experts attending our 2-day meeting agreed that there is a compelling need to develop a national strategy to provide a framework for strengthening the federal food safety oversight system and addressing fragmentation. The experts identified and described five key elements that should be included in a national strategy for food safety oversight. These five key elements follow. Purpose: The starting point for developing a national strategy includes defining the problem, developing a mission statement, and identifying goals. Leadership: The national strategy should establish sustained leadership to achieve progress in food safety oversight. 
The leadership should reside at the highest level of the administration and needs to have authority to implement the national strategy and be accountable for its progress. The strategy also needs to identify roles and responsibilities for implementing the national strategy and involve all stakeholders, including federal, tribal, state, and local government agencies; industry; consumer groups; academia; and key congressional committees. Resources: The national strategy should identify staffing and funding requirements and the sources of funding for implementing the strategy. Monitoring: The national strategy should establish milestones that specify time frames, baselines, and metrics to monitor progress. The national strategy should be sufficiently flexible to incorporate changes identified through monitoring and evaluation of progress. Actions: In addition to long-term actions, the national strategy should include short-term actions, such as improving training for food safety officials, to gain traction on improving the food safety system. Actions should focus on preventing, rather than reacting to, outbreaks of foodborne illnesses. For example, several experts mentioned modifying the statutes that FSIS implements, such as the Federal Meat Inspection Act and the Poultry Products Inspection Act, to align the authorities of USDA with the Federal Food, Drug, and Cosmetic Act, as amended by FSMA, which outlines FDA's responsibilities. This could help ensure a consistent approach across food commodities. See appendix III for a list of actions identified by the experts that could be considered for inclusion in a national strategy for food safety oversight. The experts' call for a national strategy for food safety oversight is consistent with our past work on national strategies. 
We found that complex interagency and intergovernmental efforts, which could include food safety, can benefit from developing a national strategy and establishing a focal point with sufficient time, responsibility, authority, and resources to lead the effort. For example, in August 2007, we reported on another area involving significant coordination and collaboration across all levels of government, as well as the private sector: preparing for and responding to an influenza pandemic. We found that, as part of its efforts to address the potential threat of an influenza pandemic, the executive branch had developed a National Strategy for Pandemic Influenza and an associated implementation plan and had started working toward completing the plan's action items. In February 2004, we reported that national strategies themselves are not endpoints, but rather, starting points, and, as with any strategic planning effort, implementation is the key. The five key elements of a national strategy identified by the experts are also consistent with characteristics we have identified as desirable in a national strategy. In our February 2004 report, we found that national strategies are not required, either by executive or legislative mandate, to address a single, consistent set of characteristics. However, on the basis of a review of numerous sources, we identified six desirable characteristics to aid responsible parties in further developing and implementing national strategies. Table 5 lists and describes the six desirable characteristics and shows how the elements of a national strategy for food safety oversight identified by experts align with the six desirable characteristics. 
Although the experts did not specify which entity should lead the national strategy, past efforts to develop high-level strategic planning for food safety have depended on leadership from entities within the Executive Office of the President (EOP), such as the Domestic Policy Council (DPC), the Office of Science and Technology Policy (OSTP), and OMB. For example, the President’s Council on Food Safety was co-chaired by OSTP, along with HHS and USDA, and involved staff and officials from OMB and the DPC among others. Similarly, the FSWG was led by USDA and HHS and was convened by the DPC. OMB staff and FDA officials stated that a national strategy for improving food safety could be beneficial. However, FDA officials cautioned that timing would be an important consideration given that FDA is focused on FSMA implementation. FSIS officials said that they would defer to OMB regarding questions on the potential benefit of a national strategy for food safety. OMB staff said that OMB relies on direction from the administration to determine national priorities. Entities within the EOP also play a leadership role in other ongoing strategies that require cross-agency collaboration. For example, since December 2013, OSTP and the National Security Council have led a multi-agency effort, including HHS and USDA, to develop the National Strategy for Combating Antibiotic-Resistant Bacteria, with a goal of preventing, detecting, and controlling outbreaks of antibiotic-resistant pathogens. In addition, OMB has established a cross-agency priority goal of improving science, technology, engineering, and mathematics education. Since May 2013, OSTP has taken a lead role, along with the National Science Foundation, in working with multiple agencies to implement a 5-year strategic plan. 
By developing a national strategy to guide the nation's efforts to improve the federal food safety oversight system and address ongoing fragmentation, the appropriate entities within the EOP, in consultation with relevant federal agencies and other stakeholders, could provide a comprehensive framework for considering organizational changes and making resource decisions. Developing a national strategy for food safety oversight, as suggested by the experts, could provide a framework for addressing our March 2011 recommendation for a government-wide plan and our December 2014 matters for congressional consideration for leadership and government-wide planning. As we mentioned previously, we have found that complex interagency and intergovernmental efforts can benefit from developing a national strategy and establishing a focal point with sufficient time, responsibility, authority, and resources to lead the effort. The national strategy, as described by the experts and possessing the desirable characteristics described in our past work, could fulfill the intent behind our March 2011 recommendation for OMB to develop a government-wide performance plan for food safety. Such a strategy could include all of the elements of a government-wide performance plan for federal food safety oversight, such as government-wide goals and performance indicators. In addition to addressing our recommendation for a government-wide plan, to the extent that a national strategy for food safety oversight establishes sustained leadership for the issue, it could fulfill the intent behind our December 2014 matter for Congress to consider formalizing the FSWG through statute to help ensure sustained leadership across food safety agencies over time. In addition, developing and implementing a national strategy could provide a framework for addressing the five criteria for removing federal food safety oversight from our High-Risk List. 
As discussed previously, experts agreed that a national strategy should include sustained leadership, which could address the criterion for leadership commitment. In addition, the national strategy, by including information on resource requirements, actions, and milestones and metrics to monitor progress, could also meet our criteria for capacity, an action plan, and monitoring, respectively. Finally, depending on its contents, a national strategy could demonstrate progress in implementing corrective measures, the final criterion for removing federal food safety oversight from our High-Risk List. Since 2014, the primary federal agencies responsible for ensuring a safe food supply—FDA and FSIS—have taken actions to address fragmentation in the federal food safety oversight system. However, food safety and government performance experts who participated in the meeting we convened cited examples of the negative effects that continue to occur as a result of fragmentation in the federal food safety oversight system and generally agreed that there is a need for a national food safety strategy. These examples further illustrate negative effects that we have highlighted in our past work. The experts identified five key elements that should be included in such a strategy: stating the purpose, establishing sustained leadership, identifying resource requirements, monitoring progress, and including actions for gaining traction. These elements are consistent with characteristics that we have identified as desirable in a national strategy. By developing a national strategy to guide the nation’s efforts to improve the federal food safety oversight system and address ongoing fragmentation, the appropriate entities within the EOP, in consultation with relevant federal agencies and other stakeholders, could provide a comprehensive framework for considering organizational changes and making resource decisions. 
Experts identified the following stakeholders as key contributors to a national strategy for food safety: federal, tribal, state, and local government agencies; industry; consumer groups; academia; and key congressional committees. Such a national strategy also could provide a framework for addressing our recommendation for a government-wide plan, our matters for Congress to consider for leadership and planning, and criteria for removing federal food safety oversight from our High-Risk List. To guide the nation’s efforts to improve the federal food safety oversight system and address ongoing fragmentation, we recommend that the appropriate entities within the EOP, in consultation with relevant federal agencies and other stakeholders, develop a national strategy that states the purpose of the strategy, establishes high-level sustained leadership, identifies resource requirements, monitors progress, and identifies short- and long-term actions to improve the food safety oversight system. We provided a draft of this report to HHS, USDA, OMB, and DPC for their review and comment. In written comments, HHS did not comment on our recommendation to the EOP. USDA disagreed with the need for a national strategy but cited factors to consider should changes be proposed. USDA also discussed several points related to the report’s findings. HHS’s and USDA’s written comments are reproduced in appendixes IV and V, respectively. In addition, HHS and USDA provided technical corrections, which we incorporated as appropriate. Also, according to an e-mail from the Special Assistant to the President of the EOP, OMB and DPC did not have comments on the draft report. 
To guide the nation's efforts to improve the federal food safety oversight system and address ongoing fragmentation, we recommended that the appropriate entities within the EOP, in consultation with relevant federal agencies and other stakeholders, develop a national strategy that states the purpose of the strategy, establishes high-level sustained leadership, identifies resource requirements, monitors progress, and identifies short- and long-term actions to improve the food safety oversight system. USDA stated that it is not yet convinced that developing and implementing a national strategy would result in significantly different outcomes in protecting public health by preventing foodborne illness with its partners. However, USDA also noted that, should major changes to the federal food safety system be proposed, it is imperative that they are data-driven, well-designed, collaborative, and ultimately, continue to enable the United States to have the safest food supply in the world. Even with USDA's reservations, we continue to believe that a national strategy would provide a comprehensive framework for considering organizational changes and resource decisions to improve the federal food safety oversight system. USDA made a number of other comments related to the report's findings. First, USDA stated the report does not appear to explain or acknowledge the depth and breadth of key federal agency efforts and activities to work together within the bounds of existing statutory authorities, particularly across FSIS, FDA, CDC, and other federal food safety partners. In addition, USDA said that the report appears to significantly underestimate the complexity of modifying statutes that FSIS and FDA currently implement with the intent of better alignment. 
Related to acknowledging the depth and breadth of key federal efforts and activities, in our December 2014 report, we identified and described numerous collaborative mechanisms involving FDA and FSIS to highlight these positive efforts, and for this report, we requested information on any additional collaborative mechanisms developed since 2014, which we included. However, we found and continue to believe that these mechanisms focus on specific issues and do not provide for broad-based, centralized collaboration that would allow FDA, FSIS, and other agencies to look across their individual food safety programs and determine how they all contribute to federal food safety goals. Related to underestimating the complexity of modifying statutes that FSIS and FDA currently implement, we discuss modifying statutes as an example of the numerous actions that experts identified could be considered for inclusion in a national strategy for food safety oversight. We envision that ultimately it will be up to the stakeholders participating in such a strategy to decide which actions to pursue. Second, USDA stated that FSIS continues to strongly disagree with the draft report in that it undervalues and diminishes the many collaborative mechanisms that are in place among FSIS and FDA, as well as with CDC and other federal and non-federal food safety and public health partners. In addition, FSIS said that the characterizations of all collaborations as “narrow” and “specific,” and the implication that broad-based collaboration does not occur through FSIS’s deeply integrated engagement, is inaccurate. Further, USDA stated that the implication that the collaborations are not well-targeted or sufficient appears to reflect a lack of understanding of how agencies with food safety/public health responsibilities operate in sync with each other. 
USDA also stated that FSIS’s activities with FDA, CDC, and other food safety partners are strategic, highly outcome- and mission-driven, and fully address the GPRAMA crosscutting requirements for federal strategic and performance planning that help drive collaboration and address fragmentation. USDA stated that it is important to note that we did not present or provide any evidence for any area where sufficient collaboration does not occur. As we said earlier, we found and continue to believe that these collaborative mechanisms focus on specific issues and do not provide for broad-based, centralized collaboration that would allow FDA, FSIS, and other agencies to look across their individual food safety programs and determine how they all contribute to federal food safety goals. Third, USDA stated that it appreciated that our report attempts to recognize new collaborations since 2014; however, it does not include three of four new collaborations on which FSIS provided testimonial or written information to us—the HHS agency priority goal for foodborne Listeria monocytogenes illnesses interagency effort, PFP, and the One Health Initiative. Related to the HHS agency priority goal, in the report, we stated that FSIS officials told us that FSIS is partnering with CDC, FDA, and NIH on the HHS agency priority goal to reduce foodborne illness caused by Listeria. PFP and the One Health Initiative were established prior to 2014; however, in the report, we did discuss PFP in the context of presenting examples of collaborative mechanisms involving FDA and FSIS that we reported on in December 2014 and collaborative mechanisms described by HHS in its updated strategic plan. 
Fourth, USDA stated that our report indicates that USDA has not fully implemented our prior recommendation to address crosscutting food safety efforts in its strategic and performance planning documents, because USDA, at the department level, did not alter its just-published fiscal year 2014-2018 strategic plan to mention food safety collaboration across USDA’s large, broad, multi-agency portfolio. USDA stated that FSIS believes our continued focus on USDA not editing and reissuing its departmental strategic plan to include such reference to be misplaced. Further, USDA stated that food safety collaboration is addressed in the USDA fiscal year 2014-2018 strategic plan’s key food safety illness indicator, which directly reflects FSIS’s broad, long-standing collaborative activity with FDA and CDC associated with Healthy People, and in FSIS’s fiscal year 2011-2016 and fiscal year 2017-2021 strategic plans. In our December 2014 report, we stated that GPRAMA requires agencies to include in their strategic plan a description of how they are working with other agencies to achieve their goals and objectives. In addition, we stated that GPRAMA does not apply to organizational components of agencies. Instead, agencies are expected to work with their components to implement GPRAMA requirements in a manner that is most useful to the whole organization. In December 2014, we found several relevant crosscutting efforts that were not identified in USDA’s fiscal year 2014-2018 strategic plan, and recommended that USDA more fully describe in its strategic and performance planning documents how it is working with other agencies to achieve its food-safety-related goals and objectives. In December 2014, USDA concurred with our recommendation and, according to USDA officials, plans to include information on interagency collaboration in its next strategic plan.
Fifth, USDA stated that it is concerned about the implication that many of the possible actions to include in a national strategy do not require congressional approval and can be taken by executive branch agencies without such approval; USDA stated that they cannot. In addition, USDA stated that while the recommendation for executive action is quite general, the specifics, as outlined in appendix III of our report, appear far too prescriptive for us to typically recommend, and place disproportionate value on expert opinion rather than on data-driven analysis. Further, USDA stated that we appear to place importance on expert opinions, including by citing many statements that were factually incorrect or misrepresented in a prior draft, some of which we subsequently removed. This included statements that implicitly supported assertions that FDA’s statutory authorities could be appropriate to apply to the products that FSIS regulates. USDA stated that no data, study, or evidence supports this approach as being more protective of public health and prevention of foodborne illness. USDA also stated that it continues to be concerned about our selective and dominant use of expert opinion studies to support our findings. In addition, USDA stated that we cite certain prior studies and panels from 1998, 2001, and more recently, yet other studies, such as one in 2002 by a White House-established Policy Coordinating Committee, concluded that the goals of the Administration were better advanced through enhanced interagency coordination rather than through, for example, the development of legislation to create a single food safety agency. Related to USDA’s concerns about the actions listed in appendix III requiring congressional approval and appearing too prescriptive, the purpose of the appendix was to present a list of actions identified by the experts that could be considered for inclusion in a national strategy for food safety.
As we stated earlier, we envision that ultimately it will be up to the stakeholders participating in such a strategy to decide which actions to pursue. Related to USDA’s concern about the apparent importance we place on expert opinions and our use of expert opinion studies to support our findings, we selected food safety and government performance experts on the basis of the relevance of their knowledge; their prominence in the public discourse on food safety issues; and their diversity of experience working in food safety, such as through prior experience working at senior levels for FDA, CDC, or USDA or current experience working for the food industry. We took steps to confirm the accuracy of information the experts provided before including it in our final product. Related to USDA’s concern about the development of legislation to create a single food safety agency, we discuss this option in an appendix in which we list a number of options we have identified in our past work to improve the federal food safety oversight system. Sixth, USDA stated that in prior reports, we have written that programs are put on the High-Risk List because of their vulnerabilities to fraud, waste, abuse, or mismanagement, or are most in need of transformation to address economy, efficiency, or effectiveness challenges. Given this standard, USDA said that it continues to assert that food safety should no longer be listed as high risk. We have identified five criteria, all of which must be fully met for an area to be removed from our High-Risk List. In our February 2015 High-Risk Update, we found that for federal oversight of food safety, three of the criteria had been partially met, and two had not been met. Our assessment of whether the criteria were met focused largely on efforts Congress and the executive branch had made toward developing a government-wide performance plan for food safety and establishing a centralized mechanism for broad-based collaboration, such as the FSWG.
However, we found that USDA’s and OMB’s actions since 2014 have not fully addressed the need for government-wide planning. In addition, we acknowledge that congressional action would be required to formalize in statute a centralized, collaborative mechanism on food safety, like the FSWG; however, federal food safety agencies do have the capacity to participate in such a mechanism. We believe that a national strategy for food safety could provide a framework for addressing the five criteria for removing federal food safety oversight from our High-Risk List. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretary of Health and Human Services; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. On June 9 and 10, 2016, with the assistance of the National Academies of Sciences, Engineering, and Medicine, we convened a 2-day meeting of food safety and government performance experts to discuss fragmentation in the U.S. federal food safety oversight system and suggest actions to improve that system. Table 6 lists the experts who participated in the meeting, along with their affiliations. 
We selected food safety and government performance experts on the basis of the relevance of their knowledge; their prominence in the public discourse on food safety issues; and their diversity of experience working in food safety, such as through prior experience working on food safety at senior levels in the federal government as well as through current work in food-related industries, nongovernmental research organizations, state agencies, foreign food safety agencies, academia, and advocacy groups. In our previous work, we have identified several options to improve the federal food safety oversight system. These options include establishing a coordination mechanism led by a central chair, a food safety inspection agency, a data collection and risk analysis center, and a single food safety agency and are described in table 7. During the 2-day meeting we convened with the assistance of the National Academies of Sciences, Engineering, and Medicine, experts identified a number of actions to consider including in a national strategy to improve food safety oversight. At least 10 of the 19 experts agreed that each of these actions described in table 8 could be appropriate to consider for inclusion in a national strategy, but not all of the experts agreed that every identified action should be considered. We are not endorsing any of these actions. These actions were identified by experts for consideration. In addition to the contact named above, Anne K. Johnson (Assistant Director), Kevin Bray, Candace Carpenter, Stephen Cleary, Michelle Duren, Ellen Fried, Kirsten Lauber, Benjamin T. Licht, Marya Link, Rekha Vaitla, Walter Vance, and Sarah Veale made key contributions to this report.

Although the U.S. food supply is generally considered safe, foodborne illness remains a common, costly, yet largely preventable public health problem. Oversight of the safety and quality of food involves 16 federal agencies.
For more than 4 decades, GAO has reported on the fragmented federal food safety oversight system. Because of potential risks to the economy and to public health and safety, food safety has remained on GAO's list of high-risk areas since 2007. GAO was asked to examine efforts toward and options for addressing fragmentation in the federal food safety oversight system. This report (1) describes the actions HHS, USDA, and OMB have taken since 2014 to address fragmentation and evaluates the extent to which these agencies have addressed two prior GAO recommendations for government-wide planning and (2) assesses actions that food safety and other experts suggest are needed to improve the federal food safety oversight system. GAO convened an expert meeting, reviewed agency documents, and interviewed agency officials. Since 2014, the Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) and the U.S. Department of Agriculture's (USDA) Food Safety and Inspection Service (FSIS), the federal agencies with primary responsibility for food safety oversight, have taken some actions to address fragmentation in the federal food safety oversight system, and HHS has updated its strategic plan to address interagency coordination on food safety. However, USDA has not yet fully implemented GAO's December 2014 recommendation that it describe interagency collaboration on food safety in its strategic and performance planning documents. In addition, the Office of Management and Budget (OMB) has not addressed GAO's March 2011 recommendation to develop a government-wide plan for the federal food safety oversight system. At a 2-day meeting GAO hosted in June 2016, 19 food safety and other experts agreed that there is a compelling need to develop a national strategy to address ongoing fragmentation and improve the federal food safety oversight system. 
This is consistent with a prior GAO finding that complex interagency and intergovernmental efforts can benefit from developing a national strategy. The experts identified the following key elements of such a strategy:

Purpose: The starting point for a national strategy includes defining the problem, developing a mission statement, and identifying goals.

Leadership: The national strategy should establish sustained leadership at the highest level of the administration with authority to implement the strategy and be accountable for its progress. The strategy also needs to identify roles and responsibilities and involve all stakeholders.

Resources: The national strategy should identify staffing and funding requirements and the sources of funding for its implementation.

Monitoring: The national strategy should establish milestones that specify time frames, baselines, and metrics to monitor progress. The strategy should be sufficiently flexible to incorporate changes identified through monitoring and evaluation of progress.

Actions: In addition to long-term actions, the national strategy should include short-term actions to gain traction in improving the food safety system. Actions should focus on preventing, rather than reacting to, outbreaks of foodborne illnesses.

These elements are consistent with characteristics GAO has previously identified as desirable in national strategies. Past efforts to develop high-level strategic planning for food safety have depended on leadership from the Executive Office of the President (EOP). By developing a national strategy to guide the federal food safety oversight system and address ongoing fragmentation, the EOP, in consultation with relevant federal agencies and other stakeholders, could provide a framework for making organizational and resource decisions.
Among other things, such a strategy also could provide a framework for addressing GAO's recommendation for a government-wide plan and for removing food safety oversight from GAO's High-Risk List. GAO recommends that the appropriate entities within the EOP, in consultation with stakeholders, develop a national strategy to guide the federal food safety oversight system and address ongoing fragmentation. HHS, OMB, and the Domestic Policy Council did not comment on the recommendation. USDA disagreed with the need for a national strategy but cited factors to consider should changes be proposed. GAO believes the recommendation should be implemented.
In the 1980s and early 1990s, the solvency of the federal depository insurance funds was threatened when hundreds of thrifts and banks failed. Taxpayers were forced to bail out the insurance fund for thrifts, and the insurance fund for banks had a negative balance for the first time in its history. This situation prompted concern and considerable debate about the need to reform federal deposit insurance and regulatory oversight. In response, Congress passed the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) to, among other things, improve the supervision and examination of depository institutions and to protect the federal deposit insurance funds from further losses. Among its various provisions, FDICIA added two new sections to the Federal Deposit Insurance Act of 1950—sections 38 and 39—referred to as the Prompt Regulatory Action provisions. The Prompt Regulatory Action provisions required federal regulators to institute a two-part system of regulatory actions that would be triggered when an institution fails to meet minimum capital levels or safety-and-soundness standards. Enactment of this two-part system was intended to increase the likelihood that regulators would respond promptly and forcefully to prevent or minimize losses to the deposit insurance funds from failures. The Federal Deposit Insurance Corporation (FDIC), Federal Reserve System (FRS), and two agencies within the Department of the Treasury—the Office of the Comptroller of the Currency (OCC) and the Office of Thrift Supervision (OTS)—share responsibility for regulating and supervising federally insured banks and thrifts in the United States. FDIC regulates state-chartered banks that are not members of FRS; FRS regulates state-chartered, member banks; OCC regulates nationally chartered banks; and OTS regulates all federally insured thrifts, regardless of charter type.
The regulators carry out their oversight responsibilities primarily through monitoring data filed by institutions, conducting periodic on-site examinations, and taking actions to enforce federal safety-and-soundness laws and regulations. From 1980 to 1990, record losses absorbed by the federal deposit insurance funds highlighted the need for a new approach in federal regulatory oversight. Sharply mounting thrift losses over the decade bankrupted the Federal Savings and Loan Insurance Corporation (FSLIC), which was the agency responsible for insuring thrifts until 1989, despite a doubling of premiums and a special $10.8 billion recapitalization program. During this period, a record 1,020 thrifts failed at a cost of about $100 billion to the deposit insurance funds for thrifts. Banks also failed at record rates. From 1980 to 1990, a total of 1,256 federally insured banks were closed or received FDIC financial assistance. Estimated losses to the bank insurance fund for resolving these banks were about $25 billion. These losses resulted in the bank insurance fund’s incurring annual net losses in 1988, 1989, and 1990 that jeopardized the fund’s solvency for the first time since FDIC’s inception. Industry analysts have recognized many factors as contributing to the high level of thrift failures from 1980 to 1990. For example, thrifts faced increased competition from nondepository institutions, such as money-market funds and mortgage banks, as well as periods of inflation, recession, and fluctuating interest rates during that period. High interest rates and increased competition for deposits during the decade also created a mismatch between interest revenues from the fixed-rate mortgages that constituted the bulk of the thrift industry’s assets and the cost of borrowing funds in the marketplace.
Increased powers granted to thrifts in a period during which supervision did not keep pace have also been cited by some analysts, including us, as contributing to the problems of the industry. Regulators and industry analysts have associated a number of factors with the problems of banks during the 1980s. First, banks suffered losses resulting from credit risk—risk of default on loans—in an environment of prolonged economic expansion and increasingly volatile interest rates. The decade began with crises in agricultural loans and loans to developing nations. Next, unrepaid energy loans took a toll and led to the downfall of several major banks, including Continental Illinois in Chicago and First RepublicBank in Texas. As the decade came to a close, highly leveraged transactions and the collapse of commercial real estate markets, in which banks had been heavy lenders, depleted the capital structures of some major East Coast and West Coast banks and led to their failures. One factor we and others cited as contributing to the problems of both thrifts and banks during this period was excessive forbearance by federal regulators. Regulators had wide discretion in choosing the severity and timing of enforcement actions that they took against depository institutions with unsafe and unsound practices. In addition, regulators had a common philosophy of trying to work informally and cooperatively with troubled institutions. In a 1991 report, we found that this approach, in combination with regulators’ wide discretion in the oversight of financial institutions, had resulted in enforcement actions that were neither timely nor forceful enough to (1) correct unsafe and unsound banking practices or (2) prevent or minimize losses to the insurance funds. Regulators themselves recognized that their supervisory practices in the 1980s failed to adequately control risky practices that led to the numerous thrift and bank failures.
Congress passed two major laws to address the thrift and bank crisis of the 1980s. The first, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), was enacted primarily in response to the immediate problems surrounding FSLIC’s bankruptcy and troubles in the thrift industry. FIRREA created a new regulator for the thrift industry, OTS, and a new insurance fund, the Savings Association Insurance Fund (SAIF), to replace the bankrupt FSLIC. In addition, FIRREA increased the enforcement authority of both bank and thrift regulators. For example, FIRREA expanded the circumstances under which regulators could assess civil money penalties and increased the maximum penalty to $1 million per day. FIRREA also authorized FDIC to terminate a bank’s or thrift’s deposit insurance on the basis of unsafe and unsound conditions. The second major piece of legislation, FDICIA, contains several provisions that were intended to collectively improve the supervision of federally insured depository institutions. Specifically, FDICIA requires a number of corporate governance and accounting reforms to (1) strengthen the corporate governance of depository institutions, (2) improve the financial reporting of depository institutions, and (3) help in the early identification of emerging safety-and-soundness problems in depository institutions. In addition, FDICIA contains provisions that were intended to improve how regulators supervise depository institutions. Among the corporate governance and accounting reforms, FDICIA establishes generally accepted accounting principles as the standard for all reports and statements filed with the regulators. FDICIA also requires the management and auditors of depository institutions to annually report on their financial condition and management. The report is to include management’s assessment of (1) the effectiveness of the institution’s internal controls and (2) the institution’s compliance with designated laws and regulations. 
In addition, FDICIA requires the institution’s external auditors to report separately on management’s assertions. Furthermore, FDICIA requires the institutions to have an independent audit committee composed of outside independent directors. Among the supervision provisions, FDICIA requires regulators to perform annual on-site examinations of insured banks and thrifts (an 18-month cycle was allowed for qualified smaller institutions with assets of less than $100 million). FDICIA’s sections 131 and 132 added two new sections to the Federal Deposit Insurance Act (sections 38 and 39) that require the implementation of a “trip wire” approach to increase the likelihood that regulators will address the problems of troubled institutions at an early stage to prevent or minimize loss to the insurance funds. Section 38 creates a capital-based framework for bank and thrift oversight that is based on the placement of financial institutions into one of five capital categories. Capital was made the centerpiece of the framework because it represents funds invested by an institution’s owners, such as common and preferred stock, that can be used to absorb unexpected losses before the institution becomes insolvent. Thus, capital was seen as serving a vital role as a buffer between bank losses and the deposit insurance system. Although section 38 does not in any way limit regulators’ ability to take additional supervisory action, it requires federal regulators to take specific actions against banks and thrifts that have capital levels below minimum standards. The specified regulatory actions are made increasingly severe as an institution’s capital drops to lower levels. Section 38 requires regulators to establish criteria for classifying depository institutions into the following five capital categories: well-capitalized, adequately capitalized, undercapitalized, significantly undercapitalized, and critically undercapitalized. 
The section does not place restrictions on institutions that meet or exceed the minimum capital standards—that is, those that are well- or adequately capitalized—other than prohibiting the institutions from paying dividends or management fees that would drop them into the undercapitalized category. A depository institution that fails to meet minimum capital levels faces several mandatory restrictions or actions under section 38. The mandatory actions are intended to ensure a swift regulatory response that would prevent further erosion of an institution’s capital. Specifically, section 38 requires an undercapitalized institution to submit a capital restoration plan detailing, among other things, how the institution is going to become adequately capitalized; restrict its asset growth during any quarter so that its average total assets for the quarter do not exceed the preceding quarter’s average total assets, unless certain conditions are met; and receive prior regulatory approval for acquisitions, branching, and new lines of business. Section 38 allows regulators to take additional actions against an undercapitalized institution, if deemed necessary. It also requires regulators to closely monitor the institution’s condition and its compliance with section 38’s requirements. Section 38 requires regulators to take more forceful corrective measures when institutions become significantly undercapitalized. Regulators must take 1 or more of 10 specified actions, including (1) requiring the sale of equity or debt or, under certain circumstances, requiring institutions to be acquired by or merged with another institution; (2) restricting otherwise allowable transactions with affiliates; and (3) restricting the interest rates paid on deposits by the institution. 
Each of these three steps is to be mandatory unless the regulator determines that taking such steps would not further the purpose of section 38, which is to resolve the problems of insured depository institutions at the least possible long-term loss to the insurance fund. Other specific actions available to the regulators include imposing more stringent asset growth limitations than required for undercapitalized institutions or requiring the institution to reduce its total assets; requiring the institution, or its subsidiaries, to alter, reduce, or terminate an activity that the regulator determines poses excessive risk to the institution; improving management by (1) ordering a new election for the institution’s board of directors, (2) dismissing directors or senior executive officers, and/or (3) requiring an institution to employ qualified senior executive officers; prohibiting the acceptance, including renewal and rollover, of deposits from correspondent banks; requiring prior approval for capital distributions from holding companies having control of the institution; and requiring divestiture by (1) the institution of any subsidiary that the regulator determines poses a significant risk to the institution, (2) the parent company of any nondepository affiliate that regulators determine poses a significant risk to the institution, and/or (3) any controlling company of the institution if the regulator determines that divestiture would improve the institution’s financial condition and future prospects. Regulators can also require any other action that they determine would better resolve the problems of the institution with the least possible long-term loss to the insurance funds. Finally, section 38 prohibits significantly undercapitalized institutions from paying bonuses to or increasing the compensation of senior executive officers without prior regulatory approval. Section 38 requires more stringent action to be taken against critically undercapitalized institutions.
After an institution becomes critically undercapitalized, regulators have a 90-day period in which they must either place the institution into receivership or conservatorship or take other action that would better prevent or minimize long-term losses to the insurance fund. In either case, regulators must obtain FDIC concurrence with their actions. Section 38 also prohibits critically undercapitalized depository institutions from doing any of the following without FDIC’s prior written approval: entering into any material transaction (such as investments, expansions, acquisitions, and asset sales), other than in the usual course of business; extending credit for any highly leveraged transaction; amending the institution’s charter or bylaws, except to the extent necessary to carry out any other requirement of any law, regulation, or order; making any material change in accounting methods; engaging in any covered transaction; paying excessive compensation or bonuses; or paying interest on new or renewed liabilities at a rate that would increase the institution’s weighted average cost of funds to a level significantly exceeding the prevailing rates of interest on insured deposits in the institution’s normal market area. In addition, section 38 prohibits a critically undercapitalized institution from making any payment of principal or interest on the institution’s subordinated debt beginning 60 days after becoming critically undercapitalized. Finally, section 38 permits regulators to, in effect, downgrade an institution by one capital level if regulators determine that the institution is in an unsafe and unsound condition or that it is engaging in an unsafe and unsound practice. For example, regulators can treat an adequately capitalized institution as undercapitalized if the institution received a less than satisfactory rating in its most recent examination report for asset quality, management, earnings, or liquidity. 
This downgrading would then allow regulators to require the institution’s compliance with those restrictions applicable to undercapitalized institutions, such as limits on the institution’s growth. Thus, section 38 allows regulators to take enforcement actions against an institution that presents a danger to the insurance fund by virtue of a factor other than its capital level. In addition to the specific provisions of section 38, another section of FDICIA provides FDIC with the authority to appoint a conservator or receiver for undercapitalized institutions that meet certain criteria. To limit deposit insurance losses caused by factors other than inadequate capital, section 39 directs each regulator to establish standards defining safety and soundness in three overall areas: (1) operations and management; (2) asset quality, earnings, and stock valuation; and (3) compensation. Section 39 originally made the safety-and-soundness standards applicable to both insured depository institutions and their holding companies, but the reference to holding companies was deleted in 1994. The section originally required regulators to prescribe safety-and-soundness standards through the use of regulations. For the operations and management standards, section 39 did not provide specific requirements other than requiring regulators to prescribe standards on internal controls, internal audit systems, loan documentation, credit underwriting, interest rate exposure, and asset growth. For asset quality, earnings, and—to the extent feasible—stock valuation, the section initially required regulators to establish quantitative standards. (See the next section for a discussion of amendments made to section 39’s original provisions.) 
Under the compensation standards, regulators were to prescribe, among other things, standards specifying when compensation, fees, or benefits to executive officers, employees, directors, or principal shareholders would be considered excessive or could lead to material financial loss. Section 39 initially contained a number of provisions concerning the failure to meet the regulators’ prescribed safety-and-soundness standards. One key provision of the section directed regulators to require a corrective action plan from institutions or holding companies that failed to meet any of the standards. Such plans were to specify the steps an institution or a holding company was taking or intended to take to correct the deficiency. Section 39 directed the regulators to establish specific deadlines for submission and review of the plans. If an institution or a holding company failed to submit or implement the plan, regulators were mandated to issue an order requiring the institution or holding company to correct the deficiency and to take one or more of the following remedial actions as considered appropriate: restrict the institution’s or holding company’s asset growth; require the institution or holding company to increase its ratio of tangible equity to assets; restrict interest rates paid on deposits; and/or require the institution or holding company to take any other action that the regulator determines would prevent or minimize losses to the insurance fund. Section 39 also initially required regulators to take at least one of the first three previously mentioned remedial actions against institutions that (1) failed to meet any of the operational and/or asset quality standards listed in FDICIA, (2) had not corrected the deficiency, and (3) either commenced operations or experienced a change in control within the preceding 24 months or experienced extraordinary growth during the 18 months preceding the failure to meet the standards.
The Riegle Community Development and Regulatory Improvement Act of 1994 (CDRI) was passed on September 23, 1994, and contains more than 50 provisions that were intended to reduce bank regulatory burden and paperwork requirements. Among its provisions, CDRI amended some of section 39’s requirements to provide regulators with greater flexibility and to respond to concerns that section 39 would subject depository institutions to undue “micromanagement” by the regulators. The CDRI amendments allow regulators to issue the standards in the form of guidelines instead of regulations. If guidelines are used, the amendments give the regulators the discretion to decide whether a corrective action plan will be required from institutions that are found not to be in compliance with the standards. Finally, the amendments eliminate the requirement that regulators issue quantitative standards for asset quality and earnings and exclude holding companies from the scope of the standards. CDRI did not change section 39’s original provisions regarding the content and review of any plan required as a result of noncompliance with section 39’s safety-and-soundness standards. Thus, regulators still are required to issue regulations governing the contents of the plan, time frames for the submission and review of the plans, and enforcement actions applicable to the failure to submit or implement a required plan. Since the passage of FDICIA in 1991, the financial condition of the bank and thrift industries has improved substantially. As shown in table 1.1, the net income of banks more than doubled between 1991 and 1995, reaching a record high of $48.8 billion in 1995. Table 1.1 also shows that the net income of thrifts grew dramatically in 1992 from the 1991 level, decreased slightly in 1993 and 1994, and grew to a record $7.6 billion in 1995. In the period from 1992 through 1995, the number of bank and thrift failures declined from their 1980 to 1990 levels. 
For example, 6 banks failed in 1995, compared with 169 bank failures in 1990. The low number of bank failures in recent years has allowed the bank insurance fund to rebuild its reserve level. After falling to a record low of negative $7 billion in 1991, the fund grew to over $25 billion in 1995. The recapitalization of the bank insurance fund allowed FDIC to reduce the deposit insurance assessment rate paid by commercial banks twice in the latter part of 1995. As a result, commercial banks are paying the lowest average assessment rate in history. Despite the improved performance of the thrift industry, the thrift insurance fund remained undercapitalized as of December 1995. FDICIA required FDIC to increase the bank and thrift insurance funds’ reserve balances to at least 1.25 percent of the estimated insured deposits of insured institutions within 15 years of enactment of a recapitalization schedule. FDIC achieved this reserve ratio for the bank insurance fund on May 31, 1995. However, SAIF is not expected to achieve its required reserve ratio until 2002, according to FDIC. Thus, insurance fund premiums paid by thrifts remain significantly higher than those paid by commercial banks. The principal objective of this review was to assess the progress and results of the federal regulators’ implementation of FDICIA’s Prompt Regulatory Action provisions. Specifically, we assessed (1) the efforts of federal regulators to implement sections 38 and 39 and (2) the impact of sections 38 and 39 on federal oversight of the bank and thrift industries. To assess the federal regulators’ efforts to implement sections 38 and 39, we compared the legislative provisions with the implementing regulations and guidelines developed and issued by the regulators. In addition, we asked for and reviewed additional guidance developed by OCC and FRS. 
We concentrated our assessment on OCC and FRS because the FDIC and Treasury Offices of the Inspector General (OIG), respectively, had performed similar reviews of FDIC’s and OTS’ implementation of section 38. To the extent possible, we used the results of the FDIC OIG effort to compare and contrast with the results of our review of OCC’s and FRS’ implementation of section 38. We did not include the Treasury OIG’s results because the OIG was in the process of finalizing its evaluation. However, the OIG reviews did not assess FDIC’s or OTS’ implementation of section 39. We also assessed OCC’s and FRS’ implementation of section 38 by analyzing the supervisory actions used on the 61 banks that were undercapitalized (including those that were significantly and critically undercapitalized) for section 38 purposes. We identified the 61 banks using financial data (call reports) obtained from FDIC for the quarters ending December 1992 through December 1994. In the case of OCC, we looked at all of the 52 undercapitalized banks that were located in OCC’s Western, Southwest, and Northeast districts. These data provided us with coverage of 68 percent of all OCC-regulated banks that were undercapitalized during that period. For FRS, we looked at all nine undercapitalized banks under the jurisdiction of FRS’ Atlanta, Dallas, and San Francisco district banks. Doing so resulted in a coverage of 56 percent of all FRS-regulated banks that were undercapitalized during that period. While our results are not projectable to all undercapitalized banks under OCC’s and FRS’ jurisdiction, our results are representative of the OCC and FRS locations that we visited. As part of our assessment of (1) OCC’s and FRS’ efforts to implement sections 38 and 39 and (2) the impact of the sections on regulatory oversight, we interviewed OCC and FRS officials in the previously mentioned locations as well as in Washington, D.C. 
We obtained the officials’ views on the legislative intent underlying sections 38 and 39 and the evolution of the final regulations and guidelines. We also had discussions with the officials about regulatory actions, both under their traditional enforcement and section 38 authority, taken against the 61 banks that we reviewed. Additionally, we interviewed FDIC and OTS officials to obtain information on the interagency process used to develop the safety-and-soundness standards required to implement section 39. To assess the impact of sections 38 and 39 on the regulatory oversight of banks and thrifts, we used the 61 banks that we determined were undercapitalized for section 38 purposes to evaluate OCC’s and FRS’ use of their section 38 authority (reclassification and directives) versus the use of traditional enforcement tools. In addition, we reviewed OCC’s and FRS’ internal guidance and policies regarding the use of section 38 versus their other enforcement tools. We also obtained and analyzed information on the number of banks that the regulators had determined were undercapitalized for section 38 purposes versus the number of banks they had identified as being “problem” banks. We analyzed various articles and economic literature issued on (1) the impact of sections 38 and 39 on the regulatory process and (2) the implications of a capital-based regulatory approach in general. Additionally, we used the results and recommendations of the OIG reports and our prior reports to assess the content of the implementing regulations and guidelines as well as the likely impact of section 38 on the regulatory process. We did our work from November 1994 to September 1996 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Federal Reserve Board, the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision for their review and comment.
A summary of the agencies’ comments and our evaluation are included at the end of chapter 3. The agencies’ comment letters are reprinted in appendixes III to VI. Staff of OCC and FDIC also provided additional technical comments on the draft report, which were incorporated as appropriate. Regulators have taken steps to implement FDICIA’s Prompt Regulatory Action provisions. However, because the financial condition of banks and thrifts has improved since the passage of FDICIA in 1991, relatively few institutions had been considered undercapitalized under section 38 as of September 1996. Our review of a sample of 61 undercapitalized banks found that OCC and FRS have generally met section 38 requirements regarding the identification of undercapitalized institutions, the receipt and review of capital restoration plans, and the closure of critically undercapitalized institutions. Our finding was consistent with the FDIC OIG’s conclusions regarding FDIC’s implementation of section 38. All three regulators (OCC, FRS, and FDIC) had virtually no experience in using their section 38 reclassification authority and had used their section 38 authority to take enforcement actions on a relatively small number of institutions. As of September 1996, none of the regulators had used section 39 enforcement powers. All but two of the safety-and-soundness standards required for the implementation of section 39 became effective in August 1995. The remaining two standards—asset quality and earnings—became effective on October 1, 1996, allowing for the full implementation of section 39.
The regulators explained that they missed the December 1993 statutory deadline for the implementation of section 39 because of (1) the complexity of developing standards on an interagency basis, (2) their concern that the standards not unnecessarily add to the existing regulatory burden of depository institutions, and (3) their knowledge that Congress was considering amending section 39’s requirements governing the standards. Regulations issued by the four regulators to implement section 38 requirements are intended to ensure that prompt regulatory action is taken whenever an institution’s capital condition poses a threat to federal deposit insurance funds. Banks and thrifts have increased their capital levels since the passage of FDICIA, so relatively few financial institutions have been subject to section 38 regulatory actions in the 3 years that the regulations have been in effect. Between December 1992—the effective date of the regulations—and December 1995, both the number and the total assets of undercapitalized institutions declined; the number fell from about 2 percent of all banks and thrifts in 1992 to less than one-quarter of 1 percent by 1995. The regulators jointly developed the implementing regulations for section 38 and based the criteria for the five capital categories on international capital standards and section 38 provisions. The four regulators specifically based the benchmarks for an adequately capitalized institution on the Basle Committee requirement, which stipulates that an adequately capitalized international bank must have at least 8 percent total risk-based capital and 4 percent tier 1 capital. For the definition of a critically undercapitalized institution, the regulators adopted section 38’s requirement of a tangible equity ratio of at least 2 percent of total assets. The regulators based the criteria for the remaining three capital categories on these two benchmarks.
As shown in table 2.1, three capital ratios are used to determine if an institution is well-capitalized, adequately capitalized, undercapitalized, or significantly undercapitalized. A well-capitalized or adequately capitalized institution must meet or exceed all three capital ratios for its capital category. To be deemed undercapitalized or significantly undercapitalized, an institution need only fall below one of the ratios listed for its capital category. Although not shown in the table, a fourth ratio—tangible equity—is used to categorize an institution as critically undercapitalized. Any institution that has a 2-percent or less tangible equity ratio is considered critically undercapitalized, regardless of its other capital ratios. So far, relatively few financial institutions have been categorized as undercapitalized and, thus, subject to section 38 regulatory actions. This situation was due, in part, to the improved financial condition of the bank and thrift industries. The implementation of section 38 also provided institutions with strong incentives to increase their capital levels to avoid the mandatory restrictions and supervisory actions associated with being undercapitalized. As shown in table 2.2, the number of financial institutions whose reported financial data indicated undercapitalization, based on section 38 implementing regulations, steadily declined between December 1992 and December 1995. The beginning of the decline coincided with the December 1992 implementation of section 38. Data reported by financial institutions indicated that 252 banks and thrifts, or about 2 percent of those institutions, were undercapitalized in December 1992, including those that were significantly and critically undercapitalized. As of December 1995, only 29 banks and thrifts, or about one-quarter of 1 percent of all banks and thrifts, fell into the undercapitalized categories. 
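The categorization logic described above can be sketched in code. This is a minimal illustration, not any regulator's actual rule: only the adequately capitalized benchmarks (8 percent total risk-based capital and 4 percent tier 1 capital) and the 2-percent tangible equity cutoff are stated in the text; the well-capitalized thresholds and the significantly undercapitalized floor used below are illustrative placeholders.

```python
def capital_category(total_risk_based, tier1_risk_based, leverage, tangible_equity):
    """Sketch of the section 38 capital categorization described above.

    All inputs are ratios expressed as percentages. The 2-percent tangible
    equity cutoff and the 8/4 adequately capitalized benchmarks come from
    the text; the well-capitalized thresholds (10/6/5), the adequately
    capitalized leverage floor (4), and the significantly undercapitalized
    floor (6/3/3) are illustrative assumptions.
    """
    # Tangible equity controls regardless of the other three ratios.
    if tangible_equity <= 2.0:
        return "critically undercapitalized"
    # Well-capitalized and adequately capitalized: must meet ALL three ratios.
    if total_risk_based >= 10.0 and tier1_risk_based >= 6.0 and leverage >= 5.0:
        return "well capitalized"
    if total_risk_based >= 8.0 and tier1_risk_based >= 4.0 and leverage >= 4.0:
        return "adequately capitalized"
    # Undercapitalized: falls below ANY ratio for adequate capitalization but
    # stays at or above the (assumed) significantly undercapitalized floor.
    if total_risk_based >= 6.0 and tier1_risk_based >= 3.0 and leverage >= 3.0:
        return "undercapitalized"
    # Falling below any one ratio of the floor is enough for this category.
    return "significantly undercapitalized"
```

The tangible equity check comes first because, as described above, a 2-percent or less tangible equity ratio makes an institution critically undercapitalized regardless of its other capital ratios.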
Our review of regulatory actions at 61 sample banks indicated that OCC and FRS complied with the basic requirements of section 38 and its implementing regulations. Specifically, OCC and FRS categorized the banks in accordance with section 38 criteria and notified undercapitalized banks of the restrictions and regulatory actions associated with their capital category. In addition, OCC and FRS typically obtained and reviewed the required capital restoration plans within the time frames specified by section 38. Moreover, the two regulators generally took action to close the critically undercapitalized banks as required by section 38. Both regulators had limited experience with issuing section 38 directives or using their reclassification authority. The FDIC OIG reported similar results regarding FDIC’s implementation of section 38. OCC and FRS correctly identified and categorized the 61 sampled banks using criteria specified in section 38 legislation and implementing regulations. While primarily relying on call reports, they also used the on-site examination process to identify undercapitalized banks. The regulators then sent notices to those banks to inform the banks of their undercapitalized status and the associated section 38 mandatory restrictions, requirements, and regulatory responses. In the jurisdictions of the offices that we visited, OCC and FRS identified a total of 61 banks as being undercapitalized at some point from December 1992 through December 1994. The two regulators identified 60 banks as undercapitalized on the basis of the call report data reported to the regulators on a quarterly basis. FRS identified an additional bank as being undercapitalized on the basis of the results of an on-site safety-and-soundness examination. Table 2.3 shows the distribution of the banks in our sample by regulator and section 38 capital category. 
OCC and FRS sent the required notices to the management of the 61 banks in our sample informing them of their banks’ undercapitalized status. The notification letters advised the banks of the mandatory requirements and restrictions associated with their section 38 capital category. For significantly and critically undercapitalized banks, the notification letters also pointed out the additional mandatory and discretionary regulatory responses or actions associated with their section 38 capital categorization. OCC and FRS generally met section 38 requirements governing capital restoration plans (CRP). Section 38 requires banks to prepare a CRP within 45 days of becoming undercapitalized and allows regulators 60 days to review the CRP. For the 61 banks that we reviewed, OCC and FRS were generally successful in getting banks to submit the plans on time and in meeting the required time frames for reviewing and approving or rejecting the plans. Section 38 provisions require that CRPs prepared by undercapitalized institutions contain certain elements. Specifically, the section requires that CRPs specify the steps that the institution will take to become adequately capitalized, the levels of capital the institution will attain during each year the plan will be in effect, how the institution will comply with the restrictions or requirements applicable to its undercapitalization capital category, and the types and levels of activities in which the institution will engage. Section 38 prohibits regulators from accepting a CRP unless it (1) contains the previously mentioned required elements, (2) is based on realistic assumptions and is otherwise likely to succeed, and (3) would not appreciably increase the institution’s riskiness. Holding companies are required to guarantee the institution’s compliance with the CRP and to provide adequate assurance of performance. 
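The 45-day submission and 60-day review time frames described above amount to simple date arithmetic. The sketch below is illustrative only; the function and field names are assumptions for this example, not drawn from any regulator's actual tracking system.

```python
from datetime import date, timedelta

# Statutory time frames from section 38 as described above.
CRP_SUBMISSION_DAYS = 45   # institution must file a CRP within 45 days
CRP_REVIEW_DAYS = 60       # regulator then has 60 days to review it


def crp_deadlines(undercapitalized_on: date, submitted_on: date) -> dict:
    """Compute the CRP deadlines for a bank that became undercapitalized
    on one date and submitted its plan on another (illustrative sketch)."""
    submission_due = undercapitalized_on + timedelta(days=CRP_SUBMISSION_DAYS)
    review_due = submitted_on + timedelta(days=CRP_REVIEW_DAYS)
    return {
        "submission_due": submission_due,
        "submitted_late": submitted_on > submission_due,
        "review_due": review_due,
    }
```

For example, a bank that became undercapitalized on January 1, 1993, would owe its CRP by February 15, 1993, and a plan submitted on February 10 would start a review clock expiring on April 11.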
Although the notification letters sent to the 61 undercapitalized banks in our review indicated that a CRP was required, only 44 banks submitted a CRP. Of the 17 banks that did not submit CRPs, 15 experienced conditions within the first few months of becoming undercapitalized that, according to the regulator, precluded the need for a CRP. Specifically, nine failed, two merged with other banks, one was voluntarily liquidated, and three became adequately capitalized. OCC chose not to pursue obtaining CRPs from the remaining two banks. In one case, OCC deferred its enforcement efforts pending the results of an ongoing investigation by the Federal Bureau of Investigation and local enforcement authorities into potential criminal activity by the bank’s management. In the second case, OCC issued a section 38 directive instead of formally enforcing the requirement that the bank submit a CRP to achieve corrective action in a more timely fashion. OCC and FRS were generally successful in getting the 44 institutions that submitted CRPs to meet the 45-day requirement. As shown in table 2.4, 10 banks exceeded the 45-day requirement, but most had submitted CRPs within 55 days. OCC and FRS were typically successful in meeting the 60-day time frame for reviewing the 44 CRPs submitted by the banks in our sample. As shown in table 2.5, the regulators met the 60-day requirement on all but one applicable case where data were available to make a determination. Of the 44 CRPs submitted by the banks that we looked at, OCC and FRS rejected 30 of the CRPs as inadequate and required those banks to revise and resubmit them. The regulators used the criteria specified in section 38 legislation to determine whether a CRP was acceptable. Ultimately, the regulators approved 29 of the CRPs submitted by the undercapitalized banks that we reviewed. Of the 15 banks whose CRPs were not approved, 10 ultimately failed. 
One of the 15 banks merged with another bank, and the remaining 4 banks obtained enough capital to eliminate the need for a CRP. As required by section 38, OCC and FRS have generally taken action to close critically undercapitalized banks within a specified time frame. Under section 38, regulators are required to close critically undercapitalized institutions within 90 days of the institutions’ becoming critically undercapitalized unless the regulator and FDIC concur that other actions would better protect the insurance funds from losses. As previously shown in table 2.3, there were 25 critically undercapitalized banks in our sample. OCC and FRS closed 17 of these banks because they were critically undercapitalized. Fifteen of the 17 banks were closed within the prescribed 90-day period. In the case of the two banks that were closed after the 90-day deadline had expired, regulators approved the delay to allow FDIC more preparation time for the orderly closure of the banks. For the remaining 8 critically undercapitalized banks in our sample, 1 merged and the other 7 improved their capital position above the critically undercapitalized level before the end of the 90-day period. From December 1992 to September 1996, OCC and FRS used their section 38 authority to initiate directives against 8 of the 61 banks in our sample. Section 38 requires regulators to take specific regulatory actions against significantly undercapitalized institutions and makes the use of these actions discretionary for other undercapitalized institutions. In those instances in which section 38 directives were used, both OCC and FRS complied with the governing requirements of section 38 legislation and implementing regulations. As previously discussed in chapter 1, section 38 mandates regulators to take at least 1 of 10 specified actions against significantly undercapitalized institutions.
The section also provides regulators with discretionary authority to take any of the 10 specified actions that they consider appropriate against undercapitalized institutions. OCC used directives against a relatively small number of the banks in our sample. Of the 52 OCC-regulated banks we reviewed, 16 were significantly undercapitalized at some time between December 1992 and December 1994, according to their call report data. Thus, unless the status of the banks changed, OCC would have been expected to have initiated a directive against the 16 banks to take the enforcement actions mandated by section 38. However, OCC only initiated directives against five of these banks. Seven of the remaining 11 banks either failed, merged, or improved their capital status within 90 days of becoming significantly undercapitalized, thus eliminating the need for OCC to issue a directive. OCC officials told us that directives were not initiated against the remaining four significantly undercapitalized banks because they were already subject to formal enforcement actions that OCC believed were similar to those that would be covered by directives. Thus, initiating a directive would have duplicated the existing, ongoing enforcement actions. FRS initiated directives against three of the seven FRS-regulated banks in our sample that were categorized as significantly undercapitalized at some point between December 1992 and December 1994. According to FRS, the need for it to issue directives was precluded for three significantly undercapitalized banks because they improved their capital status, merged with another institution, or were voluntarily liquidated shortly after becoming significantly undercapitalized. FRS did not initiate a directive against the remaining significantly undercapitalized bank because the applicable corrective actions were already under way in connection with existing federal and state enforcement actions and in connection with the bank’s CRP. 
From December 1992 to September 1996, OCC and FRS used their reclassification authority in two instances. Section 38 authorizes bank regulators under certain circumstances to downgrade, or treat as if downgraded, an institution’s capital category if (1) it is in an unsafe or unsound condition or (2) it is deemed by the regulator to be engaging in an unsafe or unsound practice. Reclassifying an institution to the next lower capital category allows regulators to subject the institution to more stringent restrictions and sanctions. According to OCC officials, OCC would use its section 38 reclassification authority only if its traditional enforcement actions had not been successful in correcting a bank’s problems. OCC officials told us that they prefer to use their traditional enforcement authority for several reasons. One reason was the broader range of options that OCC’s traditional enforcement actions provide, both in the areas covered by the enforcement action and in the degree of severity of the action. Another reason that OCC prefers to use its traditional enforcement actions is the bilateral nature of these actions. According to OCC officials, traditional enforcement actions, such as a formal written agreement between the regulator and an institution, may achieve greater acceptance by the institution of the need for corrective action than unilateral section 38 reclassifications and/or directives. However, OCC officials said that reclassification under section 38 can sometimes allow them to initiate certain actions faster (i.e., through directives) than would be possible using their traditional enforcement actions. In the one case involving OCC reclassification, the agency reclassified a bank from adequately capitalized to undercapitalized because (1) OCC believed the bank was operating in an unsafe and unsound condition that would impair its capital levels and (2) the bank had not complied with earlier OCC enforcement actions.
The reclassification allowed OCC to initiate a directive that, among other requirements, mandated the dismissal of a senior bank official and a director who OCC believed were responsible for the bank’s deteriorated condition. Despite OCC’s use of its reclassification authority and a section 38 directive, the bank’s condition deteriorated further until it failed 8 months later. FRS has an internal policy that requires all problem banks, which it defined as banks with a composite rating of 4 or 5, to be considered operating in an unsafe and unsound condition and, thus, candidates for reclassification. Between December 1992 and December 1994, 58 banks had an FRS-assigned composite rating of 4 or 5. In its only use of its reclassification authority, FRS reclassified a well-capitalized bank to adequately capitalized because of continuous deterioration in the bank’s asset quality, earnings, and liquidity. This bank’s capital levels subsequently deteriorated to the point where it was considered significantly undercapitalized. The bank has since improved its capital to the well-capitalized category and is no longer considered to be a problem institution by FRS. In September 1994, the FDIC OIG reported that FDIC had generally complied with the provisions of section 38 and its implementing regulations. Table 2.6 compares the three regulators’ implementation of specific section 38 provisions. As of September 1996, regulators had not used their section 39 enforcement authority against an institution. In July 1995, regulators issued final guidelines and regulations to implement parts of section 39. Specifically, the regulators issued standards governing operations and management and compensation. They also issued requirements for submission and review of compliance plans. The regulators issued the remaining standards required for the full implementation of section 39—asset quality and earnings—in August 1996.
FDICIA had established a deadline of December 1, 1993, for the implementation of section 39. Regulators said they were unable to meet that deadline because of (1) the difficulty of jointly developing the standards, (2) the concerns of regulators and financial institutions that the implementation of section 39 could increase existing regulatory burden for banks and thrifts, and (3) the knowledge that Congress was considering amending the section 39 requirements to provide regulators with greater flexibility and discretion in their implementation of the section. According to the regulators, developing and issuing safety-and-soundness standards was complicated by the interagency process and by concerns about the potential regulatory burden associated with the standards. Unlike the process for promulgating capital standards under section 38, which used the Basle Accord as a reference point, the regulators had no generally accepted standards to use as the basis for the safety-and-soundness standards. In addition, the regulators told us that the legislative history for section 39 did not provide specific guidance on the standards envisioned by Congress. Furthermore, the regulators wanted to ensure that the section 39 standards did not increase the bank and thrift industries’ regulatory burden without a corresponding benefit to the federal deposit insurance funds and taxpayers. OCC and FRS officials said that the lack of generally agreed upon standards for the areas covered by section 39 contributed to delays in developing and issuing the section’s standards. They explained that regulators consider numerous variables in assessing an institution’s safety and soundness. As a result, developing standards on an interagency basis for areas such as internal controls and interest rate exposure was difficult. According to the officials, the various regulators had different viewpoints as to how specific or general the standards should be. 
On July 15, 1992, the regulators issued a joint solicitation of comments on the section 39 safety-and-soundness standards. In soliciting the views of the banking industry on the form and content of the standards, the regulators said that they were concerned with “establishing unrealistic and overly burdensome standards that unnecessarily raise costs within the regulated community.” The four regulators collectively received over 400 comment letters, primarily from banks and thrifts. According to the regulators, the comments strongly favored adopting general standards, rather than specific standards, to avoid regulatory “micromanagement.” The regulators considered the public comments in developing the proposed standards that were published on November 18, 1993. The regulators proposed standards for the following three areas required by section 39: (1) operations and management, (2) asset quality and earnings, and (3) compensation. According to the notice of proposed rulemaking, regulators proposed general standards, rather than detailed or quantitative standards, to “avoid dictating how institutions are to be managed and operated.” However, as required by section 39 before its amendment in 1994, the regulators proposed two quantitative standards—a maximum ratio of classified assets-to-capital and a formula to determine minimum earnings sufficient to absorb losses without impairing capital. Section 39 also required the regulators to set, if feasible, a minimum ratio of market-to-book value for publicly traded shares of insured institutions as a third quantitative standard. The regulators determined that issuing such a standard was technically feasible, but they concluded that it was not a reasonable means of achieving the objectives of the Prompt Regulatory Action provisions. 
The regulators explained that an institution’s stock value can be affected by factors that are not necessarily indicative of an institution’s condition, such as the performance of the general stock market and industry conditions. As a result, the regulators believed that a market-to-book value ratio would not be an operationally reliable indicator of safety and soundness. Therefore, the regulators ultimately decided against proposing a market-to-book value ratio as a third quantitative standard. The proposed regulations also described procedures for supervisory actions that were consistent with those contained in the section 39 legislation for institutions failing to comply with standards. Specifically, the proposed regulations required institutions to prepare and submit a compliance plan within 30 days of being notified by the regulator of their noncompliance. The plan was to include a description of the steps the institution intended to take to correct the deficiency. Regulators would then have 30 days to review the plan. In addition, the proposed regulations specified enforcement actions regulators would take if an institution failed to submit an acceptable compliance plan or failed to implement the plan. The regulators collectively received 133 comment letters, primarily from financial institutions, in response to the November 18, 1993, notice of proposed rulemaking. According to the four regulators, those who commented generally found the agencies’ proposed standards, including the two quantitative standards, acceptable. However, some of those who commented criticized the proposed quantitative standards as inflexible and overly simplistic. OCC and FRS officials attributed further delays in implementing section 39 to their knowledge that in the period from late 1993 to mid-1994, Congress was considering legislation that would amend section 39’s requirements. 
Congress was considering amending section 39 to reduce the administrative requirements for insured depository institutions consistent with safe-and-sound banking practices. After CDRI was passed in September 1994, regulators needed additional time to revise the standards they proposed in November 1993 to take advantage of the additional flexibility provided by the section 39 amendments. On July 10, 1995, the regulators published final and proposed guidelines and regulations to implement section 39, as amended. The final guidelines covered operational and managerial standards, including internal controls, information systems, internal audit systems, loan documentation, credit underwriting, interest rate exposure, and asset growth, as well as compensation standards. The final guidelines were effective in August 1995. Along with the final guidelines, regulators proposed new standards for asset quality and earnings. The final standards for asset quality and earnings were issued on August 27, 1996, with an effective date of October 1, 1996. The final standards contained in the guidelines are less prescriptive than those proposed in November 1993. For example, under internal controls and information systems, the guidelines specified that the “institution should have internal controls and information systems that are appropriate to the size of the bank and the nature and scope of its activities.” In addition, the regulators used the additional flexibility provided by CDRI to eliminate the two previously proposed quantitative standards for classified assets and earnings. According to the regulators, the use of general rather than specific standards was supported by the overwhelming number of commenters responding to the regulators’ request for comments on the section 39 safety-and-soundness standards. 
Moreover, the use of guidelines instead of regulations gives the regulators flexibility in deciding whether to require a compliance plan from an institution found to be in noncompliance with the standards. The regulators issued regulations addressing the (1) required content of compliance plans, (2) time frames governing the preparation and review of a plan, and (3) regulatory actions applicable to the failure to submit or comply with a plan. The compliance plan regulations were issued jointly on July 10, 1995, with the section 39 guidelines governing the operational, managerial, and compensation standards. Both the guidelines and regulations became effective in August 1995. FDICIA’s Prompt Regulatory Action provisions granted additional enforcement tools to regulators and provided more consistency in the treatment of capital-deficient institutions. However, sections 38 and 39, as implemented, raise questions about whether regulators will act early and forcefully enough to prevent or minimize losses to the insurance funds. Section 38 does not require regulators to take action until an institution’s capital drops below the adequately capitalized level. However, depository institutions typically experience problems in other areas, such as asset quality and management, long before these problems result in impaired capital levels. Moreover, regulators have wide discretion governing the application of section 39 because the guidelines and regulations implementing section 39, as amended, do not (1) establish clear and specific definitions of unsound conditions and practices or (2) link such conditions or practices to specific mandatory regulatory actions. Other initiatives that have been undertaken as a result of FDICIA, as well as the regulators’ recognition of the need to be more proactive in preventing unsafe and unsound practices, may help increase the likelihood that sections 38 and 39 will be used to provide prompt and corrective regulatory action. 
FDICIA’s corporate governance and accounting reform provisions were designed to improve management accountability and facilitate early warning of safety-and-soundness problems. In addition, FDICIA requires regulators to revise the risk-based capital standards to ensure that reported capital accurately reflects the institution’s risk of operations. Regulators have also announced new initiatives to improve monitoring and control of bank risk-taking, but these initiatives have not been fully implemented or tested. The success of these initiatives, coupled with the regulators’ willingness to use their various enforcement authorities, including sections 38 and 39, will be instrumental in determining whether losses to the insurance funds are prevented or minimized in the future. Available evidence suggests that the implementation of the section 38 capital standards between 1992 and 1995, along with other factors, has benefited the bank and thrift industries and may have helped improve federal oversight. Specifically, the section 38 standards (1) provide financial institutions with incentives to raise equity capital, (2) should help regulators prevent seriously troubled institutions from taking actions that could compound their losses, and (3) should help ensure more timely closure of near-insolvent institutions. In addition, regulatory officials have stated that section 38 serves as an important supplemental enforcement tool. According to the regulators and banking industry analysts, section 38 provides depository institutions with strong incentives to raise additional equity capital. These officials explained that financial institutions were concerned about the potential ramifications of becoming undercapitalized, and the institutions raised additional equity capital to avoid potential sanctions. 
Once the implementing regulations were issued, depository institutions had clear benchmarks as to the levels of capital they needed to achieve to avoid mandatory regulatory intervention. Since the implementation of section 38, thanks in part to record industry profits, the capital levels of banks and thrifts have reached their highest levels since the 1960s. Another benefit of the section 38 capital standards is that they should help prevent certain practices and conditions that rapidly eroded the capital of troubled institutions from 1980 to 1990 and contributed to deposit insurance fund losses. For example, section 38 standards impose growth restrictions to prevent undercapitalized and significantly undercapitalized institutions from trying to “grow” their way out of financial difficulty. As a result, it should be more difficult for these institutions to rapidly expand their asset portfolios and increase potential insurance fund losses, as many thrifts did during the 1980s. Section 38 also requires regulators to prohibit undercapitalized institutions from depleting their remaining capital by paying dividends. OCC and FRS officials told us that another benefit of section 38 is the mandatory closure rule for critically undercapitalized institutions. These officials explained that before the implementation of section 38, regulators typically waited until an institution had 0-percent equity capital before closing it as insolvent. The officials also said that under section 38, they now have a clear legal mandate for closing problem institutions at 2-percent tangible equity capital, which should provide the insurance funds with a greater cushion against losses. Regulatory officials we contacted also said that section 38 serves as a useful supplement to their traditional enforcement authority. 
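The capital-category logic that drives section 38’s mandatory actions can be sketched in code. Only the 2-percent tangible equity trigger for mandatory closure comes from this report; the other cutoffs and the function name are illustrative assumptions, not a statement of the implementing regulations.

```python
def pca_category(total_rbc, tier1_rbc, leverage, tangible_equity):
    """Assign an illustrative section 38 capital category from capital
    ratios expressed as percentages.

    The 2-percent tangible equity trigger for mandatory closure is from
    the report; all other cutoffs here are illustrative assumptions.
    """
    if tangible_equity <= 2.0:
        return "critically undercapitalized"  # mandatory closure rule applies
    if total_rbc >= 10.0 and tier1_rbc >= 6.0 and leverage >= 5.0:
        return "well capitalized"
    if total_rbc >= 8.0 and tier1_rbc >= 4.0 and leverage >= 4.0:
        return "adequately capitalized"
    if total_rbc >= 6.0 and tier1_rbc >= 3.0:
        return "undercapitalized"             # mandatory restrictions begin here
    return "significantly undercapitalized"

# A bank at 1.8 percent tangible equity would face the closure mandate.
print(pca_category(total_rbc=5.0, tier1_rbc=2.5, leverage=2.0, tangible_equity=1.8))
```

Under the pre-FDICIA practice the report describes, the same bank typically would not have been closed until its equity capital reached zero; the point of the 2-percent floor is to preserve a cushion against losses to the insurance funds.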
For example, OCC officials said that section 38 directives allow for the prompt removal of bank officials when the agency believes such officials are responsible for the bank’s financial and operational deterioration. OCC officials said that before FDICIA, removing such individuals took longer, sometimes up to several months. Although the capital-based regulatory approach strengthens federal oversight in several ways, by itself it has significant limitations as a mechanism to provide early intervention to safeguard the insurance funds. Capital is a lagging indicator of a financial institution’s deterioration. Troubled institutions may already have irreversible financial and operational problems that would inevitably result in substantial insurance fund losses by the time their capital deteriorates to the point where mandatory enforcement actions are triggered under section 38. In addition, troubled institutions often fail to report accurate information on their true financial conditions. As a result, many troubled institutions that have serious safety-and-soundness problems may not be subject to section 38 regulatory actions. Capital has been a traditional focus for regulatory oversight because it is a reasonably obvious and accepted measure of financial health. However, our work over the years has shown that, although capital is an important focus for oversight, it does not typically begin to decline until an institution has experienced substantial deterioration in other components of its operations and finances. It is not unusual for an institution’s internal controls, asset quality, and earnings to deteriorate for months, or even years, before conditions require that capital be used to absorb losses. As a result, regulatory actions, such as requirements for capital restoration plans or growth limits, may have only marginal effects because of the extent of deterioration that may have already occurred. 
Relating regulatory actions to capital alone has another inherent limitation in that reported capital levels do not always accurately reflect troubled institutions’ actual financial conditions. Troubled institutions have little incentive to report the true level of problem assets or to establish adequate reserves for potential losses. As a result, some institutions’ reported capital levels were often artificially high. The reporting of inaccurate capital levels was evident from 1980 to 1990 as many of the troubled institutions, which reported some level of capital before failing, ultimately generated substantial losses to the insurance fund. Thus, capital-driven regulatory responses would likely have had limited effectiveness since the institutions were already functionally insolvent. As illustrated by the following example, troubled institutions’ reported capital levels can plummet rapidly in times of economic downturn. In the 1980s, many New England banks, with average equity capital ratio levels exceeding the regulatory minimum requirements then in existence, were engaged in aggressive high-risk commercial real estate lending. These banks frequently ignored basic risk diversification principles by committing a substantial percentage of their lending portfolios to construction, multifamily housing, and commercial real estate lending—in some cases as high as 50 percent. This practice tied their future financial health to those industries. When the New England economy fell into recession in the late 1980s and early 1990s, many of the poorly managed banks in the region experienced a deterioration in their asset quality, earnings, and liquidity well before their capital levels declined. For example, once regulators recognized the recession’s effect on the Bank of New England’s portfolios, examiners required the bank to adversely classify an increasing number of loans—especially commercial real estate loans whose repayment was questionable due to the economic downturn. 
As the level of classified loans increased, the examiners required the Bank of New England to establish reserves for potential loan losses, which reduced the bank’s earnings. Subsequently, the bank suffered continued earnings deterioration and had to use its capital to absorb those losses. The Bank of New England’s managers and regulators had few options for maintaining solvency and, ultimately, for minimizing insurance fund losses. The available options included reducing the institution’s inventory of classified loans by selling assets, raising capital through public offerings, or selling the institution to a healthy buyer. The managers’ and regulators’ ability to carry out these strategies was constrained by the region’s economic downturn, since few investors were willing to purchase the assets of problem banks or to inject new capital into them without some form of financial assistance from FDIC. Ultimately, the bank failed, resulting in a loss to the bank insurance fund of $841 million. Other failed banks in the New England area followed a similar pattern, resulting in substantial losses to the insurance fund. Another reason that section 38, used alone, is a limited mechanism for protecting the deposit insurance funds is that most troubled institutions, including some that ultimately fail, do not fall into the undercapitalized categories. Consequently, regulators overseeing even the most troubled institutions generally would not be compelled to initiate mandatory enforcement actions under section 38. We reviewed data compiled by FDIC that showed that many severely troubled institutions in the period from December 1992 to December 1995 did not fall into section 38’s undercapitalized categories. Therefore, these institutions were not subject to the section’s mandatory enforcement actions. On a quarterly basis, FDIC reports on the number of “problem” institutions. 
These institutions have regulator-assigned composite ratings of 4 or 5 because they typically have severe asset quality, liquidity, and earnings problems that make them potential candidates for failure. These institutions are also typically subject to more intensive oversight, including more frequent examinations by regulators and more frequent required reporting by the institutions on their financial conditions. As of December 31, 1995, 193 banks and thrifts were on FDIC’s problem institution list. However, only 29 institutions were categorized as undercapitalized under section 38 criteria. We made similar comparisons for 1992 through 1995 and found that only 15 to 24 percent of the problem institutions were categorized as undercapitalized under section 38 criteria (see table 3.1). Moreover, a recent study assessed the effectiveness of the current section 38 capital standards in identifying problem institutions and mandating enforcement actions by applying the section 38 standards to the troubled banks of an earlier period. The study concluded that the majority of banks that experienced financial problems between 1984 and 1989 would not have been subject to the capital-based enforcement actions of section 38, if they had been in effect. For example, the study found that 54 percent of the banks that failed within the subsequent 2 years would have been considered to be well- or adequately capitalized between 1984 and 1989. Thus, even if the section 38 standards had been in place in the 1980s, these troubled banks would not have been subject to section 38’s mandatory restrictions and supervisory actions. The study attributed the limitations that the current section 38 standards have in identifying troubled financial institutions to weaknesses in the risk-based capital ratio used by the regulators. 
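The December 31, 1995, comparison reduces to a one-line calculation, which also shows where the low end of the 15- to 24-percent range comes from:

```python
# Figures from the report: FDIC's problem institution list versus the
# section 38 undercapitalized categories, as of December 31, 1995.
problem_institutions = 193
undercapitalized = 29

share = 100.0 * undercapitalized / problem_institutions
print(f"{share:.0f}% of problem institutions were undercapitalized")
```

In other words, roughly 15 percent of the most troubled institutions at that date were subject to section 38’s mandatory provisions; the remaining 85 percent were not.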
Specifically, the study stated that the risk-based ratio does not (1) account for the fact that many banks do not adequately reserve for potential loan losses or (2) assign an adequate risk weight to cover the level of adversely classified assets that a bank may have on its books. Although the regulators are in the process of revising the risk-based capital standards, the revisions announced as of September 1996 do not address the two previously mentioned factors. The regulators’ efforts to revise the risk-based capital standards are discussed later in this chapter and in appendix I. The 1994 failure of one of the banks reviewed by the Treasury OIG, Mechanics National Bank of Paramount, California, illustrated some of the limitations of section 38 capital standards. The Treasury OIG found that despite OCC’s aggressive use of section 38 enforcement actions, OCC did not reverse the bank’s decline or prevent material loss to the bank insurance fund. The bank’s failure also demonstrated that severely troubled banks may not be subject to section 38’s restrictions and mandatory enforcement actions for a substantial period. According to the Treasury OIG report, the Mechanics National Bank pursued an aggressive growth strategy between 1988 and 1991 that contributed substantially to its failure. The bank concentrated its loan portfolio in risky service station loans and speculative construction and development projects. Under a Small Business Administration lending program, the bank also developed a significant portfolio of loans that was poorly underwritten and inadequately documented. In 1990, a downturn in the California economy generated a substantial deterioration in the bank’s loan portfolio. In 1991, OCC issued a cease-and-desist order against the bank that required substantial improvements in the bank’s operations and financial condition. Despite the cease-and-desist order, the bank’s asset quality and earnings continued to deteriorate over the next several years. 
The Treasury OIG report said that when section 38 capital standards became effective in December 1992, the Mechanics National Bank had a ratio of classified assets-to-capital of about 309 percent and had experienced losses of $4.3 million during 1992. OCC had just completed an examination of the bank in December 1992, which concluded that the bank was likely to fail. At that time, despite apparent asset quality and earnings problems, the bank’s capital had not deteriorated to the point where it was undercapitalized according to section 38 criteria. The bank’s capital ratios fell within the adequately capitalized category. The bank continued to be categorized as adequately capitalized during the first and second quarters of 1993, despite its high levels of classified assets and mounting losses. In July 1993, OCC reclassified the bank to the undercapitalized level. On January 10, 1994, OCC notified the bank that it was critically undercapitalized because its total capital-to-asset ratio had fallen below 2 percent. The regulators closed the bank in April 1994. Although the Treasury OIG report criticized OCC’s supervision and enforcement activities for the period between 1988 and 1991, the report found that the agency’s use of section 38 enforcement authority during 1993 and 1994 was appropriate. For example, the OIG report highlighted OCC’s use of its section 38 reclassification authority to remove two Mechanics National Bank officers who were thought to be largely responsible for the bank’s problems. OCC also used its section 38 authority to close the bank on April 1, 1994, within 90 days of the notification of its critically undercapitalized status. Nevertheless, OCC’s enforcement actions under section 38 were largely ineffective in minimizing the losses that were already embedded in the bank’s loan portfolio before it fell to the undercapitalized level. 
The bank’s estimated loss to the insurance fund of $37 million represented 22 percent of the bank’s total assets of $167 million. The impact of section 38’s implementation on minimizing losses to the insurance funds is difficult to assess. Between 1985 and 1989, losses to the bank insurance fund ranged from approximately 12 to 23 percent of the assets of failed banks with a 5-year weighted average of about 16 percent. As we reported in 1991, this high rate of losses indicated that regulators were not (1) taking forceful actions that effectively prevented dissipation of assets or (2) closing institutions when they still had some residual value. There have been some signs of improvement since the 1985-to-1989 period as illustrated in table 3.2. During the first 2 full years that section 38 was in effect, 1993 and 1994, the rates of loss were 17 and 10 percent, respectively, for a weighted average of 15 percent. While these loss rates are still significant, it is too early to assess section 38’s long-term effectiveness in reducing losses to the insurance funds compared with preceding years. However, the experience to date does suggest that the implementation of section 38 alone is likely to provide only limited assurance that bank failures will not have significant effects on the insurance funds. 
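The loss-rate arithmetic used above is straightforward: a single failure’s loss rate is the insurance fund loss divided by the failed bank’s assets, and a multiyear average weights each failure by its assets. A minimal sketch (the function names are hypothetical; the Mechanics National Bank figures are from the report):

```python
def loss_rate(loss, assets):
    """Insurance fund loss as a percentage of the failed bank's assets."""
    return 100.0 * loss / assets

def weighted_average_loss_rate(failures):
    """Asset-weighted average loss rate over (loss, assets) pairs,
    e.g. in millions of dollars."""
    total_loss = sum(loss for loss, _ in failures)
    total_assets = sum(assets for _, assets in failures)
    return 100.0 * total_loss / total_assets

# Mechanics National Bank: a $37 million loss on $167 million in assets.
print(round(loss_rate(37, 167)))  # → 22
```

The 15-percent figure for 1993 and 1994 is this same asset-weighted average computed across all failures in those two years, which is why it need not be the simple midpoint of 17 and 10 percent.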
Moreover, the guidelines and regulations do not require regulators to take corrective action against institutions that do not meet the standards for safety and soundness. In two 1991 reports, we recommended that Congress and regulators develop a formal, regulatory trip wire system that would require prompt and forceful regulatory action tied to specific unsafe banking practices. The trip wire system we envisioned would have been specific enough to provide clear guidance about what actions should be taken to address specified unsafe banking practices and when the actions should be taken. The intent was to increase the likelihood that regulators would take forceful action to stop risky practices before a bank’s capital began to fall and it was too late to do much about the bank’s condition or about insurance fund losses. The trip wire system was also to consist of objective criteria defining conditions that would trigger regulatory action. In contrast, the safety-and-soundness standards, contained in the guidelines developed to implement section 39, as amended, consist of broad statements of sound banking principles that are subject to considerable interpretation by the regulators. For example, the standards for asset quality state that the institution should establish and maintain a system to identify problem assets and prevent deterioration of those assets in a manner commensurate with its size and the nature and scope of its operations. 
Specifically, the guidelines direct institutions to (1) conduct periodic asset quality reviews to identify problem assets and estimate the inherent losses of those assets; (2) compare problem asset totals to capital and establish reserves that are sufficient to absorb estimated losses; (3) take appropriate corrective action to resolve problem assets; (4) consider the size and potential risks of material asset concentrations; and (5) provide periodic asset reports containing adequate information for management and the board of directors to assess the level of asset risk. Although the asset quality standards identify general controls and processes the regulators expect institutions to have, the standards do not provide specific, measurable criteria of unsafe conditions or practices that would trigger mandatory enforcement actions. In our 1991 report on deposit insurance reform, we suggested that the classified assets-to-capital ratio could serve as an objective criterion because the ratio is routinely used by bank examiners to identify deteriorating asset quality. For example, we reported that the regulators become increasingly concerned when a bank’s classified assets-to-capital ratio increases to 50 percent or more. Similarly, during the interagency process used to develop the section 39 safety-and-soundness standards, FRS had proposed that the regulators take mandatory enforcement actions when a bank’s classified assets-to-capital ratio reached 75 to 100 percent. However, the regulators decided not to include this requirement after CDRI provided them with the option of omitting quantifiable measures of unsafe and unsound conditions. Without such specific criteria, regulators will continue to exercise wide discretion in determining whether a depository institution’s asset quality deterioration is at a point where enforcement actions are necessary. 
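The classified assets-to-capital ratio discussed above lends itself to exactly the kind of objective trip wire our 1991 reports envisioned. The sketch below is illustrative only: the 50-percent concern level and the 75-percent mandatory-action floor proposed by FRS come from the report, but the tiering and the function name are hypothetical.

```python
def classified_assets_signal(classified_assets, capital):
    """Return the classified assets-to-capital ratio (in percent) and an
    illustrative supervisory signal.

    The 50-percent concern level and the 75-percent mandatory-action
    floor are taken from the report; the tiering itself is hypothetical.
    """
    ratio = 100.0 * classified_assets / capital
    if ratio >= 75.0:
        return ratio, "mandatory enforcement action"
    if ratio >= 50.0:
        return ratio, "heightened concern"
    return ratio, "no trigger"

# Mechanics National Bank reportedly stood at about 309 percent in
# December 1992 while still categorized as adequately capitalized; the
# dollar amounts below are hypothetical values reproducing that ratio.
ratio, signal = classified_assets_signal(classified_assets=309.0, capital=100.0)
print(round(ratio), signal)  # → 309 mandatory enforcement action
```

A trip wire of this kind would have flagged the bank for mandatory action more than a year before its capital ratios fell into section 38’s undercapitalized categories.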
Similarly, the section 39-based loan documentation standards do not establish specific criteria for regulators to use to assess an institution’s safety and soundness. The regulators believed that general standards provide an acceptable gauge against which compliance can be measured, while at the same time allowing for differing approaches to loan documentation. However, this approach to loan documentation standards differs from the long-standing approach that the regulators have established in their examination manuals. The manuals contain specific loan documentation requirements that examiners are to use in assessing the safety and soundness of depository institutions. For example, real-estate construction loan files are to include current financial statements, inspection reports, and written appraisals. Since the section 39 standards do not contain similar documentation requirements, we believe the standards are open to considerable interpretation and do little to limit the wide discretion regulators have in determining whether banks have adequate loan documentation practices. Furthermore, the loan documentation standards do not specify a level of noncompliance at which enforcement actions will be required. Although it may be difficult to develop quantifiable criteria for making such enforcement decisions, there are various regulatory “rules of thumb” in place that we believe could serve as the basis for triggering mandatory actions. For example, in its 1988 report on the reasons why banks fail, OCC found that banks with loan documentation problems in 15 to 20 percent or more of their loan portfolios were typically operating in an unsafe and unsound manner. As discussed earlier, CDRI amended the section 39 mandate that regulators require a depository institution to file a compliance plan if the institution is found not to be in compliance with the standards. 
The new provision allows regulators greater flexibility in deciding whether to impose such a requirement. In the July 10, 1995, Notice of Final Rulemaking, the four regulators (OCC, FRS, FDIC, and OTS) stated that they expect to require a compliance plan from any institution with deficiencies severe enough to threaten the safety and soundness of the institution. However, as discussed in the previous section, regulators have not developed quantifiable criteria or other specific guidance for measuring an institution’s compliance with the section 39 safety-and-soundness standards. Therefore, it is not clear how regulators would determine whether an institution’s noncompliance with generally accepted management principles is “severe” enough to warrant regulatory action. In addition, the implementing regulations do not provide any specific criteria for compliance plans beyond those contained in the section 39 legislation. The regulations merely state that compliance plans should identify steps that the institution is to take to correct the identified problems and the time by which the steps are to be taken. In contrast, section 38 and its implementing regulations establish more specific criteria for capital restoration plans (CRPs). For example, CRPs must specify capital levels that the institution expects to achieve for each year the plans are in effect. In addition, CRPs must show how the institution will comply with any restrictions on its activities under section 38 and the types of businesses and activities in which the institution will engage. Section 38 requires regulators to reject any CRP unless it contains such information, is based on realistic economic assumptions, and would not appreciably increase risk to the institution. In the absence of similar criteria, there is less assurance that the compliance plans developed under section 39 will consistently result in the prompt remediation of deficiencies. 
FDICIA contained a number of reforms and provisions that were designed to complement sections 38 and 39. FDICIA’s corporate governance and accounting reform provisions directed depository institutions to improve their corporate governance and the information they report to the regulators. FDICIA also required regulators to revise their risk-based capital standards to ensure that those standards take adequate account of interest rate risk, concentrations of credit, and nontraditional activities. In addition, regulators have stated that their oversight of depository institutions has improved, and they are in the process of modifying their examination approaches to emphasize the monitoring of risk-taking by depository institutions. However, we did not evaluate the effectiveness of these various initiatives because many had not been fully implemented or tested. FDICIA placed a number of new requirements on depository institutions to improve their corporate governance and the information they provide to the regulators. As previously discussed, FDICIA requires all but small (total assets of less than $500 million) depository institutions to submit annual reports to the regulator on the institutions’ financial conditions and management. The report is to include management’s assessment of (1) the effectiveness of the institution’s internal controls and (2) the institution’s compliance with the laws and regulations designated by the regulator. In addition, FDICIA required the institution’s external auditors to report separately on these assertions made by management. Furthermore, FDICIA requires depository institutions to have an independent audit committee composed of outside directors who are independent of institutional management. As we reported in 1993, these new requirements have the potential to significantly enhance the likelihood that regulators will identify emerging problems in banks and thrifts earlier. 
For example, regulators can use the results of an institution’s management assessments and external auditor’s reviews to identify those areas with the greatest risk exposure. This identification process should allow the regulators to improve the quality and efficiency of their examinations. While these FDICIA requirements may result in the early identification of troubled institutions, they do not ensure that regulators will take consistent supervisory actions to address safety-and-soundness problems before they adversely affect an institution’s capital levels. In response to FDICIA section 305 requirements, regulators have recently undertaken revisions of the risk-based capital standards that they use to implement provisions of section 38. Specifically, regulators have revised or are revising the risk-based capital standards to cover risks associated with concentrations of credit, nontraditional financial products, and interest rate movements. As of September 1996, the revisions to the risk-based capital standards announced by the regulators will not change the capital ratios used for section 38 purposes. Instead, regulators plan to use the examination process to identify institutions that have excessive and poorly managed risk exposure, due to concentrations of credit, nontraditional products, or interest rate risk. Regulators said that they will require such institutions to hold greater levels of capital than those required of other institutions. See appendix I for a more detailed discussion of section 305’s requirements and the regulators’ planned revisions to risk-based capital standards. Regulators have stated that they have learned from their experiences in the 1980s and that their approach to depository institution oversight has changed. The regulators said that they have recognized the need to take proactive steps to prevent institutions from engaging in unsafe and unsound practices. 
For example, OCC, FRS, and FDIC are developing new examination procedures to better monitor and control bank risk-taking (see app. II). A July 1996 proposal to revise the rating system used by the regulators also reflects the increased emphasis on evaluating an institution’s risk exposure and the quality of its risk management systems. Efforts by the regulators to improve federal oversight through examinations focused on risk management, along with the accounting and corporate governance provisions of FDICIA, could help provide early warning signals of potential safety-and-soundness problems. However, whether this potential for earlier detection will be translated into corrective action is subject to some question because the regulators still have a great deal of discretion under section 39, as amended. Although the section 38 capital standards appear to have played some role in strengthening the condition of the banks and thrifts, other factors have also contributed to this improvement, including lower interest rates and an improving economy. Despite the apparently sound financial condition of the bank and thrift industry, the possibility cannot be ruled out that the current strong performance of the bank and thrift industry is masking management problems or excessive risk-taking that is not being addressed by regulators. For example, the financial press reported in November 1995 and March 1996 that delinquent consumer loans, such as credit card loans, grew considerably during these years and that this growth was partially attributed to lower credit standards. Whether the regulators are more successful in detecting risk management problems and then taking the requisite corrective actions may not be fully known until another downturn in the economy affects the bank and thrift industry. 
In 1991, Congress enacted FDICIA, in part, because of concerns that the exercise of regulatory discretion during the 1980s did not adequately protect the safety and soundness of the banking system or minimize insurance fund losses. FDICIA’s Prompt Regulatory Action provisions were originally enacted to limit regulatory discretion in key areas and to mandate regulatory responses against financial institutions with safety-and-soundness problems. The implementation of section 38 has provided capital categories and mandated actions that regulators should take if banks or thrifts fall into specific categories. However, section 39, as amended, appears to leave regulatory discretion largely unchanged from what existed before the passage of FDICIA. Sections 38 and 39 provide regulators with additional enforcement tools that they can use to obtain corrective action or close institutions with serious capital deficiencies and/or safety-and-soundness problems. These provisions include the enforcement tool that allows regulators to remove bank officials believed to be the cause of the institution’s problems as well as other actions intended to stop the institution from engaging in risky practices. Moreover, section 38 appears to have encouraged institutions to raise additional equity capital and should help prevent capital-deficient institutions from compounding losses. Despite such benefits, severely troubled institutions may not be subject to mandatory restrictions and supervisory actions under section 38 due to its reliance on capital as the basis for regulatory intervention. In addition, section 39 does not require regulators to take actions against poorly managed institutions that have not yet reached the point of capital deterioration. Legislative and regulatory changes have resulted in the guidelines’ taking the form of broad statements of general banking principles rather than specific measures of unsafe and unsound conditions.
Furthermore, regulators have not established criteria for determining when an institution is in noncompliance with the guidelines. The implementation of FDICIA’s other provisions and various initiatives undertaken by the regulators to improve their examination process may help to increase the likelihood that regulators will take prompt and corrective regulatory action. FDICIA’s accounting and supervisory reforms provide a structure to strengthen corporate governance and to facilitate early warning of safety-and-soundness problems. In addition, regulators have stated that their approach to supervision has changed since the 1980s, and they are developing new examination procedures to be more proactive in monitoring and assessing bank risk-taking. However, we did not evaluate the effectiveness of these initiatives because many of them have not been fully implemented or tested. Therefore, at present, it is difficult to determine if these initiatives will result in the earlier detection of safety-and-soundness problems and, if so, whether regulators will take strong and forceful actions early enough to prevent or minimize future losses to the insurance funds due to failures. In its comments on our report, OCC agreed with our conclusion that sections 38 and 39 may not always result in prompt and corrective regulatory action. Nonetheless, OCC believes that FDICIA’s combination of section 38 mandatory restrictions and the regulatory discretion retained under section 39 allows regulators to tailor their supervision to suit an institution and its particular problems. The Federal Reserve Board of Governors stated that it had no formal comments but that the report appeared to accurately describe the Federal Reserve’s policies, procedures, and practices with respect to the implementation of FDICIA’s Prompt Regulatory Action provisions, as amended. OTS stated that section 38 effectively encourages institutions to avoid becoming or remaining undercapitalized.
OTS emphasized that the section 39 standards are untested, and it supported the flexibility built into section 39. OTS believes that existing discretionary supervisory and enforcement tools are adequate to deal with most safety-and-soundness issues, apart from capital. FDIC also supported the discretionary and flexible nature of the section 39 safety-and-soundness standards. FDIC pointed out that the overwhelming number of comments that the regulators received on the section 39 standards were in favor of general rather than specific standards. FDIC stated that the section 39 standards adopted by the regulators minimize regulatory burden while recognizing that there is more than one way to operate in a safe-and-sound manner. We do not disagree that there is a need for some degree of regulatory discretion. Rather, we see the issue as one of striking a proper balance between the need for sufficient regulatory discretion to respond to circumstances at a particular institution and the need for certainty for the banking industry about what constitutes an unsafe or unsound condition and what supervisory actions would be expected to result from those conditions. The subjective nature of the standards continues the wide discretion that regulators had in the 1980s over the timing and severity of enforcement actions. Such discretion resulted in the regulators’ not always taking strong actions early enough to address safety-and-soundness problems before they depleted an institution’s capital. However, we note that the implementation of FDICIA along with various regulatory initiatives undertaken since the passage of FDICIA may help in the earlier detection of institutions with safety-and-soundness problems. 
These initiatives, along with the regulators’ willingness to use their various enforcement authorities—including sections 38 and 39—to prevent or minimize potential losses to the deposit insurance funds, will be instrumental in determining whether the proper balance between discretion and certainty has been attained.

GAO reviewed the Federal Reserve System’s (FRS) and the Office of the Comptroller of the Currency’s (OCC) efforts to implement the Federal Deposit Insurance Corporation Improvement Act of 1991 (FDICIA) prompt regulatory action provisions and the impact of those provisions on federal oversight of depository institutions. GAO found that: (1) regulators have taken the required steps to implement FDICIA prompt regulatory action provisions, but have had to use the additional enforcement powers granted by the provisions against a relatively small number of depository institutions; (2) the improved financial condition of banks and thrifts has allowed them to build their capital levels to the point where only a few institutions were considered undercapitalized according to section 38 standards; (3) OCC and FRS generally took prescribed regulatory actions against the 61 undercapitalized banks reviewed; (4) as of September 1996, regulators had not used their section 39 authority; (5) the final two safety and soundness standards, asset quality and earnings, required to fully implement section 39 became effective on October 1, 1996; (6) the guidelines and regulations issued to date by regulators to implement section 39 do not establish clear, objective criteria for what would be considered unsafe and unsound practices or conditions or link the identification of such conditions to specific mandatory enforcement actions; (7) other FDICIA provisions and initiatives recently announced by regulators should help in the early identification of depository institutions with safety and soundness problems; and (8) the success of these provisions and initiatives will be determined by the regulators’ willingness to use their enforcement powers early enough to prevent or minimize losses to the deposit insurance funds.
Ex-Im, the official ECA of the United States, is an independent agency operating under the Export-Import Bank Act of 1945, as amended. Its mission is to support the export of U.S. goods and services overseas, thereby supporting U.S. jobs. Official ECAs are organizations that provide export credits with explicit government backing, where either the government or the government-owned ECA assumes all or a portion of the risk. Export credits are financing arrangements designed to mitigate risks to buyers and sellers associated with international transactions. Buyers and sellers in international transactions face unique risks, such as foreign exchange risk, difficulties in settling disputes when damages to shipments occur, or instability in the buyer’s country. For these reasons, private lenders may be reluctant to finance a buyer’s purchase of foreign goods or finance a seller’s operations. Export credit products are meant to facilitate international transactions by mitigating these risks. An international agreement, the Organisation for Economic Cooperation and Development (OECD) Arrangement on Officially Supported Export Credits (the OECD Arrangement), governs various aspects of U.S. and other member countries’ ECAs. The OECD Arrangement aims to provide a framework for the use of officially supported export credits; promote a level playing field, where competition is based on the price and quality of the exported goods and not the financial terms provided; and provide transparency over programs and transactions. For example, the OECD Arrangement sets minimum transaction fees. Participants include Australia, Canada, the European Union, Japan, South Korea, New Zealand, Norway, Switzerland, and the United States. The OECD Arrangement applies to officially supported export credits with repayment terms of 2 years or more. Congress has placed specific requirements on Ex-Im’s operations. For example, Ex-Im’s charter states that it should not compete with the private sector.
Rather, Ex-Im’s role is to assume the credit and country risks that the private sector is unable or unwilling to accept. In addition, Ex-Im must submit annual reports to Congress on its actions to provide financing on a competitive basis with other ECAs, and to minimize competition in government-supported export financing. Furthermore, Ex-Im must make available at least 20 percent of its authorized aggregate loans, loan guarantees, and insurance (export financing) each fiscal year for the direct benefit of small businesses. Congress also has given Ex-Im instructions on the share of its financing for environmentally beneficial exports, including renewable energy, and to expand the promotion of its financing in sub-Saharan Africa. Ex-Im offers a number of export financing products, including direct loans, loan guarantees, and export credit insurance. Ex-Im makes fixed-rate loans directly to international buyers of goods and services. These loans can be short-term (up to 1 year), medium-term (more than 1 year up to 7 years), or long-term (more than 7 years). Ex-Im also guarantees loans made by private lenders to international buyers of goods or services, promising to pay the lenders if the buyers default. Like direct loans, loan guarantees may be short-, medium-, or long-term. Additionally, Ex-Im provides export credit insurance products that protect the exporter from the risk of nonpayment by foreign buyers for commercial and political reasons. This allows U.S. exporters to offer foreign purchasers the opportunity to make purchases on credit. Credit insurance policies can cover a single buyer or multiple buyers and can be short- or medium-term. Insurance policies are also available to cover lenders and exporters that finance purchases by foreign buyers. Ex-Im’s short-term insurance covers a wide range of goods, raw materials, spare parts, components, and most services on terms up to 180 days.
Medium-term insurance policies protect longer-term financing to international buyers of capital equipment or services, covering one or a series of shipments. Financing under medium-term insurance policies generally can extend up to 5 years. Some of Ex-Im’s short-term products are geared toward U.S. small businesses that have the potential to export but lack sufficient funds to support export efforts and include direct loans and loan guarantees to provide these businesses with working capital. Working capital loans are fixed-rate loans that provide exporters with 6- or 12-month revolving lines of credit. Working capital guarantees generally cover 90 percent of the principal and interest on a loan made to an exporter by a private lender. The guarantees are typically 1 year, but can extend up to 3 years and be used on a single transaction or on a revolving basis. Ex-Im delegates the authority for underwriting most of these transactions directly to Ex-Im-approved private-sector lenders. Ex-Im’s long-term products are often used in project finance transactions, what Ex-Im terms “structured finance” transactions, and aircraft transactions. These transactions involve complicated financing arrangements, and Ex-Im has separate divisions to handle them. These transactions also generally involve a direct loan or loan guarantee and their value is usually greater than $10 million. Project finance is an arrangement in which Ex-Im lends to newly created project companies in foreign countries and looks to the project’s future cash flows as the source of repayment instead of relying directly on foreign governments, financial institutions, or established corporations for repayment of the debt. The projects involve a large number of contracts for completion and operation.
Project finance transactions have repayment terms up to 14 years (18 years for renewable energy transactions) and typically include the financing of development of a new facility in a foreign country, such as a factory or power plant, or significant facility or production expansions. Most of Ex-Im’s project finance transactions have been oil and gas and power sector projects. In structured finance transactions, Ex-Im provides direct loans or loan guarantees to existing companies located overseas based on these companies’ balance sheets plus credit enhancements, such as escrow or reserve accounts, subject to Ex-Im’s control; special insurance requirements; and letters of credit pledged to Ex-Im through a bank or other third party. Structured finance transactions generally have repayment terms of 10 years (12 years for power transactions). Among others, Ex-Im has completed structured transactions for oil and gas projects and air traffic control, telecommunications, and manufacturing entities. Finally, Ex-Im provides long-term direct loans and loan guarantees that support the purchase of aircraft. Ex-Im uses external advisers to assist in arranging project finance, structured finance, and aircraft transactions. These advisers can include financial, legal, technical, insurance, market, and environmental consultants. Ex-Im faces multiple risks when it extends export credit financing. These risks include credit, political, market, concentration, foreign-currency, and operational risks, which are defined as follows:

Credit risk. The risk that an obligor may not have sufficient funds to service its debt or be willing to service its debt even if sufficient funds are available.

Political risk. The risk of nonrepayment resulting from expropriation of the obligor’s property, war, or inconvertibility of the obligor’s currency into U.S. dollars.

Market risk. The risk of loss from declining prices or volatility of prices in the financial markets. Market risk can arise from shifts in macroeconomic conditions, such as productivity and employment, and from changes in expectations about future macroeconomic conditions.

Concentration risk. Risk stemming from the composition of a credit portfolio. Concentration risk arises through an uneven distribution of credits within a portfolio. Ex-Im faces three types of concentration risk:

Industry concentration. The risk that events could negatively affect not only one obligor but also many obligors in the same industry simultaneously.

Geographic concentration. The risk that events could negatively affect not only one obligor but many obligors simultaneously across a country or region.

Obligor concentration. The risk that defaults from a small number of obligors will have a major adverse impact on the portfolio because they account for a large share of the portfolio.

Foreign-currency risk. The risk of loss as a result of appreciation or depreciation in the value of a foreign currency in relation to the U.S. dollar in Ex-Im transactions denominated in that foreign currency.

Operational risk. The risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.

In 1990, to more accurately measure the cost of federal credit programs, Congress enacted FCRA, which requires agencies that provide domestic or international credit, including Ex-Im, to estimate and request appropriations for the long-term net losses, or subsidy costs, of their credit activities.
Credit programs incur subsidy costs when estimated payments by the government (such as loan disbursements or claims paid on defaulted loans) exceed estimated payments to the government (such as principal repayments, fees, interest payments, and recoveries), on a net present value basis over the life of the direct loan or loan guarantee, excluding administrative costs. Credit programs have a positive subsidy cost when the present value of estimated payments by the government exceeds the present value of estimated payments to the government (collections). When credit programs have a positive subsidy cost, they require appropriations. Conversely, negative subsidy programs are those in which the present value of estimated collections is expected to exceed the present value of estimated payments. FCRA requires that agencies have budget authority to cover credit subsidy costs before entering into credit transactions. For their annual appropriation requests, agencies estimate credit subsidy costs by cohort. To estimate their subsidy costs, credit agencies estimate the future performance of direct loans and loan guarantees. Agency management is responsible for accumulating relevant, sufficient, and reliable data on which to base these estimates. To estimate future credit performance, agencies generally have models that include assumptions about defaults, prepayments, recoveries, and the timing of these events and are based on the nature of their credit programs. In addition to assumptions based on agencies’ programs, agencies also must incorporate economic assumptions included in the President’s budget for credit subsidy calculations. An agency’s credit subsidy costs can be expressed as a rate. For example, if an agency commits to guarantee loans totaling $1 million and has estimated that the present value of cash outflows will exceed the present value of cash inflows by $15,000, the estimated credit subsidy rate is 1.5 percent.
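The $1 million guarantee example above can be reproduced with a short net-present-value calculation. The cash flows and discount rate below are hypothetical, chosen only so the result lands near the text's $15,000 and 1.5 percent; they are not Ex-Im's actual estimates, and real FCRA discounting uses prescribed Treasury rates and cohort-level cash flows.

```python
# Illustrative credit-subsidy-rate calculation in the spirit of FCRA:
# discount each year's estimated cash flows to present value, then compare
# outflows (e.g., claims paid) with inflows (e.g., fees and recoveries).
# All figures below are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of (year, amount) cash flows to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

guarantee_amount = 1_000_000  # total loans the agency commits to guarantee

# Hypothetical estimates: claims paid in years 2-3; fees and recoveries collected.
outflows = [(2, 30_000), (3, 25_000)]             # payments by the government
inflows = [(0, 20_000), (2, 10_000), (3, 5_000)]  # payments to the government

discount_rate = 0.05  # hypothetical discount rate, not a prescribed Treasury rate

net_subsidy_cost = (present_value(outflows, discount_rate)
                    - present_value(inflows, discount_rate))
subsidy_rate = net_subsidy_cost / guarantee_amount

print(f"net subsidy cost: ${net_subsidy_cost:,.0f}")  # roughly $15,000
print(f"subsidy rate: {subsidy_rate:.1%}")            # about 1.5 percent
```

Because the cost is positive (outflows exceed inflows in present-value terms), this hypothetical program would require an appropriation; a negative result would indicate a negative-subsidy program.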
Under FCRA, agencies generally must produce annual updates of their credit subsidy estimates—known as reestimates—of each cohort based on information about the actual performance and estimated changes in future credit performance. This requirement reflects the fact that estimates of credit subsidy costs can change over time. Beyond changes in estimation methodology, each additional year provides more historical data on credit performance that may influence estimates of the amount and timing of future cash flows. Economic assumptions also can change from one year to the next, including assumptions on interest rates. When reestimated credit subsidy costs exceed agencies’ original credit subsidy cost estimates, the additional subsidy costs are not covered by new appropriations but rather are funded from permanent, indefinite budget authority. In addition to estimating credit subsidy costs for budgetary purposes, Ex-Im calculates future credit losses for its annual audited financial statements. Ex-Im’s financial statements are prepared according to generally accepted accounting principles in the United States applicable to federal agencies. These principles require Ex-Im to follow Federal Accounting Standards Advisory Board (FASAB) guidance when establishing loss allowances for direct loans and loss reserves for loan guarantees or insurance transactions to cover future credit losses. Ex-Im business activities grew substantially in recent years. From 1990 through 2012, Ex-Im’s financial exposure grew by more than 250 percent (or about 120 percent after adjusting for inflation), with most of the growth occurring after 2008 (see fig. 1). From 1990 through 2007, Ex-Im’s exposure grew from about $30 billion to $57.5 billion—an average annual increase of just under 4 percent. From 2008 through 2012, Ex-Im’s exposure rose from $58.5 billion to $106.6 billion—an average annual growth rate of more than 16 percent.
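The average annual growth rates quoted for Ex-Im's exposure follow directly from the endpoint figures. The check below is plain compound-growth arithmetic, not an Ex-Im methodology:

```python
# Compound annual growth rate implied by the exposure figures in the text.

def cagr(start_value, end_value, years):
    """Average annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Ex-Im financial exposure, in billions of dollars, per the report.
growth_1990_2007 = cagr(30.0, 57.5, 17)   # "just under 4 percent"
growth_2008_2012 = cagr(58.5, 106.6, 4)   # "more than 16 percent"

print(f"1990-2007: {growth_1990_2007:.1%} per year")
print(f"2008-2012: {growth_2008_2012:.1%} per year")
```

The first period works out to about 3.9 percent per year and the second to about 16.2 percent, consistent with the figures cited above.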
Most of Ex-Im’s recent growth occurred through its long-term loan guarantee and direct loan products. Overall, annual Ex-Im authorizations rose from $14.4 billion in 2008 to $35.8 billion in 2012 (see fig. 2). Over the same period, annual authorizations for long-term products grew from $8.5 billion to $26.6 billion—a more than three-fold increase—and accounted for almost 75 percent of the authorizations Ex-Im made in 2012. In particular, annual authorizations for new project and structured finance transactions, almost all of which used long-term loan guarantees and direct loans, increased from $1.9 billion in 2008 to $12.6 billion in 2012, or almost half of the long-term authorizations that Ex-Im made in 2012. By region, annual authorizations grew most rapidly in Asia and Oceania (see fig. 3). For example, Ex-Im authorizations for export financing to Asia grew from $3.6 billion in 2008 to $13.5 billion in 2012, an increase of about 275 percent. Overall, Asia accounted for almost 38 percent of Ex-Im’s authorizations in 2012. Financing for exports to Oceania, which accounted for a smaller portion (about 9 percent) of Ex-Im’s 2012 authorizations, rose even more sharply—from about $1 million in 2008 to $3.2 billion in 2012. By industry sector, financing for aircraft industry exports was the single largest source of authorizations in recent years, but authorizations in other sectors grew more quickly (see fig. 4). Aircraft-related authorizations grew from $5.7 billion in 2008 to $11.9 billion in 2012—an increase of about 110 percent—and accounted for about one-third of Ex-Im’s authorizations in 2012. Over the same period, authorizations in the service sector rose more than 20-fold—from $229 million to $5.3 billion—and represented 15 percent of Ex-Im’s authorizations in 2012.
Additionally, authorizations increased by more than 400 percent in the power utilities sector (from $0.6 billion to $3.1 billion) and by more than 130 percent in the oil and gas mining sector (from $1.8 billion to $4.2 billion). The power utilities sector and oil and gas mining sector accounted for 9 percent and 12 percent of Ex-Im’s 2012 authorizations, respectively. Ex-Im officials and all of the representatives from industry trade associations and research groups we interviewed agreed that reduced availability of private-sector financing after the 2007-2009 financial crisis was the leading factor contributing to increased demand for Ex-Im financing. For example, officials from Ex-Im’s Policy and Planning Group and industry representatives told us that the growing reluctance of commercial banks to provide export financing in the wake of the financial crisis was a primary driver of Ex-Im’s growth. They explained that the financial crisis diminished the availability of commercial lending and that Ex-Im provided financing to fill the void. Officials from the foreign ECAs we contacted described similar effects on their business activity. For example, officials from the ECAs in Canada and France explained that as commercial banks withdrew from the trade finance market, their own export credit activities grew as they made efforts to fill the resulting gap. Officials from Ex-Im, industry trade and research organizations, and other ECAs we interviewed also said that the ongoing and future implementation of international banking standards further limited private-sector financing, contributing to growth in ECA activities. For example, officials cited the Basel Committee on Banking Supervision “Basel III” standards for banking institutions that include risk-based capital and other requirements. As of December 2012, U.S. regulators were preparing to finalize rules for implementation of these standards.
Ex-Im’s 2011 report to Congress on export credit competition notes that the transition to Basel III practices would require most banks to increase prices for export and other types of financing, and consequently, direct loans from ECAs became the preferred mechanism for some long-term trade deals. All five representatives from industry trade associations and research groups with whom we discussed this issue agreed that the ongoing implementation of Basel regulations could further constrain private-sector export financing in the already tightened lending environment following the financial crisis. Similarly, officials from Canada’s ECA noted that their business increased partly in response to banks’ plans for implementing Basel requirements. Commercial bank lending trends in the United States and Europe broadly demonstrate the reduced availability of private-sector financing during the 2007-2009 financial crisis, including for U.S. small businesses. As shown in figure 5—which shows the percentage of lenders that tightened or eased credit standards at different points in time—credit standards generally tightened during the financial crisis. Following the crisis, more U.S. banks began to ease rather than tighten credit standards, while more banks in the euro area continued to tighten standards, though not as dramatically as they had during the crisis. In emerging markets, following the onset of the 2007-2009 financial crisis, perceptions of risk and the cost of credit in corporate bond markets also jumped—dramatically for higher-risk borrowers. As shown in figure 6, risk premiums for corporate bonds spiked in late 2008 and early 2009 and remain above precrisis levels. Ex-Im and officials from industry trade associations and foreign ECAs noted the significance of Ex-Im’s direct loan product to Ex-Im’s recent growth.
For example, in its 2011 report to Congress on export credit competition, Ex-Im noted the competitive advantage that direct loans—a product not offered by some other ECAs—gave Ex-Im and other ECAs with similar products. As previously shown in figure 2, long-term direct loan authorizations grew from $356 million in 2008 to almost $12 billion in 2012. Representatives from industry trade associations noted that Ex-Im’s ability to offer direct loans helped Ex-Im to fill the gap in private-sector lending following the financial crisis and implementation of more stringent banking regulations. Other ECAs also pointed to direct lending as contributing to their ability to fill financing gaps following the recent financial crisis. Of the four foreign ECAs we interviewed, two (Canada and Japan) had existing direct lending capability prior to the 2007-2009 financial crisis. Both agreed that their capability to make direct loans was an important factor in their ability to fill trade financing gaps. Ex-Im officials and representatives from industry trade associations and research groups identified other possible drivers of Ex-Im’s business that may have contributed to the recent growth trend. These include increased demand for U.S. goods and services from emerging markets, increased production by significant Ex-Im customers, Ex-Im’s outreach efforts to small businesses and key export markets, and Ex-Im’s response to growing competition from foreign ECAs. First, Ex-Im has suggested that demand for U.S. exports, particularly from industrializing emerging markets (such as in Asia), adds to demand for Ex-Im services. For example, as shown previously in figure 3, Ex-Im authorizations in Asia grew more than 275 percent from 2008 through 2012. While Ex-Im activity in some emerging markets did grow in recent years, we did not find evidence of a positive correlation between Ex-Im activity and U.S. exports. For example, Ex-Im’s highest growth came in 2009, when total U.S.
exports and U.S. exports to emerging markets were falling. Second, Ex-Im officials also said that increased production by Boeing, Ex-Im’s primary aircraft exporter, contributed to authorization growth. Officials emphasized that while Ex-Im’s significant activity in the airline sector has contributed to Ex-Im’s recent growth, they did not expect growth in the aircraft sector to continue at the same high rate, because the commercial market for aircraft finance is beginning to recover and implementation of a 2011 international agreement among OECD ECAs may result in ECA financing being less competitive than commercial financing. Nonetheless, for 2012, aircraft represented 33 percent of new authorizations—the single largest industry Ex-Im supports. Third, according to Ex-Im officials, Ex-Im’s efforts to offer small business products and increase awareness of available export financing assistance also may have been a factor in its recent growth. Ex-Im recently launched several new small business products and opened four new regional Export Finance Centers throughout the United States to support small business exporters. These efforts stem from Ex-Im’s mandate to make available at least 20 percent of annual authorizations to small businesses. Ex-Im officials explained that outreach efforts are important because meeting the 20 percent small business requirement has been increasingly difficult as Ex-Im’s overall portfolio has grown. In addition to its small business outreach, Ex-Im has been identifying private-sector and public-sector buyers, financial institutions, and key governmental agencies for each of nine foreign “key markets” where it expects U.S. export growth to be strongest in the near future. In its 2010-2015 strategic plan, Ex-Im cites these country outreach efforts as a way to focus limited Ex-Im resources in areas with high potential for furthering the agency’s mission of supporting exports and the jobs they provide.
Ex-Im selected the key markets based on a number of factors, including the size of the export market for U.S. companies, projected economic growth, anticipated infrastructure demand, and the need for Ex-Im financing. According to Ex-Im officials, some of these efforts already have produced new transactions in markets such as Brazil. Lastly, according to Ex-Im and representatives from industry trade associations and research groups, Ex-Im's activity may continue to grow in response to increased competition from other ECAs, particularly those in non-OECD countries, but none of the other ECAs we spoke with suggested that this was a significant factor. Ex-Im's 2012 Annual Report emphasizes the importance of its role in ensuring that U.S. exporters have a fair opportunity to compete with foreign exporters. While the OECD Arrangement governs various aspects of U.S. and other member ECAs' activities, non-OECD ECAs sometimes offer financing terms more favorable than the terms permitted under the OECD Arrangement. Ex-Im can respond to foreign competition in export financing by notifying OECD that it is meeting terms offered by other ECAs in order to preserve U.S. exporters' competitiveness. But, because these ECAs are not beholden to the transparency requirements of the OECD Arrangement, it can be difficult to confirm the terms and conditions of non-OECD ECA export financing transactions. Some representatives from industry trade associations and research groups we interviewed agreed that Ex-Im's response to competition from non-OECD ECAs, particularly China, may have been, and likely would continue to be, a significant factor in increased Ex-Im activity.
Others, such as officials from the Berne Union (a worldwide association for export credit and investment insurers) noted that while competition based on financing terms might be a significant determinant for a small number of international trade transactions, importers primarily base purchase decisions on the equipment or services that best meet their needs. None of the officials from the four foreign ECAs we interviewed suggested that increased competition was a significant factor in their own or Ex-Im’s activity growth. Ex-Im uses a loss estimation model to estimate credit subsidy costs and loss reserves and allowances. This model accounts for various risks and underwent a major update in 2012, but opportunities exist for additional improvements. Ex-Im’s product fees account for credit and political risk and are guided by international agreements and internal analyses. Ex-Im uses a loss estimation model to build the agency’s credit subsidy estimates in the President’s budget as well as for calculating loss reserves and allowances reported in Ex-Im’s annual financial statements. The model includes quantitative and qualitative factors to account for various risks facing the agency. In 2012, Ex-Im made several adjustments to the model to better account for uncertainty associated with a growing portfolio and changing economic conditions. However, opportunities exist for additional improvements. The model considers historical data, as well as qualitative information, to estimate loss rates on Ex-Im’s transactions—that is, the percentage loss that Ex-Im can expect for each dollar of export financing. 
Mathematically, the loss rate is the probability of default times the loss given default. Ex-Im's current model uses historical information on loan guarantee and insurance transactions authorized from 1994 through 2011. This information includes the default and loss history of those transactions as well as variables that are predictive of defaults and losses, including transaction amount and length, obligor type, product type, and "risk rating"—a numerical risk score that Ex-Im assigns to each transaction. The model calculates a loss rate for each Ex-Im risk rating and product type. As previously noted, risk ratings are a key variable in the loss model. Ex-Im underwriters assign the ratings, which are based on assessments of credit, political, and market risks. The ratings range from 1 (least risky) to 11 (most risky). These risk ratings are determined partly through the Interagency Country Risk Assessment System (ICRAS), a working group that includes Ex-Im and other federal agencies involved in providing international credit. According to Ex-Im, for each country, ICRAS ratings are based on entities' (1) ability to make payments as indicated by relevant economic factors, and (2) willingness to pay as indicated by payment record and political and social factors. There are two types of ICRAS ratings—one for foreign government (sovereign) borrowers and one for private-sector entities in foreign countries. For transactions with foreign governments, Ex-Im officials apply the ICRAS sovereign-risk rating. For transactions with private-sector entities, Ex-Im officials assign risk ratings based on the ICRAS private-sector rating and potentially other information such as obligor financial statements and ratings of the obligors by credit rating agencies. Ex-Im does not assign risk ratings to short-term insurance in multibuyer transactions or to working capital transactions. All long- and medium-term transactions and short-term insurance in single-buyer transactions are assigned risk ratings.
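The loss-rate arithmetic described above can be sketched in a few lines. The probabilities of default (PD) by risk rating and the loss given default (LGD) below are invented placeholders for illustration, not Ex-Im's actual model parameters.

```python
# Illustrative only: hypothetical probabilities of default (PD) by risk
# rating and a hypothetical loss given default (LGD); not Ex-Im's values.
PD_BY_RATING = {3: 0.010, 6: 0.040, 9: 0.120}  # chance a transaction defaults
LGD = 0.45                                     # share of exposure lost on default

def loss_rate(rating, lgd=LGD):
    """Expected loss per dollar financed: probability of default x LGD."""
    return PD_BY_RATING[rating] * lgd

# A hypothetical rating-6 transaction: 4% PD x 45% LGD = 1.8 cents per dollar.
print(round(loss_rate(6), 4))  # 0.018
```

In Ex-Im's model these rates are estimated separately for each risk rating and product type; the sketch simply shows why a worse rating translates directly into a higher expected loss per dollar of financing.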
The loss rates produced by the model are used to estimate future cash flows, which, in turn, are used to determine credit subsidy costs contained in the President’s budget and to calculate loss reserves and allowances reported in Ex-Im’s annual financial statements. To estimate the subsidy costs of future transactions as part of the annual budget process, Ex-Im uses the loss rates to help determine cash inflows (such as repayments, fees, and recoveries) and outflows (such as claims) for the book of business it expects in the upcoming year. Because the cash inflows and outflows occur in the future, they must be discounted to determine their net present values. To do this, OMB guidance requires Ex-Im to enter cash flows into OMB’s credit subsidy calculator, which generates the original credit subsidy cost estimate for that book of business. In accordance with FCRA, the discount rates in the OMB credit subsidy calculator are based on interest rates for U.S. Treasury securities. Ex-Im also uses the estimated future cash flows to calculate loss reserves or allowances—financial reporting accounts for estimated losses—for each transaction at authorization. The total loss reserves and allowances are reported in Ex-Im’s annual financial statements. Each year, Ex-Im adjusts the loss reserve or allowance amount for each transaction using updated estimates of future cash flows, which consider the impact of actual credit performance and estimated changes in future credit performance. In addition to the risks discussed previously, the loss model also accounts for the foreign-currency risk Ex-Im faces from its transactions denominated in a foreign currency. In 2012, Ex-Im authorized about $1.7 billion in guarantees denominated in a foreign currency, or about 5 percent of its total authorizations for that year. According to an Ex-Im official, the loss model uses a methodology that captures the cost of buying a foreign currency at a particular time in the future. 
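The discounting step described above can be illustrated with a minimal present-value calculation. The cash flows and the 3 percent Treasury-based discount rate below are invented; in practice the discounting is performed by OMB's credit subsidy calculator, not by agency code like this.

```python
# Invented net cash flows for one hypothetical book of business, in
# $ millions (inflows such as fees and repayments positive; outflows
# such as claims negative). The 3 percent discount rate is assumed.
cash_flows = {1: 40.0, 2: -15.0, 3: -10.0, 4: 5.0}  # year -> net flow

def net_present_value(flows, rate):
    """Discount each year's net flow back to the authorization date (year 0)."""
    return sum(amount / (1 + rate) ** year for year, amount in flows.items())

npv = net_present_value(cash_flows, 0.03)
# A positive NPV corresponds to a negative credit subsidy: the book is
# estimated to return more than it costs, in present-value terms.
print(round(npv, 2))
```

This is why the choice of discount rate matters: under FCRA the rates come from U.S. Treasury securities, so the same nominal cash flows can produce different subsidy estimates as Treasury rates change.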
Therefore, Ex-Im factors this cost into the credit subsidy cost and the related loss reserve or allowance at the time it authorizes a transaction denominated in a foreign currency and updates it during the reestimate process. In addition, an Ex-Im official told us that Ex-Im adjusts its loss reserves monthly to reflect changes in currency exchange rates. Ex-Im adjusts the loss estimation model annually to enhance the reliability of loss rates used to estimate subsidy costs and calculate loss reserves and allowances. In 2012, Ex-Im made several adjustments both to implement recommendations from external auditors and the Ex-Im IG and to make the model more flexible for the various types of transactions in its portfolio. Among other things, Ex-Im changed how it used its historical dataset and added several qualitative factors. Due to data limitations, Ex-Im's model does not control for the age of transactions in estimating the probability of default, potentially reducing the precision of the estimates. Ex-Im changed how it used the dataset underlying the loss model, which helped to mitigate this limitation. Specifically, to help avoid underestimating the probability of default, Ex-Im removed transactions authorized in 2012 from the dataset because these transactions generally did not have enough time to default. Ex-Im also excluded any long-term transactions that were within 3 years of the obligor's first payment. In addition, Ex-Im removed all transactions that had not been disbursed as of June 30, 2012, because some transactions are never disbursed and therefore never have the opportunity to default. According to Ex-Im, these changes were made so the dataset more accurately reflected the nature of its defaults.
An Ex-Im report on the 2012 changes to the loss model stated that these adjustments increased the percentage of defaulted transactions in the dataset—from 14 percent to 19 percent for loan guarantee transactions and from 11 percent to 14 percent for insurance transactions. From a dollar perspective, the adjustments increased the loss rate for transactions in the dataset from 0.9 percent to 1.2 percent for loan guarantees and from 6.2 percent to 6.9 percent for insurance transactions. Consistent with audit recommendations and industry best practices, Ex-Im also incorporated five qualitative factors into the loss model in 2012 to adjust for circumstances that may cause estimated losses to differ from historical experience. Ex-Im added these factors in recognition of the substantial growth in Ex-Im's portfolio in recent years and of the potential differences between its historical loss experience—on which the quantitative part of the model is based—and future loss experience. According to Ex-Im, the five qualitative factors enhance the reliability of the model by better accounting for uncertainty in loss expectations. Four of the five qualitative factors Ex-Im added to the model in 2012 increased the estimated loss rate, and therefore increased the related loss reserve and allowance amounts for some transactions. The five qualitative factors are as follows: Minimum loss rate. Ex-Im established minimum loss rates for products that historically had very low losses and therefore would have very low estimated loss rates based solely on historical data. According to Ex-Im, it added this factor to recognize that although some segments of the data may have low (or zero) historical loss rates, Ex-Im should not forecast zero losses in the future. The minimum loss rates affected long-term sovereign and other public-sector transactions with good risk ratings and some short-term insurance transactions with good risk ratings.
Ex-Im’s 2012 report on the loss model stated that the addition of this qualitative factor increased loss reserves and allowances by 2 percent. Global economic risk. This factor attempts to account for some market risks associated with changes in international economic and business conditions that may affect Ex-Im’s portfolio and make future losses differ from historical losses. First, Ex-Im uses a 1-year forecast from Moody’s of default rates on speculative-grade corporate bonds to predict an Ex-Im default rate.Im’s historical default experience. If the estimated default rate is greater than Ex-Im’s historical experience, Ex-Im increases its loss estimate in proportion to the difference between the estimated default rate and its historical experience. For 2012, this factor did not result in an adjustment to Ex-Im’s loss model. Ex-Im then compares this rate with Ex- Portfolio concentration risk, including the three factors of region concentration, industry concentration, and obligor concentration in the aircraft portfolio. Ex-Im added these three qualitative factors to adjust loss rates to account for uncertainties associated with growing concentrations in its portfolio. Conceptually, the region and industry concentration factors treat each growing region and industry as if it were an entity that was issuing debt—making the entity more risky and potentially lowering its credit rating. Ex-Im used certain credit rating agency methodologies to develop synthetic ratings for each “entity” and used these ratings to adjust loss rates for transactions in the corresponding regions or industries. Ex-Im’s 2012 report on the loss model stated that the addition of the region concentration factor and the industry concentration factor increased loss reserves and allowances by 7 percent and 13 percent, respectively. Ex-Im also developed a concentration factor for obligors in the aircraft sector, which accounted for 46 percent of Ex-Im’s exposure at the end of 2012. 
For example, this factor increases the loss rate for an aircraft transaction if the estimated loss given default (based only on the market value of the aircraft and the transaction amount) is larger than the loss given default predicted by the model (which incorporates other factors). Ex-Im indicated that the addition of the obligor concentration factor increased loss reserves and allowances by 0.8 percent. Notwithstanding recent enhancements to the loss estimation model, opportunities exist for additional improvements to the model, as well as Ex-Im's model development and validation processes. Ex-Im's independent financial statement auditor and Ex-Im's IG have made recent recommendations designed to improve Ex-Im's loss modeling process. In conjunction with its audit of Ex-Im's 2012 financial statements, Ex-Im's independent financial statement auditor reviewed the loss model and found it to be reasonable overall. However, the auditor recommended additional improvements to Ex-Im's loss modeling process, including (1) considering enhancements to the adjustment for global economic risk by using economic data or related indicators that would better predict the overall impact to the portfolio; (2) conducting analysis to determine events that trigger defaults; (3) identifying and monitoring industry-specific drivers of risk; and (4) having an independent department or service provider test the accuracy of the model. Ex-Im officials stated they would take these recommendations into consideration as they update the model for the next fiscal year. Additionally, in its September 2012 report, the Ex-Im IG recommended that Ex-Im design and implement a formal governance framework that defines roles and responsibilities for financial models and includes policies and procedures for validating models. Ex-Im agreed with the recommendation and noted that it has begun developing a formal governance framework for financial models.
Also, Ex-Im said that it will conduct external validations of future financial models. As previously discussed, Ex-Im incorporated a qualitative factor into the model to adjust the loss estimates to account for uncertainty related to potential changes in global economic conditions. The factor uses a 1-year forecast of bond defaults to make this adjustment. According to Ex-Im, the bond default rates originally forecasted each year correlated with Ex-Im's observed default rates between 1994 and 2011. In addition, Ex-Im officials said the 1-year forecast was appropriate because Ex-Im will use subsequent 1-year forecasts in annual updates to the loss model. However, a 1-year forecast may not capture the uncertainty associated with Ex-Im's longer-term transactions, and the use of subsequent short-term forecasts does not address this limitation. FASAB guidance for federal credit agencies states that agencies should develop cash flow projections for their transactions based upon the best available data. One-year forecasts may not represent the best available data for transactions that span multiple years. As a result, Ex-Im may not be making the appropriate adjustment to the estimated future losses, which could lead to underestimation of loss rates, credit subsidy cost estimates, and the related loss reserves and allowances for financial reporting purposes. Ex-Im's fees for medium- and long-term products account for the credit and political risk associated with each transaction and are guided in large part by the OECD Arrangement, which establishes guidelines for determining "minimum premiums"—fees to cover the risk of not being repaid—and minimum interest rates that participant ECAs charge. Ex-Im officials told us that the "exposure fee" they charge is generally the minimum premium required by the OECD Arrangement, but that the OECD Arrangement allows them to increase this fee if they deem that the minimum premium does not cover the risk of a transaction.
Ex-Im also charges the minimum interest rate required by OECD, but can charge above that rate. Ex-Im's pricing structure for medium- and long-term products (about 85 percent of Ex-Im's exposure) includes the following: Exposure fees. These fees cover the credit and political risks associated with a direct loan, loan guarantee, or insurance transaction. Ex-Im generally sets these fees at the level of the OECD minimum premium. Commitment fees. These fees are a flat percentage per year of the undisbursed portion of a direct loan or loan guarantee that Ex-Im charges to encourage the obligor's use of the credit. These fees are not meant to cover the risk of nonrepayment and are not guided by the OECD Arrangement. Interest rates on direct loans. The OECD Arrangement specifies a minimum interest charge of 1 percentage point above the U.S. Treasury rate for a security of comparable length. To determine the OECD minimum premium for a direct loan, loan guarantee, or insurance transaction, Ex-Im must take several aspects of the transaction into account, including the following: Obligor's country. OECD established a system for classifying the risk associated with transactions in different countries. OECD classifies the countries using a scale from 0 (least risky) to 7 (most risky), and Ex-Im applies the relevant country classification for each transaction. These classifications take into account risks associated with a country's financial, economic, and political situation, as well as the historical payment experience of ECAs that are members of OECD and that have provided credit in the country. Obligor's credit risk. OECD established a framework for classifying obligors based on credit risk and provides guidelines to assist participant ECAs in doing so. Using the guidelines, Ex-Im places each obligor into one of eight classifications, which range from better-than-sovereign credit quality (least risky) to weak credit quality (most risky).
Ex-Im uses the obligor’s recent financial information and other information, such as the obligor’s industry position and ratings by credit rating agencies, to assign a credit risk classification. Other characteristics affecting the risk of nonrepayment. In determining minimum premiums, Ex-Im also must take into account the type of export financing product, the length of the transaction, and the percentage of the overall credit amount for which Ex-Im is responsible in the event of nonpayment. Additionally, the application of risk-mitigation techniques, such as obligor credit enhancements, reduces the minimum premium. For specific types of medium- and long-term transactions, different procedures apply. For example, for obligors in high-income OECD countries, high-income euro area countries, or countries with an OECD country risk classification of 0, the OECD Arrangement requires ECAs to set fees based on available market information and the characteristics of the underlying transaction. This is achieved by using prices of certain comparable private-sector products, or “market benchmarks,” to help set fees. The OECD Arrangement specifies seven products that participant ECAs may use for this purpose, including certain corporate bonds and certain credit default swaps. According to OECD, the level of country risk is considered negligible for these countries, and the credit risk associated with transactions in these countries is predominantly related to the credit risk of the obligor. In addition, Ex-Im sets fees for its aircraft transactions according to a separate OECD agreement, updated in 2011, This agreement provides guidance on specifically for the aircraft sector.the commitment fees to be charged in aircraft transactions. According to Ex-Im, when the updated agreement is fully implemented in 2013, Ex-Im’s fees for these transactions will rise substantially. 
The fee structures for Ex-Im’s short-term products are not covered by the OECD Arrangement or any other international agreements and differ by export financing product, as follows: Working capital. Ex-Im generally charges a fee of 1.75 percent of the direct loan or loan guarantee amount. Ex-Im does not factor political risk into its fees for this product because the obligors are U.S. exporters. Ex-Im also generally does not differentiate between the credit risk of different obligors. Short-term insurance. Ex-Im’s fees include a premium that is based on the length of the credit, the type of entity purchasing the export (i.e., a foreign government, financial institution, or nonfinancial institution), and the OECD country risk classification for the country of the obligor. Some short-term insurance programs also factor into the premium amount the credit risk of the obligor. In 2011, Ex-Im conducted internal analyses to help ensure that the fees it charges are sufficient to cover losses. For instance, Ex-Im officials told us that in 2011 they determined that the credit subsidy rate for the working capital program was positive by 9 basis points (0.09 percent), indicating that fee levels for this program were not sufficient to cover losses. As a result, in 2012 Ex-Im raised fees for the working capital program from 1.5 percent to 1.75 percent (or 25 basis points) of the direct loan or loan guarantee amount to avoid the need for an appropriation to cover the credit subsidy costs. Similarly, Ex-Im officials said the 2011 analysis showed that one of Ex-Im’s short-term insurance products had a positive subsidy cost. In response, Ex-Im implemented a more risk-based fee structure to increase fees and make the product credit subsidy cost- neutral. Ex-Im officials said that they will conduct similar product analyses on an annual basis. 
Whether recent fee changes will avoid the need for a future credit subsidy will depend on the extent to which future losses are consistent with Ex-Im’s historical experience. Ex-Im calculates and reports default rates for its portfolio, but it has not maintained data useful for assessing the performance of newer books of business. Ex-Im has been self-sustaining for appropriations purposes since 2008, but its long-term budgetary impacts are uncertain. As of December 31, 2012, Ex-Im reported a default rate for its active portfolio of 0.34 percent. Ex-Im defines the active portfolio as those transactions for which the maturity date has not been reached or that have reached maturity but are still within the time frame during which a claim can be submitted. Ex-Im calculates the default rate as the sum of net claims paid on loan guarantees and insurance transactions and unpaid past due installments on direct loans divided by disbursements. Ex-Im’s default rate declined steadily from about 1.6 percent as of September 30, 2006, to just under 0.3 percent as of September 30, 2012, before edging up slightly by the end of the calendar year. However, this downward trend should be viewed with caution because Ex-Im’s portfolio contains a large volume of recent transactions that have not reached their peak default periods. Recent transactions have had limited time to default and may not default until they are more seasoned. For example, according to Ex-Im, the peak default period for long-term loan guarantees—which represent almost 57 percent of Ex-Im’s 2012 exposure—is about 3.9 years after authorization. As of the end of 2012, about 53 percent of Ex-Im’s active long-term guarantees (in dollar terms) had been authorized within the last 4 years. Therefore, the ultimate impact of Ex-Im’s recent business on default rates is not yet known. As of December 31, 2012, Ex-Im’s reported default rate varied by product type, region, and industry. 
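Ex-Im's stated default-rate formula can be written out directly. The dollar figures below are invented round numbers chosen to reproduce a 0.34 percent rate for illustration; they are not Ex-Im's actual claims or disbursement totals.

```python
# Sketch of the default-rate formula described above, with invented figures
# in $ billions: (net claims paid on loan guarantees and insurance + unpaid
# past-due installments on direct loans) divided by disbursements.
def default_rate(net_claims_paid, unpaid_past_due, disbursements):
    return (net_claims_paid + unpaid_past_due) / disbursements

# e.g., $0.30B in claims and $0.04B past due on $100B disbursed -> 0.34%.
print(f"{default_rate(0.30, 0.04, 100.0):.2%}")  # 0.34%
```

Note the denominator: because disbursements accumulate quickly in a growing portfolio while defaults take years to appear, this ratio can understate the eventual performance of recent business, which is the caution the text raises.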
For example, default rates were 0.14 percent for short-term products (working capital loan guarantees and insurance), 7.50 percent for medium-term products (direct loans, loan guarantees, and insurance), and 0.20 percent for long-term products (direct loans and loan guarantees). Among all products, the default rate ranged from a low of 0.07 percent for working capital loan guarantees to a high of 8.74 percent for medium-term insurance. Across regions, default rates ranged from 0.002 percent in Oceania to 0.58 percent in Asia. Across Ex-Im's largest industry sectors, default rates ranged from 0 percent in oil and gas to 0.71 percent in manufacturing. A technique called vintage analysis is useful for examining the performance of growing portfolios, but Ex-Im has not maintained the data necessary to conduct such analysis. Vintage analysis separates and compares the performance of seasoned cohorts and newer cohorts at comparable points in time (for example, a certain number of years after authorization). This technique can help evaluate the credit quality of recent business by comparing the early performance of these cohorts with the early performance of older cohorts. As such, it can provide early warning of potential performance problems in newer business. Federal banking regulator guidance suggests that banks conduct vintage analysis to help manage growing portfolios. For example, the Federal Deposit Insurance Corporation's Risk Management Manual of Examination Policies states that loan review and monitoring analysis should consider the effects of portfolio growth and seasoning and that vintage analysis can be used to do this. In addition, interagency guidance from federal banking regulators states that reporting from management information systems should include vintage analysis and that such analysis helps management understand historical performance trends and their implications for future default rates.
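The mechanics of vintage analysis are simple to sketch: retain each cohort's performance history and compare cohorts at the same age. All figures below are invented for illustration.

```python
# Minimal sketch of vintage analysis: compare each cohort's cumulative
# default rate at the same age (years since authorization). Figures invented.
vintages = {  # authorization year -> cumulative default rate at ages 1, 2, 3
    2008: [0.001, 0.004, 0.007],
    2009: [0.002, 0.005, 0.009],
    2010: [0.004, 0.010],  # newer cohort: fewer observation points so far
}

def compare_at_age(vintages, age):
    """Cohorts' cumulative default rates at the same seasoning point."""
    return {year: rates[age - 1]
            for year, rates in vintages.items() if len(rates) >= age}

# At age 2, the 2010 cohort is defaulting faster than older cohorts did at
# the same point, the kind of early-warning signal the technique provides.
print(compare_at_age(vintages, 2))
```

The prerequisite, and the gap the report identifies, is the retention of point-in-time snapshots: if each quarterly snapshot overwrites the last, the per-age history needed for this comparison is lost.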
Although Ex-Im information systems produce quarterly performance snapshots of individual cohorts, the systems overwrite the snapshots with each quarterly update, according to Ex-Im officials. Because Ex-Im has not retained historical cohort-level performance data, it is unable to compare the performance of different cohorts at comparable points in time. Ex-Im officials said that they use several tools to provide early warning of performance problems, including monitoring individual transactions of more than $1 million, maintaining an Obligors of Concern List, and analyzing monthly and annual trends in claims. However, by not maintaining the information necessary to conduct vintage analysis, Ex-Im's ability to understand the early performance of recent cohorts and implications of this performance on future default rates may be limited. Additionally, as previously noted, the lack of point-in-time performance data may reduce the precision of Ex-Im's loss estimation model. Another measure of portfolio performance is the proportion of credit-impaired (impaired) assets to Ex-Im's total exposure. Ex-Im defines impaired assets as delinquent direct loans, loan guarantees, and claims with an amount of $50,000 or more past due at least 90 days; rescheduled direct loans, loan guarantees, and claims; or nondelinquent direct loans, loan guarantees, and claims above a certain risk rating. A substantial portion of Ex-Im's impaired assets are from transactions that preceded the implementation of credit reform in 1992. For example, from 2008 through 2012, pre-credit reform transactions accounted for about 50 to 60 percent of impaired assets each year. As a percentage of total exposure, Ex-Im's impaired assets generally declined over that period (see fig. 7). In 2008, Ex-Im had about $3.4 billion in impaired assets, which represented approximately 6 percent of total exposure at that time. In 2010, the corresponding figures were about $4.4 billion and 5.8 percent.
In 2012, impaired assets were approximately $2.6 billion, or about 2.5 percent of Ex-Im's total exposure for that year. Again, the trend in this performance measure should be interpreted cautiously, because Ex-Im's portfolio was growing during this period, which resulted in more of its portfolio being of recent vintage. Ex-Im has been self-sustaining since 2008. Each year, Ex-Im is appropriated a specified amount of funds for administrative costs and credit subsidy costs. However, since 2008, appropriation acts have required Ex-Im to repay appropriated funds dollar-for-dollar with offsetting collections so that the result is a net-zero appropriation. Ex-Im's offsetting collections are generated by transactions that are initially estimated to result in negative credit subsidies when fees collected from obligors are estimated to be greater than estimated losses (net of recoveries). For example, for 2012, Ex-Im was appropriated about $90 million for administrative costs and $58 million for credit subsidy costs and also authorized to retain up to $50 million in offsetting collections. That year, Ex-Im generated about $1 billion in collections. With these funds, Ex-Im reimbursed Treasury for the appropriation of administrative costs. In addition, Ex-Im retained $108 million—the $58 million for credit subsidy costs plus the $50 million in retained offsetting collections—for obligations occurring within the next 3 years. Unlike the administrative costs appropriation, which Ex-Im must repay in the same year as received, Ex-Im has 3 years to repay the credit subsidy appropriation and obligate the $50 million it retained in offsetting collections. The remaining collections, roughly $800 million, were sent to Treasury. According to Ex-Im, since the implementation of FCRA, it has sent about $5.8 billion more to Treasury than it has received in appropriations. From 1992 through 2012, Ex-Im was appropriated about $9.8 billion for credit subsidy costs and administrative costs.
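The 2012 flow of funds described above reconciles using the round figures from the text (all in millions of dollars):

```python
# Checking the 2012 net-zero-appropriation arithmetic with the round
# figures from the text (in $ millions).
collections = 1000           # offsetting collections generated that year
admin_appropriation = 90     # repaid to Treasury in the same year
subsidy_appropriation = 58   # repayable within 3 years
retained_collections = 50    # authorized retention, obligable within 3 years

retained_for_obligations = subsidy_appropriation + retained_collections
sent_to_treasury = collections - admin_appropriation - retained_for_obligations

print(retained_for_obligations)  # 108
print(sent_to_treasury)          # 802 -> the "roughly $800 million" remainder
```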
Over the same period, Ex-Im sent about $15.6 billion to Treasury as a result of credit subsidy reestimates ($12 billion), cancelled authority ($1.6 billion), returned collections ($1.3 billion), and rescissions ($675 million). Much of the $5.8 billion net return to Treasury occurred from 2008 through 2012. We determined that Ex-Im's figures for appropriations received and amounts sent to Treasury were reasonable based on our analysis of Ex-Im appropriations acts, budget appendixes, and financial statements from 1992 through 2012. The downward reestimates in the early- to mid-2000s were due primarily to a switch from standard loss rates prescribed by OMB to loss rates that reflected Ex-Im's historical experience, which tended to be lower. Ex-Im officials attributed the upward reestimates in 2010 to changes they made that year in their loss estimation model to account for increased loss experience in 2009 and uncertainty stemming from the global financial crisis. Ex-Im officials said that the upward reestimates for 2011 and 2012 for direct loans stemmed from declines in obligor interest rates, which reduce estimated cash flows. In addition, Ex-Im officials said they expected further upward reestimates due to modeling changes they made in 2012, including the addition of the qualitative factors discussed previously. These modeling changes will be reflected in the subsidy estimates and reestimates in the 2014 budget. The extent to which Ex-Im will continue to send more funds to Treasury than it receives in appropriations and permanent, indefinite budget authority will depend partly on future credit subsidy reestimates. Credit subsidy estimates are based, in part, on economic assumptions that are uncertain and can change from year to year. In addition, the estimates are developed using Ex-Im's loss estimation model, which is not intended to capture the impact of unexpected economic scenarios that could substantially affect Ex-Im's losses.
Therefore, changes in underlying assumptions or adverse economic events could result in upward subsidy reestimates that may require drawing on permanent and indefinite budget authority.

Ex-Im uses a number of risk-management techniques throughout the different stages of a transaction, which include underwriting, monitoring and restructuring, and claims and recovery. In January 2013, Ex-Im completed a comprehensive revision of its policies and procedures manual that covers each stage.

Ex-Im manages risks through the underwriting process in several ways. First, Ex-Im produces a Country Limitation Schedule (CLS) that specifies the types of transactions eligible for financing in each country and the conditions under which they are eligible. For example, in some countries, Ex-Im will not provide financing because the credit and political risks are deemed to be too high or because of legal prohibitions. In countries where Ex-Im does business, Ex-Im may only provide financing for transactions of certain durations or for either public- or private-sector borrowers. Ex-Im has basic eligibility requirements for obligors. For example, an obligor must not have been suspended or debarred from doing business with the U.S. government and may be required to have been in the same line of business for a specified number of years. Requirements for obligors also vary by product type and transaction length.

For transactions that meet CLS and eligibility requirements, Ex-Im assigns a risk rating used to determine whether there is a reasonable assurance of repayment. As previously discussed, the ratings range from 1 (least risky) to 11 (most risky). For transactions conveying the full faith and credit of a foreign government, Ex-Im officials apply the ICRAS sovereign risk rating. ICRAS ratings for sovereign obligors are based on macroeconomic indicators, such as indebtedness levels, balance-of-payments factors, and political and social factors.
For most private-sector transactions, Ex-Im officials use the private-sector ICRAS rating as a baseline and adjust that rating depending on their assessment of the obligor’s creditworthiness and other factors. ICRAS ratings for private-sector transactions in a country are based on qualitative and quantitative assessments of the depth of private-sector business activity in a country, the strength of private-sector institutions, foreign exchange availability, political stability, and other factors. Ex-Im officials assess obligors’ creditworthiness by reviewing information including financial statements and corporate credit ratings. For more complex transactions, Ex-Im considers additional information to develop the risk rating. For example, for project finance transactions, Ex-Im considers the allocation of risk among project participants, the financial strength of the project, and market pricing of project inputs and outputs. Ex-Im generally does not authorize transactions with risk ratings over 8.

In addition to the CLS and risk rating, Ex-Im uses other processes, standards, and conditions in underwriting transactions. Examples of these include the following:

Due diligence process. Ex-Im reviews information related to the integrity of the transaction and the character and reputation of the participants. For example, Ex-Im determines whether it has had adverse prior experience with a participant or if the participant presents a risk due to poor references or investigations by local legal or regulatory authorities.

Collateral standards. As applicable, Ex-Im requires assets to secure the transactions and prefers the asset value to exceed the loan value in most transactions. For example, working capital loan guarantees must be secured by raw materials, finished goods, accounts receivable, or other specified assets.
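The underwriting screens described above can be pictured as a simple gate. In the sketch below, the CLS table, the transaction fields, and the helper function are hypothetical illustrations; only the 1-to-11 rating scale and the rule that ratings over 8 are generally not authorized come from the text:

```python
# Illustrative sketch of the underwriting screens described above.
# The CLS entries and thresholds here are invented; the 1-11 rating
# scale and the rating-over-8 cutoff are as reported.

# Hypothetical Country Limitation Schedule: which sectors are open and
# the maximum transaction length (years) supported in each country.
CLS = {
    "CountryA": {"open_sectors": {"public", "private"}, "max_term_years": 10},
    "CountryB": {"open_sectors": {"public"}, "max_term_years": 7},
    # Countries absent from the schedule are ineligible for financing.
}

def screen_transaction(country, sector, term_years, debarred, risk_rating):
    """Return (authorized, reason) for a proposed transaction."""
    entry = CLS.get(country)
    if entry is None:
        return False, "country not open on CLS"
    if sector not in entry["open_sectors"]:
        return False, "sector not eligible in this country"
    if term_years > entry["max_term_years"]:
        return False, "transaction term exceeds CLS limit"
    if debarred:
        return False, "obligor suspended or debarred"
    if risk_rating > 8:   # ratings run 1 (least risky) to 11 (most risky)
        return False, "risk rating over 8; generally not authorized"
    return True, "meets screening criteria"

print(screen_transaction("CountryA", "private", 8, False, 5))
print(screen_transaction("CountryA", "public", 8, False, 9))
```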
Additionally, each Ex-Im aircraft transaction is secured not only by the aircraft being financed under that transaction, but also by any other aircraft Ex-Im is currently financing for the obligor.

Risk-sharing conditions. These conditions require lenders and exporters to share a percentage of the credit risk with Ex-Im. For example, for working capital loan guarantees, Ex-Im guarantees 90 percent of the principal and interest on a loan issued by a private lender. In the event of a claim, Ex-Im reimburses the lender for 90 percent of both the outstanding principal balance of the loan and accrued interest, and the lender is responsible for the remaining 10 percent.

Ex-Im monitors the performance of all medium-term direct loan, loan guarantee, and insurance transactions and long-term direct loan and loan guarantee transactions above $1 million to help contain risk. Ex-Im conducts ongoing reviews of these transactions to identify and address any deterioration in credit quality before the obligor defaults. This includes assessment of the operating environment and financial condition of the obligor to determine whether there have been changes that might increase or decrease credit risk. Ex-Im updates a transaction’s risk rating at least annually to reflect any changes in credit risk, which, in turn, affects the estimated credit subsidy cost and loss reserve or allowance associated with the transaction. Specific monitoring activities include evaluating the capacity of obligors to repay their debts, reviewing the value of pledged collateral, staying abreast of actions by the obligor to respond to adverse market changes, and conducting on-site visits at crucial project milestones (as applicable).

Through the monitoring process, Ex-Im develops a Watch List, which tracks transactions that show signs of impairment, and an Obligors of Concern List, which tracks transactions that are impaired. These transactions are subject to more frequent monitoring than other transactions.
In addition, monitoring staff share these lists with the Office of the Chief Financial Officer and other senior management to keep them informed of emerging credit issues. According to Ex-Im, no lenders failed the 156 examinations conducted from 2008 through 2012. Further, in 2012, Congress directed Ex-Im to improve and clarify its due diligence procedures. In response, Ex-Im officials said they expected to have the revised procedures completed by the summer of 2013.

Ex-Im restructures transactions with credit weaknesses to help prevent defaults and increase recoveries on transactions that do default. According to Ex-Im, restructuring can involve substantial revision of transaction terms and conditions. For example, in 2012, Ex-Im restructured a defaulted project finance transaction into a direct loan with the implicit backing of a foreign government. Restructuring can also involve the addition of credit enhancements such as extra collateral or third-party guarantees. According to Ex-Im, the agency restructures as many as eight transactions per year. According to Ex-Im officials, the agency is developing a dedicated restructuring team to help reduce the workload of staff currently responsible for both monitoring and restructuring tasks. In addition, they indicated that restructuring staff inform underwriting staff of trends in credit deteriorations or problems with particular borrowers to help ensure that any lessons learned are applied to future transactions.

Ex-Im pays claims when a loan that it has guaranteed or an insurance policy that it has issued defaults. Ex-Im tries to minimize losses on claims paid by pursuing recoveries. For example, Ex-Im takes steps to collect on the assets of the obligors, which can include the collateral backing a transaction. For all products combined, Ex-Im’s recovery rate—the total amount recovered divided by the total amount of claims paid plus recovery expenses—was about 50 percent on average from 1994 through 2012.
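The recovery rate defined above (total recovered divided by total claims paid plus recovery expenses) is simple to compute. In this sketch the dollar amounts are invented; only the formula and the roughly 50 percent long-run average come from the text:

```python
# Recovery rate as defined above: total recovered divided by
# total claims paid plus recovery expenses. Dollar amounts are invented.

def recovery_rate(recovered, claims_paid, recovery_expenses):
    return recovered / (claims_paid + recovery_expenses)

# Hypothetical example: $48M recovered on $90M of claims paid,
# with $6M spent pursuing recoveries.
rate = recovery_rate(48_000_000, 90_000_000, 6_000_000)
print(f"{rate:.0%}")   # 50%, in line with the reported long-run average
```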
In addition, when Ex-Im pays a claim for a loan guarantee that is denominated in a foreign currency, Ex-Im manages its foreign-currency risk by purchasing the foreign currency to pay the claim to the lender and then seeking recovery on the U.S. dollar equivalent, which represents the obligor’s debt obligation. This policy effectively shifts the foreign-currency risk from Ex-Im to the obligor after a claim has been paid.

In September 2012, the Ex-Im IG issued a report on Ex-Im’s management of risk at the overall portfolio level. On the basis of industry best practices, the report made a number of recommendations to improve Ex-Im’s portfolio management in areas such as stress testing, portfolio concentrations, and risk governance. Our review of federal internal control standards and industry practices suggests that the IG’s recommendations in these areas represent prudent risk-management techniques. Ex-Im has begun to implement some of the IG’s recommendations and is in the process of analyzing others to determine their applicability to Ex-Im and the risk-management benefits that could be gained from them.

The Ex-Im IG recommended that Ex-Im develop a systematic approach to stress testing its portfolio that would be conducted at least annually as part of the process for reestimating credit subsidies. A stress test is a “what-if” scenario that is not a prediction or expected outcome of the economy. Stress testing is one tool to measure the vulnerability of portfolios to unexpected losses—that is, losses associated with extreme yet plausible events. The IG stated that in light of concentrations in Ex-Im’s portfolio, stress testing would provide Ex-Im information on how its portfolio would react to shocks in financial markets. Ex-Im agreed to implement this recommendation. Stress testing is consistent with our internal control standards and industry practices.
For example, our internal control standards state that agencies should have adequate mechanisms to identify risks arising from external factors and analyze the possible effects of these risks. In addition, in its best practices manual on credit portfolio management, the International Association of Credit Portfolio Managers (IACPM) states that institutions should conduct stress testing to inform management about the portfolio’s vulnerabilities and to establish the portfolio’s sensitivity to risk factors. Similarly, guidance from regulators of federal financial institutions notes that the recent financial crisis underscored the need for banking organizations to incorporate stress testing into their risk-management practices. Furthermore, the foreign ECAs and U.S. federal credit agencies with which we spoke conduct stress testing on their portfolios. For example, officials from one ECA told us that they conduct stress tests every 6 months using scenarios related to current world issues to determine the impact those scenarios would have on obligors.

Ex-Im officials stated that they have conducted ad hoc stress tests in the past, but have been developing a systematic approach. This approach will involve assessment of (1) how the entire portfolio or portions of the portfolio would be affected by extreme economic events and (2) the impact that particular adverse scenarios may have on specific obligors. Ex-Im officials told us that they will first stress test the aircraft portfolio, which accounts for about 50 percent of the agency’s exposure. According to Ex-Im, the stress test results will be included in a quarterly internal report on the financial status of Ex-Im’s portfolio. Ex-Im officials stated that the results of the stress testing will be used to inform the loss modeling process and will be used by senior management in making decisions about the agency’s resource allocations and strategic planning efforts.
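One way to picture the kind of "what-if" portfolio stress test described above is to apply stressed loss rates to exposure segments and compare the result with baseline expected losses. The segments, exposures, and shock multipliers below are invented for illustration; this is not Ex-Im's or the IG's actual methodology:

```python
# Illustrative "what-if" stress test: apply stressed loss rates to
# portfolio segments and compare against baseline expected losses.
# All segments, exposures (in $ millions), and rates are hypothetical.

portfolio = {                  # segment: (exposure, baseline loss rate)
    "aircraft":        (50_000, 0.010),
    "project finance": (20_000, 0.020),
    "working capital": (10_000, 0.015),
}

# A stress scenario expressed as multipliers on baseline loss rates,
# e.g., a severe downturn hitting aviation hardest.
scenario = {"aircraft": 4.0, "project finance": 2.5, "working capital": 2.0}

def expected_loss(multipliers=None):
    total = 0.0
    for segment, (exposure, rate) in portfolio.items():
        m = 1.0 if multipliers is None else multipliers[segment]
        total += exposure * rate * m
    return total

baseline = expected_loss()           # 500 + 400 + 150 = 1050
stressed = expected_loss(scenario)   # 2000 + 1000 + 300 = 3300
print(f"baseline ${baseline:,.0f}M, stressed ${stressed:,.0f}M")
```

The gap between the stressed and baseline figures is the kind of unexpected-loss information a stress test is meant to surface for management.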
Ex-Im officials also indicated that they intend to share their stress testing and loss modeling methodologies with other federal credit agencies so that others may benefit from Ex-Im’s efforts.

Ex-Im has not yet made plans to report its stress scenarios and stress test results to Congress. Such reporting could help Congress oversee Ex-Im’s activities by providing additional information on Ex-Im’s risk exposure. Through provisions in the Export-Import Bank Reauthorization Act of 2012, Congress has required Ex-Im to provide analysis of the agency’s default rates and risk of loss associated with its increased exposure limits. Information on Ex-Im’s stress testing would complement that analysis by disclosing the magnitude of losses that Ex-Im could face under adverse scenarios. Additionally, reporting such information would be consistent with our internal control standards, which indicate that communications with external parties, including Congress, should provide information that helps them better understand the risks facing the agency.

As previously discussed, Ex-Im’s portfolio is concentrated in certain industries, regions, and obligors. These concentrations expose Ex-Im to the risk associated with negative events in those market segments. In light of these concentrations, the Ex-Im IG recommended that Ex-Im implement “soft portfolio concentration sublimits”—that is, informal thresholds for the portion of total exposure within different segments of the portfolio. The IG recommended that Ex-Im set the soft portfolio sublimits by industry, geography, or transaction risk rating and use them as internal guidance to inform future pricing and portfolio risk-management decisions (e.g., ways to diversify the portfolio).
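A soft sublimit of the kind the IG recommends could be monitored along the following lines. The segment names, exposure figures, and thresholds are invented; "soft" means a breach prompts review and informs pricing and diversification decisions rather than automatically blocking new business:

```python
# Illustrative monitoring of "soft" portfolio concentration sublimits:
# informal thresholds on each segment's share of total exposure.
# Segment names, exposures ($ millions), and thresholds are hypothetical.

exposures = {"aircraft": 53_000, "oil and gas": 11_000,
             "power": 7_000, "other": 35_000}
soft_sublimits = {"aircraft": 0.40, "oil and gas": 0.15,
                  "power": 0.10, "other": 1.00}   # max share of total

def flag_breaches(exposures, sublimits):
    """Return segments whose exposure share exceeds the soft sublimit.

    Breaches feed pricing and diversification decisions rather than
    hard-stopping new authorizations (hence "soft").
    """
    total = sum(exposures.values())
    return {seg: round(exp / total, 3)
            for seg, exp in exposures.items()
            if exp / total > sublimits[seg]}

print(flag_breaches(exposures, soft_sublimits))
```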
According to IG officials, the establishment of soft portfolio sublimits (as opposed to hard limits) would help Ex-Im manage portfolio concentrations without restricting its ability to meet exporters’ demand for financing or adversely affecting Ex-Im’s competitiveness with other ECAs.

Portfolio sublimits represent one technique for managing a “risk appetite”—that is, the amount of risk an institution is willing to accept. Setting a risk appetite is consistent with our internal control standards, which state that agencies should develop an approach for risk management based on how much risk can be prudently accepted. Additionally, industry best practices identified by the Committee of Sponsoring Organizations of the Treadway Commission (COSO), the Institute of International Finance (IIF), and IACPM cite the establishment of risk appetite, including through portfolio sublimits, as a sound risk-management practice. Some organizations with which we spoke, including the Overseas Private Investment Corporation (OPIC) and two foreign ECAs, set a risk appetite by establishing limits on the volume of financing they provide to different industries, countries, or obligors. Setting a risk appetite can help senior management determine the point at which the institution’s exposure has reached a level that may require implementation of additional risk controls.

As of December 2012, Ex-Im had not established soft portfolio sublimits. However, Ex-Im officials said that they were evaluating whether this practice was suitable for the agency in light of potential implications for Ex-Im’s ability to meet client demands and competitiveness with other ECAs. Given the potential benefits of this risk-management practice, following through on this evaluation will be important for Ex-Im. Furthermore, if it determines that soft portfolio sublimits are appropriate, following industry guidance for setting a risk appetite will also be important.
For example, guidance issued by COSO in January 2012 states that in developing a risk appetite an institution should consider its existing risk profile (current level and distribution of risks); risk capacity (the amount of risk that an organization is able to support); risk tolerance (the acceptable level of variation an organization is willing to accept); and stakeholders’ attitudes towards growth, risk, and return.

In its September 2012 report, the Ex-Im IG also stated that Ex-Im’s risk governance structure was not commensurate with the size, scope, and strategic ambitions of the institution. Among other things, the IG noted that Ex-Im lacked an official responsible for managing the full spectrum of risks facing the agency and developing risk-management strategies. The IG recommended that Ex-Im create the position of chief risk officer (CRO) to oversee the design and implementation of an enterprisewide risk-management function.

Industry best practices and corporate governance principles of the Basel Committee on Banking Supervision highlight the importance of having focal points for all the activities required to manage enterprisewide risks. For example, best practices published by IIF state that financial firms should assign responsibility for risk management to an officer at a senior level, in most cases a CRO. The Global Association of Risk Professionals has indicated that the typical roles of a CRO include establishing risk-management policies and procedures consistent with entitywide policies, reviewing and approving models used for pricing and risk measurement, measuring risk on a global basis as well as monitoring exposures and changes in risks, and communicating risk-management results to senior management. OPIC and some foreign ECAs with which we spoke have CROs and cited benefits of this function, including risk assessment that is independent from other business functions.
For example, OPIC officials said that OPIC’s CRO function is carried out by a small unit led by a Director of Risk Management that reports directly to the agency’s Chief Financial Officer.

Ex-Im does not have a centralized CRO function and instead distributes responsibilities for risk management to several parts of the organization, including the Office of the Chief Financial Officer, Office of General Counsel, Credit Management Group, and Credit Policy Committee. For example, the Office of the Chief Financial Officer’s responsibilities include loss modeling, determining credit subsidy estimates, and portfolio monitoring. The Office of General Counsel’s responsibilities include conducting due diligence on transaction participants to manage reputational risk and assisting in documenting transactions. The Credit Management Group takes the lead in reviewing and recommending broad credit policy and underwriting standards. Finally, the Credit Policy Committee is responsible for formulating, coordinating, and making recommendations to Ex-Im’s Board of Directors in the areas of country risk, sovereign and private-sector risk, changing or modifying the CLS, and addressing other risk issues.

As of February 2013, Ex-Im officials told us that they were analyzing the possibility of establishing a more centralized CRO function. The officials said that in performing this analysis, they were reviewing other organizations that have a CRO, including OPIC, the World Bank, the International Finance Corporation, and the African Development Bank. Careful consideration of the potential benefits of a CRO function and the extent to which the agency’s current structure comprehensively addresses enterprisewide risks is critical given Ex-Im’s growing financial exposure. Further, taking into account the potential expansion of its risk-management activities, such as the implementation of soft sublimits and regular stress testing, will be important for Ex-Im’s analysis.
In addition to the three recommendations discussed previously, the IG recommended that Ex-Im: (1) develop a systematic approach for modeling portfolio risk, including identifying appropriate qualitative risk factors; (2) with the assistance of external experts, implement a formal framework for the use of financial models, including procedures for model validation; (3) review risk metrics and reporting procedures to enhance transparency and to better inform key stakeholders; and (4) amend its by-laws to provide for oversight of an agencywide risk-management function by Ex-Im’s Board of Directors.

Ex-Im has taken actions to address the first three of these recommendations. As already noted, Ex-Im incorporated qualitative risk factors into its loss estimation model. In addition, Ex-Im hired a contractor to serve as an external expert in reviewing and analyzing Ex-Im’s loss estimation model and plans to conduct external validation of future financial models. Ex-Im also began issuing a quarterly default report and is identifying portfolio management best practices—including risk metrics and reporting procedures—through a review conducted by subject-matter experts. Ex-Im disagreed with the fourth recommendation. Ex-Im stated that the agency’s charter does not provide this oversight function to the Board of Directors, but rather provides the President of Ex-Im broad operational authority for the management of Ex-Im, including oversight of all of Ex-Im’s risk-management functions.

Ex-Im’s annual authorizations increased from about $12 billion in 2006 to nearly $36 billion in 2012, an increase of about 195 percent. Over the same period, Ex-Im’s staff level, as measured by full-time equivalents (FTE), increased from 380 to 390 FTEs, about 3 percent (see fig. 9). The rapid increase in business volume, coupled with a modest growth in FTEs, creates potential operational risks for Ex-Im.
If demand for Ex-Im’s services exceeded its capacity, the agency’s ability to properly underwrite and monitor transactions might suffer. Agencywide, the average dollar amount of annual authorizations per FTE rose from $32 million in 2006 to about $92 million in 2012, an increase of more than 150 percent. Over the same period, the number of transactions per FTE rose from 7.0 to 9.7, an increase of 38 percent.

Ex-Im acknowledged that its current resources would not be sufficient for the high levels of activity it expected to see in the coming years. In addition, Ex-Im division managers with whom we spoke noted the strain that growing workloads placed on employees and said they could use additional staff. Ex-Im officials stated that risks to the agency have been increasing as a result. While the officials told us that the increased business volume primarily had affected the underwriting function, the impact had been mitigated somewhat by the agency’s delegation of some underwriting to private lenders for working capital loan guarantees. However, the officials said that Ex-Im’s other transaction-related functions, including legal and monitoring activities, were expected to have significantly higher workloads as transactions complete the underwriting phase and move on to other phases.

Ex-Im has taken some steps to manage its increased workload. Ex-Im asked for additional administrative resources in its annual budget requests, in part to hire more staff. For example, in its 2013 budget request, Ex-Im requested a $7 million increase in administrative resources to support underwriting and small business outreach. While acknowledging the constrained federal budget environment, Ex-Im officials said that future budget requests likely also would request resources for additional staff. In the interim, Ex-Im officials said that when vacancies occurred, they allocated the positions to areas of highest need rather than automatically refilling the vacancies.
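The per-FTE workload figures cited above follow directly from the reported totals. The sketch below reproduces them and adds a purely hypothetical benchmark check (the $75 million-per-FTE threshold is a placeholder, not a figure from Ex-Im or GAO):

```python
# Reproducing the reported per-FTE workload metrics and sketching a
# hypothetical benchmark check. The 2006/2012 totals are as reported;
# the benchmark value is an invented placeholder.

def per_fte(total, ftes):
    return total / ftes

auth_2006, fte_2006 = 12_000, 380     # authorizations in $ millions, FTEs
auth_2012, fte_2012 = 36_000, 390

m2006 = per_fte(auth_2006, fte_2006)  # ~$32M of authorizations per FTE
m2012 = per_fte(auth_2012, fte_2012)  # ~$92M per FTE
growth = (m2012 - m2006) / m2006      # consistent with "more than 150 percent"

BENCHMARK = 75.0                      # hypothetical $M-per-FTE threshold
needs_mitigation = m2012 > BENCHMARK  # would trigger added risk controls

print(round(m2006), round(m2012), f"{growth:.0%}", needs_mitigation)
```

In practice such a benchmark would be set per functional area (underwriting, monitoring, legal) rather than only agencywide.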
Ex-Im also hired a consultant to identify best practices for improving operational efficiency of the monitoring function. In addition, Ex-Im officials said they planned to update the agency’s 2009-2012 Human Capital Plan following a forthcoming revision to Ex-Im’s strategic plan. Ex-Im’s workforce planning process involves assessing its current workforce, anticipating future needs, analyzing gaps, and developing strategies to address those gaps. Although Ex-Im has acknowledged growing risks associated with its increasing workload, it has not formally determined the level of business it can prudently manage—either agencywide or within specific functional areas—with a given level of resources. For example, while Ex-Im has reported the average number and dollar amount of authorizations per FTE, officials stated that they have not determined the level at which operational risks are too high. Additionally, Ex-Im officials within different functional areas were unable to provide formal, documented assessments of resource needs. As previously noted, our internal control standards state that agencies should develop an approach for risk management based on how much risk can be prudently accepted. In addition, these standards indicate that agencies should decide upon specific control activities to manage or mitigate risks entitywide and at each activity level. Ex-Im officials said the dramatic increase in business was not anticipated and that the agency historically did not need to make major workforce adjustments because its business volume was stable. However, without benchmarks to determine when workload levels have created too much risk, Ex-Im’s ability to monitor and manage operational risks associated with its already increased business volume may be limited. 
Monitoring workloads against such benchmarks would help Ex-Im determine when additional steps—such as tightening underwriting standards or increasing requirements for lender participation—may be needed to mitigate Ex-Im’s increased risk. Moreover, legislated increases in Ex-Im’s exposure limits provide room for additional increases in Ex-Im’s business volume, and thus Ex-Im could continue to experience strains on its workforce.

In recent years, Ex-Im has assumed an increased role in supporting the export of U.S. goods and services. In part, this increase resulted from a decline in the availability of private-sector credit that accompanied the 2007-2009 financial crisis. For several years, Ex-Im has been self-sustaining for budgetary purposes, although the long-term cost of Ex-Im’s new business is not yet known. In addition, Ex-Im has made recent improvements to its risk management, including enhancements to its loss estimation model and plans for a more systematic approach for stress testing its portfolio. However, the growth in Ex-Im’s portfolio and the spectrum of risks Ex-Im faces underscore the need for continued improvements in risk management. Recommendations made by the Ex-Im IG in September 2012 and further supported by our work point to additional steps that Ex-Im could take to strengthen its risk-management framework. These steps include establishing soft portfolio sublimits and assessing the benefits of a more centralized CRO function. Following through on these recommendations will be critical to help manage the risks and challenges associated with the agency’s greater financial exposure. In addition, our work identified other opportunities for Ex-Im to improve how it monitors, manages, and reports on the risks it faces.
First, while Ex-Im added qualitative factors to its loss model in 2012, the factor that adjusts loss estimates for potential changes in global economic conditions uses a 1-year forecast for speculative-grade corporate bond defaults for all its transactions, regardless of their length. Because many of Ex-Im’s transactions span multiple years, a 1-year default forecast may not represent the best available data for making default adjustments for these transactions. The use of default forecasts or other economic data with a longer time horizon may produce more reliable loss estimates and would be consistent with FASAB guidance on using the best available data for developing cash flow projections.

Second, Ex-Im has not maintained the data necessary to conduct vintage analysis, a technique federal banking regulators have cited as useful for monitoring growing portfolios. Once a sufficient amount of data has been retained, such an analysis could help Ex-Im to assess the early performance of new books of business by providing comparisons to seasoned books at a comparable point in time. It could also provide an additional early warning indicator to assist Ex-Im in taking timely actions to mitigate emerging risks. Such data also have the potential to strengthen Ex-Im’s future loss modeling efforts by providing additional information about when defaults occur over the life of a transaction.

Third, Ex-Im has made progress toward implementing a systematic approach to stress testing its portfolio, but has not yet made plans to report the scenarios and results to Congress. Providing this information to Congress—potentially as part of Ex-Im’s annual report—would be consistent with federal internal control standards for effective external communication and would aid congressional oversight of the agency.
Finally, although Ex-Im has recognized and taken some steps to address workload challenges, it has not developed benchmarks for the level of business it can properly support with a given level of resources. This is contrary to federal internal control standards, which indicate that agencies should develop a risk-management approach based on how much risk can be prudently accepted. Ex-Im’s workload challenges may continue to grow because of increases in Ex-Im’s exposure and exposure limit, coupled with resource constraints in the current budgetary environment. In the absence of workload benchmarks, Ex-Im lacks a sound basis for workforce planning and for determining when additional control activities might be needed to manage operational risks.

We recommend that the Chairman of the Export-Import Bank of the United States take the following four actions:

To help improve the reliability of its loss estimation model, Ex-Im should assess whether it is using the best available data for adjusting loss estimates for longer-term transactions to account for global economic risk.

To conduct future analysis comparing the performance of newer and older business and to make future enhancements to its loss estimation model, Ex-Im should retain point-in-time, historical data on credit performance.

To help Congress better understand the financial risks associated with Ex-Im’s portfolio, Ex-Im should report its stress test scenarios and results to Congress when such information becomes available.

To help manage operational risks stemming from Ex-Im’s increased business volume, Ex-Im should develop workload benchmarks at the agencywide and functional area levels, monitor workload against these benchmarks, and develop control activities for mitigating risks when workloads approach or exceed these benchmarks.

We provided a draft of this report to Ex-Im for its review and comment. In written comments, which are reproduced in appendix II, Ex-Im agreed with our recommendations.
Ex-Im also provided technical comments that we incorporated into the final report, as appropriate. In its written comments, Ex-Im said it would begin to implement all four of our recommendations in fiscal year 2013. Specifically, Ex-Im said it would implement our recommendation to assess data for adjusting loss estimates for longer-term transactions as part of a spring 2013 reevaluation of its loss estimation model. Concerning our recommendation that Ex-Im retain point-in-time data on credit performance, Ex-Im said it had already begun doing so and would use these data to compare the performance of newer and older books of business and to enhance its loss estimation model. Ex-Im also agreed with our recommendation that it provide stress testing scenarios and results to Congress and said it would include the results of its stress tests in the default reports it submits to Congress. Ex-Im did not indicate whether it would also include its stress test scenarios in the default reports. Because stress testing results are only meaningful in the context of the stress scenarios used, our recommendation emphasizes reporting both types of information to Congress. Finally, concerning our recommendation that Ex-Im set workload benchmarks to help manage operational risk, Ex-Im said it planned to form an Enterprise Risk Committee consisting of senior management from the business, financial, legal, policy, resource, and risk-management areas. Ex-Im stated that operational risk would be one of the first areas the committee examines. We are sending copies of this report to appropriate congressional committees and the Chairman of the U.S. Export-Import Bank. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to examine: (1) how the U.S. Export-Import Bank’s (Ex-Im) business changed in recent years and possible reasons for these changes; (2) how Ex-Im determines credit subsidy costs, loss reserves and allowances, and product fees, and how these processes account for different risks; (3) how Ex-Im’s financial portfolio has performed and the budgetary impact of its programs; and (4) the extent to which Ex-Im has a comprehensive risk-management framework. To assess how Ex-Im’s business changed in recent years and possible reasons for these changes, we analyzed information on Ex-Im’s financial exposure and authorizations, including data from Ex-Im annual reports and data compiled by the agency from its Ex-Im Bank Reporting System. We examined Ex-Im’s total exposure levels for each year from 1990 through 2012 to identify growth trends both in nominal and inflation-adjusted terms. We also examined Ex-Im’s annual authorizations for each year from 2006 through 2012. We chose that period in order to cover Ex-Im’s pre- and post-financial crisis business activity. We analyzed trends in the dollar volume of Ex-Im’s total authorizations each year, the volume of total U.S. exports, as well as Ex-Im authorization levels disaggregated by product type, region, and industry sector. To identify possible reasons for trends in Ex-Im’s business activity, we reviewed a variety of Ex-Im documents, including annual reports, competitiveness reports, the agency’s strategic plan for 2010-2015, and reports issued by Ex-Im’s Office of Inspector General (IG). We also reviewed relevant reports by the Basel Committee on Banking Supervision, academics, and foreign export credit agencies (ECAs).
Because Ex-Im can play a countercyclical role in export finance (i.e., expanding when private sector credit is retrenching), we also analyzed data related to the availability and cost of credit from 2006 through 2012. In particular, we analyzed (1) calendar 2006-2012 survey data from the Board of Governors of the Federal Reserve System and the European Central Bank on the percentage of commercial lenders that were tightening or easing lending standards and (2) calendar 2006- 2012 data on corporate bond risk premiums from Bank of America-Merrill Lynch. We also analyzed International Monetary Fund data on the volume of U.S. exports over fiscal years 1990 through 2012 in order to compare changes in export volume with changes in Ex-Im authorizations. Additionally, we interviewed Ex-Im officials and six representatives of industry trade associations and research organizations about reasons for changes in the agency’s business volume. We selected the trade association and research organization representatives to interview on the basis of a literature review of relevant published articles, prior GAO work on Ex-Im and international trade issues, and recommendations from knowledgeable federal agency and industry officials about individuals with expertise on Ex-Im’s activities or export financing generally. Our literature review focused on publications that cited Ex-Im, export credit agencies, trade finance, and export finance. Three of the entities we interviewed represented industry trade associations (the Coalition for Employment through Exports, the Berne Union, and the Bankers’ Association for Finance and Trade and International Financial Services Association) and three represented research groups (Peterson Institute for International Economics, the Rhodium Group, and the Research Division of the U.S. International Trade Commission). 
Further, to obtain perspectives on ECA growth generally, we conducted telephone interviews with officials from four foreign ECAs: Export Development Canada (Canada), Compagnie Française d’Assurance pour le Commerce Extérieur (France), UK Export Finance (United Kingdom), and Japan Bank for International Cooperation (Japan). We selected these ECAs based on their similarity to Ex-Im in terms of: (1) their role in supplementing rather than competing with private markets for export credit support, (2) the types of export credit products they offer, and (3) the presence of a small business directive or mandate. To examine how Ex-Im determines subsidy costs, loss reserves and allowances, and product fees, and how these processes account for different risks, we reviewed relevant requirements and guidance. This included the Federal Credit Reform Act of 1990; the Office of Management and Budget (OMB) Circular No. A-11 (Preparation, Submission, and Execution of the Budget); the Statement of Federal Financial Accounting Standards No. 2 (Accounting for Direct Loans and Loan Guarantees); the Federal Accounting Standards Advisory Board’s Federal Financial Accounting and Auditing Technical Release 6 (Preparing Estimates for Direct Loan and Loan Guarantee Subsidies under the Federal Credit Reform Act); and the Organisation for Economic Cooperation and Development Arrangement on Officially Supported Export Credits (OECD Arrangement). We identified types of risks applicable to Ex-Im by reviewing Ex-Im’s documents, including annual reports and policy manuals, as well as prior GAO work on credit programs and risk management. To examine how Ex-Im accounts for these risks, we reviewed information on the tools and processes Ex-Im uses to determine subsidy costs and loss reserves and allowances, including Ex-Im’s loss estimation model.
We reviewed documentation on the structure of the model, updates made to the model in 2012, and findings and recommendations made by the Ex-Im IG and Ex-Im’s independent financial statement auditor about the model. We also reviewed relevant workpapers from the independent auditor’s audit of Ex-Im’s 2012 financial statements. We also reviewed Congressional Budget Office and OMB reports on discounting methodologies for federal credit programs. To obtain additional information about Ex-Im’s subsidy cost and loss reserve and allowance calculations, we interviewed Ex-Im and Ex-Im IG officials, representatives from Ex-Im’s independent financial statement auditor, and OMB officials responsible for approving Ex-Im’s subsidy cost estimation methodology. In addition, we spoke with other federal agencies that provide international credit—including the Small Business Administration, the Department of Agriculture’s Foreign Agricultural Service and Farm Service Agency, and the Overseas Private Investment Corporation (OPIC)—and the four foreign ECAs cited previously about their processes for estimating program costs and reserving for future losses. To obtain information about how Ex-Im sets product fees and what risks they account for, we reviewed fee-setting requirements contained in the OECD Arrangement and Ex-Im analyses used as a basis to adjust fees for different products. An assessment of the appropriateness of the fee levels resulting from the OECD Arrangement was outside the scope of our review. We interviewed Ex-Im officials and officials from the U.S. Department of the Treasury responsible for negotiating for the United States at OECD, including negotiations on minimum premiums. We also discussed with the four foreign ECAs how they set product fees.
To assess how Ex-Im’s financial portfolio has performed and the budgetary impact of its programs, we reviewed agency data and documentation—including Ex-Im performance data, annual reports, financial statements, and quarterly default reports—and information contained in the President’s budgets and Federal Credit Supplements. Specifically, to determine how Ex-Im’s portfolio has performed, we analyzed data Ex-Im compiled from the Ex-Im Bank Reporting System on active transactions—including authorized and disbursed amounts, amounts in arrears, claims paid, and recoveries—to calculate overall default rates and default rates by product type. We examined end-of-fiscal-year data for 2006 through 2012 and data as of December 31, 2012. We reviewed federal banking regulator guidance on default monitoring, including vintage analysis, and determined whether Ex-Im conducted or maintained data to perform such an analysis. In addition, we reviewed data on the ratio of Ex-Im’s impaired assets to total exposure from 2008 through 2012. To determine the budgetary impact of Ex-Im’s programs, we reviewed Ex-Im’s analysis of the funds it has been appropriated and the funds it has sent to the U.S. Treasury (the net of upward and downward credit subsidy reestimates, cancelled authority, returned collections, and rescissions) from 1992 through 2012. To do this, we compared Ex-Im’s analysis to data contained in appropriation acts, the President’s budgets, and Ex-Im’s financial statements for the same years. Based on this comparison, we determined that Ex-Im’s analysis was reasonable. Additionally, we analyzed Ex-Im’s annual credit subsidy reestimates for 1992 through 2012 using information in the President’s budgets. We discussed the performance and budget data with knowledgeable Ex-Im officials to ensure that we interpreted the data correctly.
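The report does not spell out the exact default-rate formula, but a rate of this kind is conventionally built from the inputs listed above (disbursed amounts, amounts in arrears, claims paid, and recoveries). The sketch below is our assumption for illustration, not necessarily the precise method GAO or Ex-Im used:

```python
# One plausible default-rate formulation (an assumption for illustration;
# the precise formula used by GAO or Ex-Im may differ).
def default_rate(arrears: float, claims_paid: float, recoveries: float,
                 total_disbursed: float) -> float:
    """Share of disbursed dollars in arrears or paid out as claims,
    net of amounts recovered."""
    return (arrears + claims_paid - recoveries) / total_disbursed
```

Under this formulation, an overall rate below 1 percent simply means the net of arrears and claims is less than 1 percent of total disbursements.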
To assess the extent to which Ex-Im has a comprehensive risk-management framework, we reviewed the practices Ex-Im uses to manage risks at the transaction, portfolio, and agency level. At the transaction level, we reviewed Ex-Im’s policies and procedures related to the underwriting, monitoring and restructuring, and claims and recovery functions. We also interviewed Ex-Im senior management and division managers responsible for various products about these procedures. To assess how Ex-Im manages risks at the portfolio level, we reviewed a September 2012 report by the Ex-Im IG on Ex-Im’s portfolio risk management and followed up with Ex-Im officials to determine the actions they had taken in response to the report’s recommendations. We also identified relevant criteria in GAO’s Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool and documents from financial industry groups describing sound practices for managing financial portfolios. Additionally, we discussed portfolio and general risk-management practices with officials from the federal credit agencies and foreign ECAs cited previously, a representative from the International Association of Credit Portfolio Management, and Ex-Im officials. Finally, we reviewed information related to potential operational risks stemming from Ex-Im’s increasing business volume and workload and identified relevant criteria from our internal control standards. We limited our work in this area to Ex-Im’s human capital management. Specifically, we analyzed Ex-Im data on the number of full-time equivalents and the number and dollar volume of transactions authorized from 2006 through 2012. We also reviewed Ex-Im’s Human Capital Plan for 2009-2012, Reauthorization Act of 2012 Business Plan, and congressional budget justifications from 2008 through 2013, and internal Ex-Im analyses of agency workloads. Additionally, we interviewed Ex-Im officials responsible for resource management.
To assess the reliability of the data provided by Ex-Im, including exposure and authorization amounts and performance statistics, we (1) reviewed information related to data elements, system operations, and controls; (2) performed electronic testing for obvious errors in accuracy and completeness; (3) compared data to published documents; and (4) interviewed Ex-Im officials knowledgeable about the data. To assess the reliability of data we used to describe capital market conditions and U.S. exports, we (1) reviewed related documentation, (2) interviewed knowledgeable officials about the data, and (3) performed electronic testing and inspected the data for missing observations and outliers. We concluded that the data elements we used were sufficiently reliable for purposes of describing Ex-Im’s growth and financial performance. We conducted this performance audit from June 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Westley, Assistant Director; Daniel Alspaugh; Kathryn Bolduc; Marcia Carlsen; Pamela Davidson; Cole Haase; Michael Hoffman; Christine Houle; Susan Irving; Risto Laboski; Felicia Lopez; Colleen Moffatt Kimer; Melissa Kornblau; Robert Pollard; Barbara Roesmann; Jessica Sandler; Eva Su; and Celia Thomas made key contributions to this report.

Ex-Im helps U.S. firms export goods and services by providing a range of financial products. The Export-Import Bank Reauthorization Act of 2012 increased the statutory ceiling on the agency's total exposure to $140 billion in 2014.
The act also requires GAO to evaluate Ex-Im's growth and the effectiveness of its risk management. This report discusses (1) how Ex-Im's business changed in recent years and possible reasons for these changes; (2) how Ex-Im determines credit subsidy costs, loss reserves and allowances, and product fees, and how these processes account for different risks; (3) how Ex-Im's financial portfolio has performed and the budgetary impact of its programs; and (4) the extent to which Ex-Im has a comprehensive risk-management framework. To address these objectives, GAO analyzed Ex-Im's financial data, policies and procedures, and processes for calculating program costs and loss reserves. GAO also interviewed Ex-Im officials and other entities involved in export financing. From fiscal year 2008 to fiscal year 2012, the U.S. Export-Import Bank's (Ex-Im) outstanding financial commitments (exposure) grew from about $59 billion to about $107 billion, largely in long-term loans and guarantees. Factors associated with this growth include reduced private-sector financing following the financial crisis and Ex-Im's authorization of direct loans--a product not offered by export credit agencies in some other countries--to fill the gap in private-sector lending. Ex-Im's processes for determining credit subsidy costs, loss reserves and allowances, and fees account for multiple risks. To implement the Federal Credit Reform Act of 1990 and other requirements, Ex-Im calculates subsidy costs and loss reserves and allowances with a loss model that uses historical data and takes credit, political, and other risks into account. Consistent with industry practices, Ex-Im added factors to the model in 2012 to adjust for circumstances that may cause estimated credit losses to differ from historical experience. Opportunities exist to further improve the model. 
For example, Ex-Im uses a 1-year forecast of certain bond defaults to predict possible changes in loss estimates from changed economic conditions. However, a short-term forecast may not be appropriate for adjusting estimated defaults for longer-term products. Ex-Im's fees are generally risk-based and, for medium- and long-term products (about 85 percent of Ex-Im's exposure), guided by international agreements that set minimum fees that account for credit and political risk. As of December 2012, Ex-Im reported an overall default rate of less than 1 percent. However, Ex-Im has not maintained data needed to compare the performance of newer books of business with more seasoned books at comparable points in time, a type of analysis recommended by federal banking regulators. Also, without point-in-time data showing when defaults occur, the precision of Ex-Im's loss model may be limited. Ex-Im has been self-sustaining since 2008 and has generated receipts for the government. But, because Ex-Im's portfolio contains a large volume of recent transactions, the long-term impact of this business on default rates and the federal budget is not yet known. Ex-Im has been developing a more comprehensive risk-management framework but faces operational risks. Ex-Im manages credit and other risks through transaction underwriting, monitoring, and restructuring. Ex-Im also started addressing recommendations by its Inspector General (IG) about portfolio stress testing, thresholds for managing portfolio concentrations, and risk governance. GAO’s review of internal control standards and industry practices indicates that the IG’s recommendations represent promising techniques that merit continued attention. Ex-Im has not yet made plans to report its stress test scenarios and results to Congress, although doing so would aid congressional oversight and be consistent with internal control standards for effective external communication. 
Ex-Im faces potential operational risks because the growth in its business volume has strained the capacity of its workforce. Ex-Im has determined that it needs more staff, but it has not formally determined the level of business it can properly manage. GAO internal control standards state that agencies should develop a risk-management approach based on how much risk can be prudently accepted. Without benchmarks to determine when workload levels have created too much risk, Ex-Im’s ability to manage its increased business volume may be limited. Ex-Im should (1) assess whether it is using the best available data for adjusting the loss estimates for longer-term transactions to account for global economic risk, (2) retain point-in-time performance data to compare the performance of newer and older business and to enhance loss modeling, (3) report stress testing scenarios and results to Congress, and (4) develop benchmarks to monitor and manage workload levels. Ex-Im agreed with each of these recommendations.
To assist workers who are laid off as a result of international trade, Congress passed the Trade Expansion Act of 1962 and created the Trade Adjustment Assistance program. Historically, the main benefits available through the program have been extended income support and training. Participants are generally entitled to income support, but the amount of funds available for training is limited by federal statute. Labor certifies groups of laid-off workers as potentially eligible for TAA benefits and services by investigating petitions that are filed on the workers’ behalf. Workers are eligible for TAA if they were laid off as a result of international trade and were involved in making a product or supplying component parts to or performing finishing work for directly affected firms. Workers served by the TAA program have generally been laid off from the manufacturing sector. Congress has amended the TAA program a number of times since its inception. For example, in 1974 Congress eased program eligibility requirements, and in 1988 Congress added a requirement that workers be in training to receive income support. In 1993 Congress created a separate North American Free Trade Agreement Transitional Adjustment Assistance (NAFTA-TAA) program specifically for workers laid off because of trade with Canada or Mexico. The most recent amendments to the TAA program were included in the TAA Reform Act of 2002 (Pub. L. No. 107-210), which was signed into law in August 2002. The Reform Act consolidated the former TAA and NAFTA-TAA programs into a single TAA program, doubled the amount of funds available for training annually, expanded program eligibility to more workers, extended the time periods covered by the program, and added new benefits. Under the current TAA program, eligible participants have access to a wider range of benefits and services than before, including: Training. 
Participants may receive up to 130 weeks of training, including 104 weeks of vocational training and 26 weeks of remedial training (e.g., English as a second language or literacy). On-the-job training is also available under TAA. Participants in TAA-approved training must attend training full-time. Extended income support, or Trade Readjustment Allowances (TRA). Participants may receive up to 104 weeks of extended income support benefits after they exhaust the 26 weeks of UI benefits available in most states. This total includes 78 weeks while participants are completing vocational training and an additional 26 weeks, if necessary, while participants are completing remedial training. The amount of extended income support payments in a state is set by statute at the state’s UI benefit level. During their first 26 weeks of extended income support, participants must be enrolled in training, have completed training, or have a waiver from this requirement; to qualify for more than 26 weeks of extended income support, participants must be enrolled in training. The TAA statute lists six reasons why a TAA participant may receive a waiver from the training requirement, including that the worker possesses marketable skills or that the approved training program is not immediately available. States must review participants’ waivers at least every 30 days and if necessary may continue to renew participants’ waivers each month throughout the initial 26 weeks of extended income support. Job search and relocation benefits. Payments are available to help participants search for a job in a different geographical area and to relocate to a different area to take a job. Participants may receive up to a maximum of $1,250 to conduct a job search. The maximum relocation benefit includes 90 percent of the participant’s relocation expenses plus a lump sum payment of up to $1,250. Health Coverage Tax Credit (HCTC). 
Eligible participants may receive a tax credit covering 65 percent of their health insurance premiums for certain health insurance plans. To be eligible for the credit, trade-affected workers must be either receiving extended income support payments, or they must be eligible for extended income support but are still receiving UI payments, or they must be recipients of benefits under the wage insurance program. As a result, trade-affected workers who are still receiving UI rather than extended income support may register for the HCTC only if they are in training, have completed training, or have a waiver from the training requirement. Wage insurance. The wage insurance program—known as the Alternative TAA (ATAA) program—is a demonstration project designed for workers age 50 and older who forgo training, obtain reemployment within 26 weeks, but take a pay cut. Provided the participant’s annual earnings at his or her new job are $50,000 or less, the benefit reimburses 50 percent of the difference between the participant’s pre- and postlayoff earnings up to a maximum of $10,000 over 2 years. The process of enrolling trade-affected workers in the TAA program begins when a petition for TAA assistance is filed with Labor on behalf of a group of laid-off workers. Petitions may be filed by entities including the employer experiencing the layoff, a group of at least three affected workers, a union, or the state or local workforce agency. The TAA statute lays out certain basic requirements that all certified petitions must meet, including that a significant proportion of workers employed by a company be laid off or threatened with layoff. In addition to meeting these basic requirements, a petition must demonstrate that the layoff is related to international trade. The law requires Labor to complete its investigation, and either certify or deny the petition, within 40 days after it has received it. 
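The benefit arithmetic described above (the maximum weeks of Trade Readjustment Allowances and the ATAA wage-insurance benefit) can be sketched as follows. This is a hypothetical Python illustration: the function names are ours, and the real eligibility rules involve additional conditions, such as the training-enrollment requirement for extended income support.

```python
# Hypothetical sketch of the TAA benefit arithmetic; actual eligibility
# involves additional conditions (e.g., training enrollment or a waiver
# during the first 26 weeks of extended income support).

def max_tra_weeks(completing_remedial_training: bool) -> int:
    """Maximum weeks of Trade Readjustment Allowances after UI is
    exhausted: 78 weeks while completing vocational training, plus 26
    more weeks if remedial training is needed (104 weeks total)."""
    return 78 + (26 if completing_remedial_training else 0)

def ataa_benefit(old_annual_earnings: float, new_annual_earnings: float) -> float:
    """Two-year ATAA wage-insurance benefit: 50 percent of the annual
    earnings gap, paid over up to 2 years, capped at $10,000, available
    only if the new job pays $50,000 or less per year."""
    if new_annual_earnings > 50_000:
        return 0.0  # new job pays too much to qualify
    annual_gap = max(0.0, old_annual_earnings - new_annual_earnings)
    return min(0.5 * annual_gap * 2, 10_000.0)
```

For example, a worker who moves from $40,000 to $30,000 a year has a $10,000 annual gap, so 50 percent of that gap over 2 years reaches the $10,000 cap.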
When Labor has certified a petition, it notifies the relevant state, which has responsibility for contacting the workers covered by the petition, informing them of the benefits available to them, and telling them when and where to apply for benefits. Workers generally receive services through a consolidated service delivery structure called the one-stop system, where they can access a broad range of services beyond TAA, including the Workforce Investment Act (WIA) Dislocated Worker program, the Wagner-Peyser Employment Service (ES) program, and services funded by the WIA National Emergency Grants. Training for trade-affected workers may be funded by TAA or by one of the WIA funding sources. Workers often meet one-on-one with a case manager who may assess workers’ skills and help decide what services they need. Because the TAA program has limited funds that can be used for case management and program administration, these case management services are often performed by ES or WIA Dislocated Worker program staff. When this occurs, participants are often co-enrolled in WIA or ES as well as TAA. About $750 million was appropriated for income support for trade-affected workers for fiscal year 2005, while another $259 million was appropriated for training, job search and relocation allowances, and administrative costs. Of the $259 million, $220 million is set aside for training, and Labor allocates 75 percent of it to states according to a formula that takes into account each state’s previous year allocations, accrued expenditures, and participant levels. Labor holds the remaining 25 percent of training funds in reserve, to distribute to states throughout the year according to need. To cover administrative costs associated with training under the TAA program, Labor allocates additional administrative funds to each state equal to 15 percent of its training allocation. Labor is responsible for monitoring the performance of the TAA program.
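The fiscal year 2005 training-fund split described above reduces to fixed percentages. A minimal sketch (the function names are ours):

```python
# Hypothetical sketch of the TAA training-fund split described above.
def split_training_funds(training_appropriation: float) -> tuple:
    """Return (formula allocation to states, Labor's reserve):
    75 percent allocated by formula, 25 percent held in reserve."""
    return 0.75 * training_appropriation, 0.25 * training_appropriation

def state_admin_allocation(state_training_allocation: float) -> float:
    """Administrative funds equal 15 percent of a state's training allocation."""
    return 0.15 * state_training_allocation
```

Applied to the $220 million training set-aside, the formula allocation would be $165 million, with $55 million held in reserve for distribution during the year.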
In fiscal year 1999, Labor introduced a new participant outcomes reporting system, the Trade Act Participant Report (TAPR), which was designed to collect national information on TAA program participants, services, and outcomes. States are required to submit TAPR reports to Labor each quarter, with data on individuals who exited the TAA program. The TAPR data submitted by states are used to calculate national and state outcomes on the TAA performance measures for each fiscal year, which include (1) the percentage of participants that found jobs after exiting the program (reemployment rate), (2) the percentage of those employed after exiting the program who were still employed 9 months later (retention rate), and (3) the earnings in their new jobs compared to prior earnings (wage replacement rate). Labor’s guidance requires states to include in their TAPR submissions all TAA participants who exit the program, that is, stop receiving benefits or services. Under Labor’s guidance, a participant is defined as any individual who receives any TAA benefit or service, including extended income support payments, training, or job search and relocation allowances. According to this definition, participants would include those who, for example, received only extended income support and a waiver that allowed them to forgo training. TAPR reports include data on each exiter’s characteristics, services received, and employment outcomes. Data on characteristics, for example, should include the worker’s date of birth, gender, ethnicity, educational level, and layoff date. Data on services received should include data on training (such as dates the participant entered and completed training, and the type of training received), on other TAA benefits received (such as extended income support, job search allowance, and relocation allowance), and on co-enrollment in WIA or other federal programs.
Data on outcomes should include the date the worker exited the TAA or other federal program, whether the worker was employed in the first full quarter after exit, whether the worker was employed in the third full quarter after exit, and the worker’s earnings in these quarters. Where possible, outcome data are to be obtained from state UI wage records. Labor uses the TAPR data to track TAA program outcomes against national goals. Unlike the WIA programs, however, TAA has no individual state performance goals, and states do not receive incentives or sanctions based on their performance levels, nor are they otherwise held accountable for their performance. At the national level, the TAA program has failed to meet at least one of its performance goals each year since 2001, the first year for which goals were set. Table 1 shows goals and outcomes for fiscal years 2004 and 2005. In addition to submitting TAPR data, states also submit data to Labor on TAA services and expenditures each quarter through the Form 563. Form 563 includes counts of participants receiving TAA services, while TAPR includes individual-level data on former participants who have exited the program. States are required to submit each quarter’s Form 563 data about 1 month after the end of the quarter. Form 563 includes data on services such as the number of new training participants (by type of training— occupational, remedial, and on-the-job), the number of workers in training at the end of the quarter, the number of training waivers issued, and the number of recipients of job search and relocation allowances, and expenditures on extended income support. In response to an Office of Management and Budget (OMB) initiative, Labor recently began requiring states to implement common performance measures for WIA programs. OMB established a set of common measures to be applied to most federally funded job training programs that share similar goals. 
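The three TAPR performance measures described earlier (reemployment, retention, and wage replacement rates) reduce to simple ratios. The sketch below is ours; Labor's official definitions specify cohorts and timing windows in more detail:

```python
# Hypothetical sketch of the three TAPR performance measures; Labor's
# official definitions contain additional detail on cohorts and timing.

def reemployment_rate(employed_after_exit: int, total_exiters: int) -> float:
    """Share of exiters who found jobs after exiting the program."""
    return employed_after_exit / total_exiters

def retention_rate(still_employed_9_months_later: int,
                   employed_after_exit: int) -> float:
    """Of those employed after exit, the share still employed 9 months later."""
    return still_employed_9_months_later / employed_after_exit

def wage_replacement_rate(new_earnings: float, prior_earnings: float) -> float:
    """Earnings in the new job as a share of pre-layoff earnings."""
    return new_earnings / prior_earnings
```

Note that the retention rate's denominator is only those exiters who were employed after exit, not all exiters, which is why the two rates can move independently.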
Labor further defined the common measures for all of its Employment and Training Administration programs and required states to implement these measures beginning July 1, 2005. Because it operates on a fiscal year rather than a program year basis, Labor required the TAA program to implement the measures by October 1, 2005. In addition to standardizing the performance measures, the common measures guidance also standardizes the definition of exiters across all programs. An exiter is defined as any participant who has not received a service funded by the program or funded by a partner program for 90 consecutive calendar days and is not scheduled for future services. The exit date is defined as the last date of service. For TAA participants, the exit date may be the training completion date, but if additional services are provided after training is completed, or if the participant is continuing to receive TRA, he or she would not be exited from the program. Some services are not significant enough to delay exiting, however. These include receiving UI benefits, some case management services, and postplacement follow-up. The process of collecting and reporting TAA performance data involves all three levels of government. Participant forms and case files are generally collected and organized by frontline staff in local areas, usually at the one-stop. In some states, local staff may enter some of the information into an IT system that is either integrated with the state’s IT system or able to create an electronic file to transmit to the state. In other states, paper case files are physically transferred to state officials for data entry. At the state level, TAA data are often maintained in more than one IT system. For example, benefit payment information is usually in the same IT system that houses Unemployment Insurance payment information.
However, information on participant characteristics and services (including status of training and whether or not the individual has exited) resides in one or more other systems. In some states, this participant information remains as a paper case file until it is determined that the participant has exited, and it is time to include him or her in the TAPR submission. To compile the TAPR submission, state agencies administering TAA typically match participant records to their state’s UI wage record system to determine whether these former participants are employed and, if so, the wages they are earning. In some states, staff must manually enter information obtained from the UI wage record system into the TAPR file, while other states have IT systems capable of automatically matching UI data with participants’ records. States may also use the Wage Record Interchange System (WRIS) to match participant records to other states’ UI wage records for participants who found jobs in other states. Some states may link participant records to other partner programs’ IT systems to track activities across programs or to determine if the participant has exited all programs. Once Labor receives the TAPR data, officials perform edit checks and calculate performance levels at the national and state level. TAA performance data are incomplete and may be inaccurate. States report that they are not including all TAA participants in their TAPR performance data, despite Labor’s requirement that all participants be included after they exit the program. In addition, some states may not have documentation to verify the accuracy of participants’ exit dates in TAPR and are not using all available data sources to determine TAA participants’ employment outcomes. Furthermore, 1 state in 5 is using manual rather than automated processes to compile TAPR data, and others have IT systems with limited capacity to control for errors. 
Having such IT systems could hinder states’ ability to ensure that the data are complete and accurate. However, many states are planning to make improvements to their TAA IT systems’ capabilities this year. Some state TAA officials said that resource constraints have made it difficult to ensure their data are complete and accurate. Many states are not including all exiting participants in the TAPR submissions that Labor uses to calculate performance outcomes for TAA participants, such as the reemployment and retention rates. Participants who received training were most likely to be included in states’ TAPR data, but those who had training waivers and had not received training were least likely to be included. Only 23 of the 46 states we surveyed reported that they are including in their TAPR submissions to Labor all exiting participants, regardless of the type of benefit or service they received. Fourteen states reported that participants who received waivers but did not receive training were unlikely to be included in the TAPR (see fig. 1), and 3 states reported that they do not include any participants unless they receive training. This finding is consistent with a review by Pennsylvania’s state auditor that found that participants who received waivers from training were not included in their TAPR submissions. Our review of the TAPR data states submitted to Labor during fiscal year 2005 confirms our survey results—some states appear to be excluding some of their participants in their TAPR data files. For example, 9 states only included in their TAPR submissions participants who received training. Another 12 states had TAPR submissions composed almost exclusively (97 to 99 percent) of participants who received training (see table 2). However, several states did include relatively more of the participants who had not received training. For example, for 6 states, under 60 percent of the participants reported in the TAPR had received training. 
We have no other reliable source of data to help us assess what proportion of participants nationwide actually receive training and, therefore, what the proportion in the TAPR should be. In a recent study that examined services and outcomes for five trade-related layoffs, however, we found that between 9 and 39 percent of potentially eligible TAA participants enrolled in training. Excluding certain participants from the TAPR could skew the TAA performance outcomes calculated by Labor because the outcomes may be disproportionately based on participants who received TAA-funded training. Labor does not have a process in place to ensure that states are including in their TAPR submissions all exiting TAA participants. Labor’s regional offices may review whether states’ TAPR submissions are complete during their state monitoring visits. However, because Labor has not had a standard monitoring tool, there has been no assurance that the regional offices were consistently reviewing whether all exiting participants are reported in states’ TAPR data. Labor officials tell us that they are currently developing a core monitoring guide, but it is not clear if the guide will address this issue. Despite the importance of accurately identifying exiters, the exit dates themselves may not be accurate because some states do not consistently obtain proper documentation to verify the dates. Accurate exit dates are critical to TAA performance data for two reasons. First, a participant’s exit date determines if the individual should be included in the state’s TAPR submission to Labor. Second, the timing of the date of exit determines when a participant’s employment outcomes will be assessed. Labor’s guidance requires that states have documentation for participants’ exit dates but does not specify the type of information that needs to be included in the documentation. 
For example, for participants who received training, it does not specify that the documentation should demonstrate that training was actually completed. Such documentation could include certificates of training completion, attendance records, or reports from training providers. TAA officials in 4 of the 5 states we visited said they had a process for obtaining documentation to show that participants completed training, but it is not clear whether such processes are uniformly followed by states. Officials in 3 states said that they receive training certifications, either from participants or from trainers, that show that training was completed. In another state, a TAA official said that the state sends participants a follow-up survey after training to verify that the training was completed, but some participants do not return the survey. Officials in 1 of the 5 states we visited said they did not have a process for certifying or documenting that participants completed training. A recent review in 4 other states by Labor’s Office of Inspector General (OIG) confirmed that states do not have effective processes for verifying exit dates. In its review of 150 TAA case files, the OIG found that there was no documentation in any of the reviewed files to verify that the participants had completed the program on the recorded date of exit. OIG reported that states often recorded an anticipated date of exit when participants first entered the program, but did not collect any further documentation to confirm that participants had completed the training, and if so, whether they had completed training on the originally recorded date. The OIG recommended that Labor ensure that states collect and record TAA participants’ actual date of exit, maintain the source documentation for such exit dates, and make the documentation readily available for review. According to an OIG official, Labor had not implemented these recommendations as of January 2006. 
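The 90-day exit rule described earlier (a participant exits only after 90 consecutive calendar days without a program- or partner-funded service, with the exit date set to the last date of service) can be sketched in a few lines. This is a minimal illustration only; the function name, arguments, and record layout are hypothetical and not part of Labor's or any state's actual systems.

```python
from datetime import date, timedelta

# Hypothetical sketch of the common-measures exit rule: a participant
# exits only after 90 consecutive calendar days with no program- or
# partner-funded service and no future services scheduled. All names
# here are illustrative, not an actual state IT system schema.

EXIT_GAP = timedelta(days=90)

def determine_exit(service_dates, scheduled_future, as_of):
    """Return the exit date (last date of service) or None if not exited."""
    if not service_dates or scheduled_future:
        return None
    last_service = max(service_dates)
    # Not an exiter until 90 consecutive days have elapsed with no service.
    if as_of - last_service < EXIT_GAP:
        return None
    return last_service

# A participant whose last service was in January, checked in May, has
# exited; checked in February, the 90-day window has not yet elapsed.
print(determine_exit([date(2005, 11, 3), date(2006, 1, 15)],
                     scheduled_future=False, as_of=date(2006, 5, 1)))
print(determine_exit([date(2006, 1, 15)],
                     scheduled_future=False, as_of=date(2006, 2, 1)))
```

Note that under this rule the recorded exit date is always retrospective: staff cannot know a January service was the last one until 90 days have passed, which is one reason anticipated exit dates recorded at program entry are unreliable.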
Some states are not using all available data sources to determine TAA participants’ employment outcomes. Labor requires states to use UI wage records to determine the employment outcomes of participants reported in the TAPR. However, each state’s wage record database includes only wage data on workers within the state and does not have data on participants who found employment in another state. To help track employment outcomes of TAA participants across state lines, states can obtain their employment and earnings information using other methods. Labor encourages states to use WRIS, a data clearinghouse that makes UI wage records available to participating states seeking information on TAA participants who may have found employment outside their state. Thirty-four of the 46 states we surveyed reported that they routinely use WRIS to obtain employment outcome data on former TAA participants (see fig. 2). Three states reported that they do not use WRIS but instead routinely use interstate agreements with individual states to obtain employment outcome data. Opting to use interstate agreements with individual states instead of using WRIS is likely to result in access to fewer states’ UI wage records than states would have if they used WRIS and may result in lower reported outcomes. Seven states use only their own states’ UI wage records to determine participants’ employment outcomes. State TAA officials cited several reasons for not using WRIS, including that it took too long to receive the needed information and it was not a priority for the state. Six states that do not currently use WRIS said that they plan to begin using this system in the future. Nearly half of the 46 states are not routinely using other supplemental information sources even though it may be the only way to collect outcome information for certain participants. 
UI wage records, which cover about 94 percent of workers, do not include some categories of workers, such as self-employed persons, most independent contractors, military personnel, federal government workers, and postal workers. To document the employment status of these workers in the TAPR, states can use supplemental data, such as pay stubs and follow-up surveys sent to participants after they leave the program. Using supplemental data is likely to provide a more complete picture of participant outcomes because it helps states avoid inaccurately recording participants as unemployed in the TAPR. In an earlier report on WIA performance data, 23 of the 50 states told us they needed to use supplemental data in order to meet their expected performance levels for the reemployment measure under WIA (GAO, Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help, GAO-04-657 (Washington, D.C.: June 1, 2004)). Twenty-two states reported that they rarely if ever collect supplemental data to obtain outcome information on TAA participants (see fig. 3). State TAA officials said that they did not collect supplemental data because states’ TAA IT systems lacked the capacity to record supplemental data; they judged data collected through UI wage records and WRIS as sufficient, or collecting supplemental data was not required; and they lacked sufficient resources. Some states reported IT system limitations that could hinder the states’ ability to ensure their TAA data are complete and accurate. Manual compilation processes. In states that use manual processes, staff enter data on employment outcomes into their TAPR data rather than electronically transferring the data from the UI wage record file. Using manual rather than automated processes increases the opportunity for errors to be introduced into the data through data entry. Six states responding to our survey expressed concern that errors in data entry may be one of the main causes of incomplete or inaccurate TAA data.
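In contrast to manual look-ups, an automated match of exiter records against a state’s quarterly UI wage-record file can transfer outcomes without rekeying. The sketch below illustrates the idea; the record layouts, identifiers, and quarter labels are invented for illustration and real state systems differ.

```python
# Illustrative sketch of an automated match between exiting TAA
# participants and a state's quarterly UI wage-record file. All record
# layouts and identifiers are hypothetical.

exiters = [
    {"ssn": "111-11-1111", "exit_quarter": "2005Q1"},
    {"ssn": "222-22-2222", "exit_quarter": "2005Q1"},
]

# UI wage records: (ssn, quarter) -> total wages reported by employers.
ui_wages = {
    ("111-11-1111", "2005Q2"): 6200.00,
    # No 2005Q2 record for the second worker: self-employed, federal,
    # or out-of-state workers would not appear in the state file, which
    # is why WRIS matching and supplemental data matter.
}

def first_quarter_after(q):
    """Advance a 'YYYYQn' quarter label by one quarter (illustrative)."""
    year, n = int(q[:4]), int(q[5])
    return f"{year + 1}Q1" if n == 4 else f"{year}Q{n + 1}"

# Entered-employment status is assessed in the first quarter after exit.
for e in exiters:
    q1 = first_quarter_after(e["exit_quarter"])
    wages = ui_wages.get((e["ssn"], q1))
    e["employed_q1"] = wages is not None
    e["wages_q1"] = wages

print(exiters)
```

In this sketch the second worker is recorded as not employed even if he or she found a job outside the state’s wage-record coverage, which is the kind of understatement that WRIS and supplemental data sources are meant to correct.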
One state’s process for manually compiling the TAPR illustrates opportunities to introduce errors into the data. In this state, staff at the state level enter data on TAA participants’ training contracts into a contract database. To compile the TAPR, they identify participants in the contract database whose training was scheduled to be completed during the quarter covered by the TAPR, and they enter data on those participants into a new spreadsheet. To identify employment outcomes for the TAPR, the staff look up the exiting TAA participants on printouts from the state’s UI wage record system and manually enter data on the participants’ employment status and wages into the spreadsheet. The data from the spreadsheet are then converted into the TAPR reporting format and sent to Labor. Limited IT system capabilities. Many states’ IT systems for compiling TAA data lack certain capabilities, such as performing edit checks, that help a state report complete and accurate data to Labor. Only 15 of the 46 states we surveyed had TAA IT systems with each of three such capabilities:

Performing edit checks to prevent data errors: Edit checks aid in identifying invalid data, such as an entry in a date field that is not a date.

Identifying dates TAA participants completed WIA-funded services: The ability to identify when TAA participants complete WIA-funded services can help ensure that TAA participants are not counted in the TAPR report while they are still receiving services under WIA. If participants are still receiving services, then it is too soon to assess their employment outcomes in the TAPR report.

Allowing staff to query the system to assess data reliability and completeness: Queries allow staff to pull certain information out of the system to answer questions; without this capability, staff may not be able to properly assess data quality and diagnose data problems.
For example, in one local area we visited, a TAA specialist who is responsible for reporting on numerous TAA participants described having great difficulty determining whether training completion dates had been entered for participants as appropriate because the specialist could not query the system to get a list of participants and their training status. More than half of the states told us they had plans to make at least one of these system improvements during the next year. For example, 17 states reported plans to improve their TAA IT systems’ capability to perform edit checks (see fig. 4). Some states with electronic systems may have the capability to track TAA participants across other programs serving them, but several do not. Of the 37 states that told us they had electronic systems to compile their TAPR data, 29 said this same system captures program information for WIA programs, and similarly, 29 said the system captures information for Employment Services. Thirteen states said that these capabilities extended to all six programs and benefits we examined, which include, in addition to WIA and ES, Trade Readjustment Allowance, National Emergency Grants, the Veterans Employment and Training Program, and UI. See appendix II for a complete listing of states’ system linkages. In addition, several states commented on our survey that they have planned enhancements to their TAA IT systems that may help coordinate across programs and increase the likelihood of capturing more outcomes:

Improving coordination of data across programs: Six states reported planning changes, such as developing a single case management system for several programs that would allow more coordination of data across programs.

Transitioning from manual to electronic processes: Two states that have been using manual processes reported plans to develop electronic interfaces to capture needed data for the TAPR.

Adding capacity to record supplemental data: Two states reported that their IT system changes will enable them to begin recording supplemental data for use in determining TAA participants’ employment outcomes.

Some states reported that limited TAA administrative funds hindered their ability to ensure the quality of the TAA performance data they collect and maintain. To cover their TAA program’s administrative costs, states receive an allocation each year equal to 15 percent of their TAA training allocation. In fiscal year 2006, 9 states received less than $100,000 in TAA administrative funds, and another 10 states received between $100,000 and $300,000. These funds are used to cover all the administrative activities of the program, such as reviewing waivers and training plans, processing applications for job search or relocation allowances, and any associated data collection and reporting. Some states also use these funds for direct case management services to participants because they are the only TAA funds available to provide these services. However, we recently reported that state officials told us the TAA administrative funds were often insufficient to meet the case management needs of the program and that they relied on other programs to provide those services. (For a complete listing of each state’s TAA training and administrative funds, see app. III.) State and local TAA officials said that resource shortages contribute to difficulties in identifying exit dates, using supplemental data sources, and entering data in a timely manner. For example, one state official commented on our survey that TAA case managers often do not have enough time to follow up with participants to learn about their status after they have been sent to training. Another state official said that insufficient case management can delay the identification of participants exiting the program.
Officials in 2 other states told us that supplemental data were too time-consuming and burdensome to collect, given the program’s current funding levels. Officials said that resource limitations also presented challenges in entering the data in a timely manner. An official in one local area we visited reported a tremendous backlog in entering TAA participant data into the IT system because there were just two staff to handle approximately 1,000 TAA cases. Similarly, in another local area, the office manager told us that TAA staff were spread too thinly, a condition that adversely affected the collection and entry of TAA data. Labor reports data on TAA petition and certification activity, program participation, and key performance measures, but this information may not be useful for gauging current program performance. The information may be helpful in providing a long-term national picture of program outcomes, but it represents past, rather than current, performance. UI wage records—the primary data source for tracking TAA performance— provide a fairly consistent national view of TAA performance and allow for tracking outcomes over time. At the same time, the UI wage records suffer from time delays and, together with the use of longer-term outcome measures, affect the timing of states’ performance reports to Labor and, subsequently, the information that Labor makes publicly available. Most of the outcome data reported in a given program year actually reflect participants who left the program up to 2 years earlier. In addition, Labor does not consistently report TAA data by state or industry or by services or benefits received—a step that would make the data more useful to policymakers. States responding to our survey reported that they would like additional information from Labor, such as how their TAA performance compares to the performance of other states and other federal employment and training programs. 
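Edit checks of the kind discussed earlier, such as rejecting an entry in a date field that is not a date, can be illustrated with a minimal sketch. The field names and rules below are hypothetical, not those of any state’s actual TAPR system.

```python
from datetime import datetime

# Sketch of simple edit checks: flag records whose date fields do not
# parse, or whose exit date precedes the first service date. Field
# names are hypothetical.

def edit_check(record):
    """Return a list of error messages; an empty list means the record passed."""
    errors = []
    for field in ("first_service_date", "exit_date"):
        try:
            record[field] = datetime.strptime(record[field], "%Y-%m-%d").date()
        except (KeyError, ValueError):
            errors.append(f"{field}: missing or not a valid date")
    if not errors and record["exit_date"] < record["first_service_date"]:
        errors.append("exit_date precedes first_service_date")
    return errors

print(edit_check({"first_service_date": "2005-03-01",
                  "exit_date": "2005-09-30"}))   # passes
print(edit_check({"first_service_date": "2005-03-01",
                  "exit_date": "not a date"}))   # flagged
```

Checks like these catch keying mistakes at entry time, which is why states relying on spreadsheets and printouts, with no automated validation between steps, are more exposed to undetected errors.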
Labor makes some TAA statistics available through postings on its Web site and through published reports, but they do not provide useful information on current performance. Labor provides some TAA activity and participant data by fiscal year including number of petitions received, certifications issued, and denials by state; distribution of certifications by industry; number of new participants receiving extended income support payments or training; and summary statistics on former TAA participants (such as race, education level, and benefits and services received). In addition to reporting on TAA activity and participant data, Labor also reports on three key TAA performance measures. The TAPR data submitted by states are used to calculate national and state outcomes on the TAA performance measures—wage replacement, reemployment, and retention—for each fiscal year. In 2005, Labor made state-by-state TAA outcome information publicly available for the first time. According to Labor officials, making this information public represents an effort to emphasize performance, and they intend to post state-by-state outcome information on the Web site for all future fiscal years. Labor’s regional offices directly provide states with information on their TAA performance relative to the program’s national goals. Some regional offices also provide states with reports showing the performance of all states in the region, according to officials we interviewed. However, the information Labor makes publicly available may not provide a clear picture of current TAA performance because, in addition to being incomplete and perhaps inaccurate, the data represent past performance and are not consistently reported by type of service, state, or industry. Data represent past performance. 
Because TAA performance is measured using UI wage records and long-term performance measures such as employment retention, the most up-to-date TAA performance data currently available may represent performance from several years in the past. Use of wage records: Using UI wage records to measure outcomes provides a common yardstick for making long-term comparisons across states because they contain wage and employment information on most workers. At the same time, these files suffer delays between the time an individual gets a job and when this information appears in wage records. State procedures for collecting and compiling wage information from employers can be slow and time-consuming. Data are collected from employers only once every quarter, and employers in most states have 30 days after the quarter ends to report the data to the state. For example, the wage report for the last calendar quarter of the year (ending on December 31) is due to the state on January 31. We previously reported that for the majority of states, the delay between the time an individual gets a job and the time this information appears in wage records is up to 4 months. Design of measures: In addition to using a job placement measure, Labor also uses two longer-term measures to gauge TAA performance—an earnings measure and a job retention measure. These measures may be useful for assessing how well the program is meeting its long-range goals to increase the employment, retention, and earnings of participants. However, the use of these measures requires states to wait from one to three quarters after participants exit the TAA program before measuring the outcomes. For example, although states record whether participants entered employment in the first quarter after exit, two more quarters must elapse before employment retention is measured. Participants who exit the TAA program have their outcomes assessed in the first, second, and third quarters after exit. 
However, data to measure all outcomes are not available until the fifth quarter after exit, and the outcomes are not submitted to Labor until midway through the sixth quarter. Figure 5 illustrates the time it takes before a TAA participant would be included in performance outcome calculations. While approximately one-third of the states found TAA performance information they currently receive from Labor to be greatly useful, some would like Labor to provide them with additional information to help manage their program. Nearly half of the 46 states we surveyed told us that they find the performance information they receive from Labor to be moderately useful (see fig. 7), and 8 states reported that Labor’s TAA performance information is of little or no use for program management. Nearly half of the states we surveyed told us that they routinely develop information on their own performance beyond what they submit to Labor. For example, an official in 1 state reported that it calculates its own outcomes before receiving them from Labor in order to make managers and executives aware of the state’s performance, and it uses this information to engage state and local TAA staff in making program adjustments. In addition, approximately one-third of the states routinely develop information on their local areas’ performance. Labor does not provide analysis of local area performance to states because it does not collect this type of information in the TAPR. While many states provide the performance information of their own state and that of other states to their local area TAA staff, few states provide information on their local areas’ performance to local TAA staff. Only 27 of the 46 states in our survey reported that they share information from Labor with local area staff on how their state’s performance compares to national TAA performance goals. 
In addition, only 7 of the 16 states that generate additional performance information for local areas reported that they share this information with local TAA staff. One expert we spoke with told us that regularly sharing performance information with local program staff enables them to understand how the data they collect are being used and the importance of complete and accurate data for producing reliable performance information. Our recent report on performance measurement also noted that frequent and routine communication of performance information helps program staff and stakeholders use such information to accomplish program goals as they pursue day-to-day activities. These practices could lead to better program management and produce more reliable performance data to assess TAA performance in the future. States said that they would like to receive additional performance information from Labor to help manage their TAA program. Thirty-four states would like more information than they currently receive on their own state’s performance, and 39 states reported they would like information comparing their states’ TAA performance to their WIA Dislocated Worker performance (see fig. 8). According to one state official we spoke with, receiving additional TAA performance information that is displayed by type of service and by state would enable officials to respond more effectively to performance problems and to learn what strategies states with similar TAA populations are using to achieve different performance outcomes. While it has limited authority to hold states accountable, Labor has taken steps to improve the quality of TAA data states submit, but these steps do not fully address all issues. Labor has no mechanism to sanction states for poor performance or poor-quality data because the law and current regulations do not provide one. However, Labor has begun an initiative that requires states to review a sample of their data for accuracy. 
It is too soon to fully assess whether Labor’s efforts have improved data quality, but most states reported on our survey that Labor’s new requirements have increased awareness of data quality at the state and local levels. States also report that they would like more opportunities to share lessons learned about issues related to data quality. Labor is requiring changes in some TAA performance measures to align them with measures for other federally funded job training programs. Many states reported that the changes are burdensome, and some states are experiencing delays in implementing the changes. To address data quality concerns, Labor developed a process for states to use to validate the TAPR data they submitted to Labor. Starting with data submitted in fiscal year 2003, Labor required states to review a sample of participants’ records and compare what was reported for certain data elements to data in source files. State staff review the source files and record whether each data element is supported by source documentation and, therefore, passed data validation. If the source files show a data element was incorrect or was not supported with documentation, the data element fails. States use Labor’s software to calculate error rates, and they submit the results to Labor. While it is too soon to assess whether Labor’s data validation efforts have improved data quality, many states said that the efforts are having a positive effect. Thirty-five states reported that efforts have improved the accuracy of the data. Thirty-seven of the 46 states told us they have helped increase the awareness of data quality at the state level, and 25 states told us they have improved awareness at the local level (see fig. 9). Until recently, Labor has not had a standard process for ensuring that states performed data validation correctly. 
Labor officials tell us, however, that beginning in 2006, regional offices are conducting data validation compliance reviews of a subsample of validated records to ensure that the records were accurately validated and the files contained all required source documents. While states report that Labor’s data validation requirements are having some positive effects, Labor’s data validation efforts do not address two key problems. First, guidance for data validation defined for the first time the type of source documents needed to validate TAPR data elements, including exit dates. However, the guidance does not specify that the source documents for training completion dates should show that participants actually completed training. Second, data validation does not provide for assessing whether TAPR submissions are complete. Because the data validation process only covers participant records included in states’ TAPR submissions for the year, it does not look beyond those records to determine whether all exiting participants were included. In addition to implementing data validation, Labor has taken various actions to better instruct states and to provide tools for improving the data they submit to Labor. Technical assistance and training: In 2005 and 2006, Labor brought together state TAA staff for training conferences on the new data requirements for implementing common measures. According to Labor officials, Labor’s regional offices periodically hold roundtables with states to discuss issues that sometimes include data quality. Labor provides technical assistance, as needed, to states through telephone calls and e-mails. According to Labor officials, Labor is planning to start holding quarterly conference calls with states about TAA issues, including data quality. Guidance on data reporting: Labor issued guidance and instructions for TAA data reporting, such as instructions defining how “date of exit” is to be determined under common measures. 
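The error-rate arithmetic at the heart of the data validation process described above, comparing a sample of reported data elements against source files and computing the share that fail, can be sketched briefly. The sample and element names below are invented for illustration; Labor supplies its own software for the actual calculation.

```python
# Sketch of the data validation error-rate computation: state staff
# record whether each sampled data element is supported by source
# documentation, then compute the share that fail. The sample here
# is invented for illustration.

def error_rate(sample):
    """sample: list of (element_name, passed) pairs for one data element."""
    failed = sum(1 for _, passed in sample if not passed)
    return failed / len(sample)

# Suppose 3 of 50 sampled exit dates lack supporting documentation.
exit_date_sample = [("exit_date", True)] * 47 + [("exit_date", False)] * 3
print(f"exit_date error rate: {error_rate(exit_date_sample):.1%}")
```

Note that a rate like this measures only the accuracy of records that appear in the TAPR sample; it cannot reveal exiting participants who were left out of the submission entirely, which is the completeness gap described above.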
In May 2005, Labor issued a guidance letter to states addressing several issues with data quality, such as the use of WRIS and supplemental data to determine employment outcomes. In general, states reported that the guidance and training they had received from Labor provided a clear understanding of certain data requirements, such as the requirements for data validation and for using UI wage records. States were somewhat less likely to say that Labor had provided a clear understanding of the documentation needed for the date of exit and how supplemental data could be used to document TAA employment outcomes. Monitoring: Labor’s regional offices conduct monitoring visits to review states’ TAA programs. In the past, Labor did not have a standard protocol for these monitoring visits, and the monitoring did not always cover the quality of the TAA data being submitted by states. However, as of March 2006, Labor was developing a standard monitoring guide for its regional staff. Pilot project on federal employment data: Labor collaborated with the Office of Personnel Management, the U.S. Postal Service, and the Department of Defense to create a pilot data exchange system to provide states access to wage record information on federal and military employment. The system that began operating in November 2003 can help states obtain more complete employment outcome data on participants who exited job training programs because it provides information on federal employment that is not available in state UI wage records. Many states are using the system to help determine employment outcomes for job training programs, such as those funded under the Workforce Investment Act. However, only 3 of the 46 states we surveyed reported that they were routinely using this system to obtain employment outcomes for the TAA program. Despite Labor’s efforts to improve data quality, most states would like more help. 
Most states reported that they do not currently have opportunities to share lessons learned with other states on topics related to TAA data quality, such as how to use supplemental data, and they expressed interest in having such opportunities. For example, 29 states told us they do not currently have opportunities to share lessons learned on data validation, and 44 states told us more opportunities to do so would be helpful (see fig. 10). In response to an OMB initiative, Labor made changes to some of the TAA performance measures and to TAA reporting requirements in order to implement common measures (see table 3). OMB established a set of common performance measures to be applied to most federally funded job training programs that share similar goals. Labor further defined the common measures for all of its Employment and Training Administration programs and required states to start reporting TAA data under the revised requirements in fiscal year 2006. Moving to common measures may increase the comparability of outcome information across programs and make it easier for states and local areas to collect and report performance information across the full range of programs that provide services in a one-stop system. Prior to common measures, many federal job training programs had performance measures that tracked similar outcomes but had variation in the terms used and the way the measures were calculated. For example, the programs used different time periods to assess whether participants got jobs. Under common measures, the time period used to assess employment outcomes is uniform across all covered programs. Implementation of common measures involved some changes in the data states collect for the TAPR: Standardized exit definitions: Labor’s guidance on common measures provides for a clearer understanding of when TAA participants should be exited from the program than did earlier TAA guidance. 
Under Labor’s guidance, states must wait 90 days after participants receive their last service or benefit, whether from TAA, WIA, or other related programs, before recording them as exiters. Prior to this change, states could exit participants without waiting 90 days. Most states reported that the guidance and training they received from Labor provided a clear understanding of the definition of exit under common measures, but 7 states disagreed. Coordination of exit dates: Under common measures, states are encouraged to establish a common exit date for each participant who is co-enrolled in more than one program. For example, if a participant receives services under TAA and under WIA, then the two programs should use the same exit date for the participant. Coordinating exit dates improves data quality by avoiding the problem of counting a participant as unemployed in the performance measures when, in fact, the participant is still receiving services in another program and should not yet be counted. Changes in IT systems: A number of data fields were added or changed in the TAPR as part of the new common measures policy, requiring states to add or change data fields in their IT systems and to instruct staff on changes in data to be collected on participants and employment outcomes. Most states reported that the guidance and training they received from Labor provided a clear understanding of the changes needed in the TAPR to implement common measures; however, 7 states disagreed. Although moving to common measures may ultimately make it easier for states to collect and report performance information across programs, most states reported that making changes to implement common measures had been a burden in terms of time and cost (see fig. 11), and they often viewed coordinating exit dates as burdensome. States were nearly evenly divided in their views, however, on whether they had been given sufficient time by Labor to complete the changes.
Nineteen states said they had not been given sufficient time, while 18 states said they had. Twenty-six states reported that they will have provided guidance to staff or changed data elements in their IT systems by the time the first quarterly TAPR is due in fiscal year 2006 (see fig. 12). Other states reported that they would have these changes completed sometime later in 2006, while some states said they could not estimate when they would complete the changes. Coordinating exit dates was the change that states considered the most burdensome. Seventeen states were unable to estimate when they would be able to coordinate exit dates across programs. In a previous study, we cautioned that rushed implementation of reporting changes may not allow states and local areas enough time to fully meet the requirements and could negatively affect the data quality of the information reported. Since the passage of the TAA Reform Act of 2002, the TAA program has evolved to become one of the most important means to help the workers affected by our nation’s trade policies rejoin our nation’s workforce. The program has seen substantial increases in the population it serves and in the funds available to serve them. Unfortunately, efforts to monitor the program’s performance have not kept pace with the program’s development. Four years after the passage of the reforms, we still do not know whether the program is achieving what lawmakers intended. The TAA program has suffered a history of problems with its performance data that have undermined the data’s credibility and limited their usefulness. And while we see that Labor has taken some steps aimed at improving the performance data, the data remain suspect. They fail to capture outcomes for some of the program’s participants, and many participants are not included in the final outcomes at all. These failures may have contributed to the program’s poor performance in achieving its national goals.
Labor lacks the authority to hold states accountable for their outcomes or for the quality of their data, and as a result, some states may not see the value of investing more effort to ensure their data are complete and accurate. Moreover, state officials tell us the funding to support their efforts is small and fluctuates from year to year, making such an investment difficult to sustain. But the success of the program is being judged by the outcomes the program achieves and whether or not it meets its goals. The current budgetary environment makes it risky not to take all necessary steps to ensure that the outcomes are an accurate and credible reflection of the program’s performance. Labor has taken a major step toward improving the quality of its performance data through its new data validation requirements. States report that these requirements have significantly raised the awareness of data quality at the state and local levels, an essential component in any effort to improve the accuracy of the data. But these efforts do not fully address all issues. No steps have been taken to ensure that all participants are included in the TAA performance data or that exit dates are adequately documented. Monitoring can help address data issues, but Labor is just now developing a standard monitoring guide that would help ensure that key problems are identified during monitoring visits. Until these steps are complete, the data cannot be verified and may remain incomplete. Providing opportunities for states to share lessons learned may make states more aware of effective approaches for ensuring data quality, and several states expressed an interest in more such opportunities. Labor has recently improved the availability of TAA performance information by posting the information on its Web site and by making some state-by-state performance data available.
However, the performance data are not as informative as they could be because they aggregate all participants and do not show the outcomes of participants based on the types of services they received. As a result, policymakers lack the information they need to understand program participation and performance and to assess future needs. While Labor has taken steps to share information with states and to improve data quality, more work is needed. To help ensure that TAA participant data reported by states are consistent, complete, and accurate, Labor should clarify, through guidance and other communications with states, that all participants who exit the program should be included in the TAPR, as well as the documentation needed to verify the training completion date; ensure that the core monitoring guide currently under development for regional office site visits includes guidance for assessing whether states’ data collection processes for performance reporting capture all participants; and provide states with opportunities to share lessons learned with other states on issues that may affect data quality. To make TAA performance information more useful for program management, Labor should provide this information by the type of services received by TAA participants. We provided a draft of this report to Labor for review and comment. In its comments, Labor did not disagree with our findings and recommendations and said the report will be helpful in its continuing efforts to improve the quality of TAA performance data. Labor noted that the issues raised in the report about administrative costs and the burden of new reporting requirements are compounded by having a workforce investment system that is duplicative in its service delivery design, resulting in separate record-keeping and reporting systems. Labor also identified a number of actions that it is taking to ensure that performance accountability is an expectation of the program.
A copy of Labor’s response is in appendix IV. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Secretary of Labor, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. The report is also available on GAO’s home page at http://www.gao.gov. If you or members of your staff have any questions about this report, please contact me at (202) 512-7215 or nilsens@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. We examined (1) whether the Trade Adjustment Assistance (TAA) performance data provide a credible picture of the program’s performance, (2) what TAA performance data Labor makes available to the public and states and the usefulness of the data for managing the program, and (3) what Labor is doing to address issues with the quality of TAA data submitted by states. To learn more about the factors that affect TAA data quality and to learn what states are doing to ensure data quality, we conducted a Web-based survey of state TAA officials and conducted site visits in five states, where we interviewed state officials and visited local areas or one-stop centers. We also collected information on the quality of TAA data through interviews with Department of Labor officials in headquarters and all six regional offices, nationally recognized experts, and reviewed relevant literature. Our work was conducted between December 2004 and March 2006 in accordance with generally accepted government auditing standards. 
To determine the factors that affect the quality of TAA performance data, we conducted a Web-based survey of workforce officials in the 46 states that were allocated TAA funds in fiscal year 2005, and we obtained a 100 percent response rate. These officials were identified using Labor’s list of state TAA officials. We e-mailed the contacts, and they confirmed that they were the appropriate contact for our survey or identified and referred us to another person at the state level. Survey topics included (1) the current status of TAA data collection and reporting systems, (2) implementation of the U.S. Department of Labor’s data validation requirements, (3) state and local efforts to ensure the quality of TAA data, and (4) the implementation of common measures. The survey was conducted using a self-administered electronic questionnaire posted on the Web. We contacted respondents via e-mail announcing the survey, and sent follow-up e-mails to encourage responses. The survey data were collected between November 2005 and January 2006. We received completed surveys from all 46 states that were allocated TAA funding in fiscal year 2005 (a 100 percent response rate). We did not include Washington, D.C., and U.S. territories in our survey. We worked to develop the questionnaire with social science survey specialists. Because this was not a sample survey, there is no sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted or in the sources of information that are available to respondents can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and data analysis to minimize these nonsampling errors. 
For example, prior to administering the survey, we pretested the content and format of the questionnaire with four states to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. We made changes to the content and format of the final questionnaire based on pretest results. We also performed computer analyses to identify inconsistencies in responses and other indications of error. In addition, a second independent analyst verified that the computer programs used to analyze the data were written correctly. We visited five states—California, Iowa, Ohio, Texas, and Virginia—and traveled to local areas or one-stop centers in each of these states. We selected these states because they represent different TAA data collection approaches (that is, states where data are entered into information technology systems at the local level and those where data are entered at the state level), received a relatively large share of TAA funds in fiscal year 2005, and are geographically dispersed. From within each state, we judgmentally selected local areas to visit (see table 4). In each state, we interviewed state TAA officials about their collection and use of TAA data, IT systems used to compile TAA performance data, and efforts to ensure the data are complete and accurate. Similarly, we interviewed local area officials about their collection and use of TAA data. Information that we gathered on our site visits represents only the conditions present in the states and local areas at the time of our site visits, from January 2005 through October 2005. We cannot comment on any changes that may have occurred after our fieldwork was completed. Furthermore, we cannot generalize the findings from our site visits beyond the states and local areas we visited. 
In our survey, states were asked whether the IT system they use to compile data for the Trade Act Participant Report (TAPR) currently captures program information for certain other Labor programs or benefits.

The TAA Reform Act authorizes up to $220 million per year for training under the TAA program. Labor allocates 75 percent of the training funds to states according to a formula that takes into account each state’s previous year allocations, accrued expenditures, and participant levels. Labor holds the remaining 25 percent of training funds in reserve to distribute to states throughout the year according to need. To cover administrative costs associated with training under the TAA program, Labor allocates to each state additional administrative funds equal to 15 percent of its training allocation. Table 6 shows Labor’s initial 75 percent allocation for training and associated administrative expenses. States also receive an additional 15 percent of any reserve (25 percent) funding and job search/relocation allowances for program administration.

Dianne Blank, Assistant Director
Kathy Peyman, Analyst-in-Charge
In addition, the following staff made major contributions to this report: Vidhya Ananthakrishnan, Melinda Cordero, Laura Heald, Adam Roye, and Leslie Sarapu served as team members; Amanda Miller and Carolyn Boyce advised on design and methodology issues; Rachael Valliere advised on report preparation; Jessica Botsford advised on legal issues; Lise Levie verified our findings.

Trade Adjustment Assistance: Most Workers in Five Layoffs Received Services, but Better Outreach Needed on New Benefits. GAO-06-43. Washington, D.C.: January 31, 2006.
Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005.
Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005.
Unemployment Insurance: Better Data Needed to Assess Reemployment Services to Claimants. GAO-05-413. Washington, D.C.: June 24, 2005.
Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005.
Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
National Emergency Grants: Labor Is Instituting Changes to Improve Award Process, but Further Actions Are Required to Expedite Grant Awards and Improve Data. GAO-04-496. Washington, D.C.: April 16, 2004.
Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
Trade Adjustment Assistance: Experiences of Six Trade-Impacted Communities. GAO-01-838. Washington, D.C.: August 24, 2001.
Trade Adjustment Assistance: Trends, Outcomes, and Management Issues in Dislocated Worker Programs. GAO-01-59. Washington, D.C.: October 13, 2000.

In the current tight budgetary environment, program performance is likely to be an increasingly significant factor used to help policymakers assess programs and determine funding levels.
Given concerns over the quality of performance data for the Trade Adjustment Assistance (TAA) program and the importance of having meaningful information to assess program performance, we examined (1) whether the TAA performance data provide a credible picture of the program's performance, (2) what TAA performance data the Department of Labor (Labor) makes available to the public and states and the usefulness of the data for managing the program, and (3) what Labor is doing to address issues with the quality of TAA data submitted by states. The performance information that Labor makes available on the TAA program does not provide a complete and credible picture of the program's performance. Only half the states are including all participants, as required, in the performance data they submit to Labor; states were more likely to report participants who received training than those who received other benefits and services but not training. In addition, many states are not using all available data sources to determine participants' employment outcomes. This may result in lower reported outcomes because states may be inaccurately recording some workers as unemployed who actually have jobs. To compile their performance data, some states are using manual processes or automated systems that lack key capabilities to help minimize errors, but many states have plans to improve their systems' capabilities. Labor reports data on TAA activity levels, services provided to TAA participants, and key performance measures. The performance data may be useful for providing a long-term national picture of program outcomes, but they represent participants who left the program up to 30 months earlier and, thus, are not useful for gauging the TAA program's current performance. Also, the performance information is not displayed using categories that would be informative to policymakers, such as type of service received and industry of dislocation.
Most states find the performance information they receive from Labor to be at least moderately useful, but many want more information. Labor has taken steps to improve the quality of TAA performance data, but issues remain. In 2003, Labor began requiring states to validate their data, and most states reported that this increased awareness of data quality at the state and local level. However, the validation process does not address the problem of participants being excluded from the performance data. In fiscal year 2006, Labor instituted a set of common measures, and many states reported they are experiencing delays in implementing all required changes. States also expressed interest in receiving more opportunities to share lessons learned on topics relevant to TAA data quality.
Operation Desert Storm marked the first time that the Navy’s Tomahawk Land Attack Missile and the Air Force’s Conventional Air Launched Cruise Missile (CALCM) were used in combat. A total of 323 cruise missiles were fired against a variety of Iraqi targets in the conflict’s early stages. The missile attacks were part of a multiphase air campaign designed to decapitate the Iraqi leadership, gain air superiority, and reduce Iraqi combat power in preparation for the ground offensive to restore Kuwait’s border. Tomahawk missiles have subsequently struck two Iraqi facilities in the Baghdad area. U.S. ships launched 42 missiles against the Zafraniyah Nuclear Fabrication Facility on January 17, 1993, and 23 missiles against Iraqi Intelligence Service headquarters on June 26, 1993. These attacks demonstrated that cruise missiles could play an important role in both major conflicts and more limited engagements by allowing U.S. forces to strike an adversary with a high degree of accuracy at long ranges and without risking the loss of aircraft or aircrew. The Tomahawk cruise missile is a long-range, unmanned subsonic missile with both land attack and antiship capability that can be employed under a variety of weather conditions. It is launched from a variety of Navy surface ships and attack submarines. There are four Tomahawk variants: the nuclear land attack missile (TLAM-N), the antiship missile (TASM), the conventional land attack missile with a unitary warhead (TLAM-C), and the conventional land attack missile with a submunition warhead (TLAM-D). Each variant employs a common body and propulsion system but is equipped with different warheads and guidance systems. The four variants are shown in figure 1.1. Navy ships and submarines currently deploy predominantly with TLAM-Cs and TLAM-Ds. The TLAM-Ns were removed from the vessels as the Cold War was ending. 
Navy officials said that the TASM’s mission has been reduced because this variant is not particularly suited to warfare in littoral waters that may be crowded with both combatant and noncombatant ships. The TASM was originally intended as an over-the-horizon, open ocean, antiship weapon to be employed against ships in a battle group. The TLAM-Cs and -Ds employed during Operation Desert Storm and the subsequent strikes were Block II missiles, which make up the majority of the current inventory. All Tomahawks delivered to the Navy since April 1993 have been improved Block III models. The Navy plans production of a Block IV variant by the end of this decade. The improvements incorporated in the Block III missile and those planned for Block IV are discussed in appendix I. The principal difference between the Block II TLAM-C and TLAM-D is the warhead. The TLAM-C carries a 1,000-pound-class unitary warhead, and the TLAM-D carries a submunitions payload consisting of 166 bomblets. The TLAM-C is employed against a single fixed target, such as a specific point on a building, whereas the TLAM-D is designed to attack area-type targets, such as aircraft parked on a ramp. A single TLAM-D missile can dispense its submunitions payload over as many as three separate targets. The Block II TLAM-C and TLAM-D ranges equal or exceed the unrefueled combat radius of most U.S. manned tactical strike aircraft. However, those missiles that are launched from a submarine torpedo tube have a range about 30 percent less than surface ship-launched missiles. The submarine-launched Tomahawk has a booster rocket that propels the missile to the surface after it leaves the torpedo tube. The missile is partially de-fueled to compensate for the booster’s added weight, which decreases its range. However, the Navy has begun procuring an improved booster that allows submarines to launch fully fueled missiles. 
The Tomahawk missile follows a pre-programmed route over specific terrain features to its target using a combination of terrain contour matching (TERCOM) and digital scene matching and area correlation (DSMAC). The Tomahawk’s flight profile is illustrated in figure 1.2. During the initial portion of its flight, the missile navigates by TERCOM. A radar altimeter aboard the missile periodically scans the terrain over which the missile is flying. The on-board computer then compares the resulting terrain elevation profile to its profile of the predicted route to the target, which was stored in the computer before the missile’s launch. The computer then adjusts the missile’s course so that it is following the planned route to the target. The Tomahawk navigates by DSMAC during the terminal leg of its flight to the target. The DSMAC process uses an optical sensor in the missile that scans the ground over which the missile is flying. The on-board computer converts the scanned image of the ground features into an image of black and white contrasts. The computer then compares that image to its stored DSMAC black and white images of the selected sites along the route. As with TERCOM, the missile’s computer then adjusts the missile’s course so it is following the preplanned route. The Block II missile uses inertial navigation between TERCOM and DSMAC update points. Currently, 60 surface ships and 75 submarines are capable of launching Tomahawk missiles. The Navy projects that by 1999 the Tomahawk-capable force will consist of 82 ships and 55 submarines. Table 1.1 shows the projected Tomahawk platform force. The planned wartime loads of TLAM-C and TLAM-D missiles vary for surface ships and submarines. Navy officials said that notional missile loads are used for planning purposes; actual loads could vary depending on the specific mission assigned by the operational commander. The Tomahawks share launcher space with other missiles on several ship classes. 
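The TERCOM update described above is, at its core, a correlation problem: match a measured radar-altimeter elevation profile against stored candidate profiles and steer toward the best fit. A minimal sketch of that idea (the function name, the sample data, and the sum-of-squared-differences metric are illustrative assumptions, not the missile's actual guidance software):

```python
# Hypothetical illustration of terrain contour matching (TERCOM)-style
# correlation; all names and elevation data below are invented.

def tercom_fix(stored_profiles, measured):
    """Return the index of the stored terrain profile that best matches
    the measured altimeter profile, by minimum sum of squared differences."""
    best_idx, best_err = None, float("inf")
    for idx, profile in enumerate(stored_profiles):
        err = sum((s - m) ** 2 for s, m in zip(profile, measured))
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx

# Candidate elevation profiles for three parallel ground tracks (meters)
tracks = [
    [120, 135, 150, 140],   # track 0
    [ 90, 100, 110, 105],   # track 1
    [200, 180, 170, 160],   # track 2
]
measured = [92, 101, 108, 104]  # noisy radar-altimeter readings

print(tercom_fix(tracks, measured))  # best match is track 1
```

DSMAC works analogously, but correlates a black-and-white contrast image of the overflown ground against stored scene images rather than elevation profiles.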
Arleigh Burke class destroyers and Ticonderoga class cruisers are predominantly loaded with standard surface-to-air missiles. Because attack submarines have limited weapon storage space, all torpedo-tube-launched (TTL) Tomahawks carried displace an equal number of torpedoes. The Navy plans to continue procuring Block III TLAM-C missiles until fiscal year 1998. It plans to procure 216 missiles in fiscal year 1994 at a cost of about $1.2 million per missile and 217 missiles per year from fiscal years 1995 to 1998. The Navy currently plans to begin production of the Block IV missile through the remanufacture of existing Block II missiles and TASMs after fiscal year 1998. The Air Force’s CALCM was the other U.S. cruise missile used in the Gulf War. The CALCM is a modification of the nuclear-armed Air Launched Cruise Missile, which is a subsonic, all-weather cruise missile. During the modification process, certain components are removed, and a conventional warhead is installed. A Global Positioning System navigation capability is also added. The resulting weapon has a circular error probable (CEP) roughly twice that of the Block III TLAM-C. The CALCM is carried by B-52 bombers and is launched within range of the target. After launch, CALCM follows a preplanned route to its target, using inertial navigation with Global Positioning System updates. The missile’s mission can be changed or updated by the flight crew while the B-52 is airborne. New or updated missions can be transmitted to the aircraft from the air base and then loaded into the missile’s computer. This process allows CALCM missions to be changed or updated any number of times before launch. However, once launched, no communications with the missile are possible. The Air Force’s current inventory of CALCMs includes missiles authorized to replace those used in Desert Storm. Mission planning for the CALCM is performed at Offutt Air Force Base, Nebraska.
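The pre-launch retargeting just described amounts to a simple rule: the mission loaded in the missile's computer may be replaced any number of times while the bomber is airborne, but becomes immutable at launch, after which no communication with the missile is possible. A hypothetical sketch of that rule (the class and method names are invented for illustration):

```python
# Hypothetical model of CALCM pre-launch retargeting; names are invented.

class CruiseMissileMission:
    def __init__(self, route):
        self.route = route        # preplanned waypoints to the target
        self.launched = False

    def update_mission(self, new_route):
        """New or updated missions can be loaded any number of times
        before launch; after launch, no updates are possible."""
        if self.launched:
            raise RuntimeError("cannot update mission after launch")
        self.route = new_route

    def launch(self):
        self.launched = True

m = CruiseMissileMission(route=["wpt1", "wpt2", "target A"])
m.update_mission(["wpt1", "wpt3", "target B"])  # allowed any time pre-launch
m.launch()
# A further m.update_mission(...) would now raise RuntimeError.
```

This contrasts with the Block II Tomahawk, whose mission had to be fully planned and loaded before the launch platform put the missile in the air.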
The average mission planning time for a new target for CALCM is comparable to the planning time for a Block II Tomahawk mission. The completed missions are transmitted to a U.S. air base from which the B-52 CALCM flight will be launched. Bomber preparation and loading can take an average of 24 hours but can be done concurrently with mission planning. Flight times to Iraq during Desert Storm averaged 16 hours. Total response time for CALCM for an unplanned target in Iraq is similar to that of the Block II Tomahawk. We initiated this review because Desert Storm and the subsequent strikes on Iraq showed that the Tomahawk and CALCM added a new dimension to offensive air operations. We assessed the missiles’ performance during Desert Storm, including any limitations. We also addressed the advantages of these missiles over tactical aircraft and the missiles’ potential impact on the requirements for future tactical weapon systems and forward presence. We met with agency officials responsible for program management and obtained pertinent documents concerning the characteristics, missions, and employment concepts of the Tomahawk cruise missile system and several tactical aircraft systems capable of striking the same types of targets as the Tomahawk. We also obtained information on future planned aircraft and missile systems and planned modifications to existing systems. To gain the operators’ perspective on the Tomahawk, CALCM, other unmanned standoff weapons, and manned aircraft, we met with officials of two unified commands and various Navy and Air Force commands. During those visits, we discussed the commands’ policies and procedures for employing Tomahawk and various precision strike systems. We also visited two Tomahawk-capable ships (the U.S.S. Stump, a Spruance class destroyer, and the U.S.S. Key West, a Los Angeles class attack submarine) and discussed Tomahawk operations with the officers and crews of those vessels.
Additionally, we visited the Navy’s Strike Warfare Center and discussed the planning and conduct of carrier strike operations with officials of the Center. To gain insights into the roles and experiences of the various weapons systems in the Gulf War, we interviewed Navy and Air Force officers who participated in Tomahawk and CALCM planning and employment during the Gulf War and the two subsequent Tomahawk strikes on Iraq. We also met with officials who had analyzed the planning and preparation that preceded the air campaign and the campaign’s results. Additionally, we reviewed various studies and reports concerning the campaign, including the major reports directed by the Navy and the Joint Chiefs of Staff on the performance of Tomahawk in Desert Storm and the two 1993 raids on Iraq. We analyzed data from the Air Force-sponsored Gulf War Air Power Survey and met with officials who performed this survey to develop information concerning the number and types of targets attacked and the aircraft and missile systems used to conduct those attacks. We limited our analysis to the Tomahawk and CALCM missiles and precision strike systems and the targets they attacked. At our request, Navy and Air Force officials performed several analyses that compared the effectiveness of various weapon systems against a selected group of targets. We performed our work at the following locations:

In the Washington, D.C., area:
Office of the Secretary of Defense
Office of the Chairman, Joint Chiefs of Staff
Office of the Chief of Naval Operations
Headquarters, U.S. Air Force
Cruise Missile Project Office
Defense Intelligence Agency
Naval Research Laboratory
Advanced Research Projects Agency
Naval Maritime Intelligence Center
Center for Naval Analyses
Center for Air Force History

In the Norfolk, Virginia, area:
Headquarters, U.S. Atlantic Fleet
Cruise Missile Support Activity, Atlantic
Operational Test and Evaluation Force
U.S.S. Stump
U.S.S. Key West
Air Combat Command

In the Honolulu, Hawaii, area:
Headquarters, U.S. Pacific Command
Headquarters, U.S. Pacific Air Forces
Headquarters, U.S. Pacific Fleet
Cruise Missile Support Activity, Pacific

At MacDill Air Force Base, Florida:
Headquarters, U.S. Central Command
Headquarters, Navy Central Command

Naval Strike Warfare Center, Naval Air Station, Fallon, Nevada
Headquarters, Central Air Forces, Shaw Air Force Base, Sumter, South Carolina

We performed our work from August 1992 to December 1993 in accordance with generally accepted government auditing standards. The Department of Defense (DOD) provided written classified comments on a classified draft of this report. DOD partially concurred with the major findings of the report, but it disagreed with our recommendations. Unclassified summaries of DOD’s comments have been included in the report where appropriate.

Both the Tomahawk and the CALCM contributed to the success of U.S. combat operations during Desert Storm and the 1993 strikes on Iraq. During Desert Storm, U.S. Navy ships and submarines launched 288 Tomahawk missiles, and Air Force B-52 bombers launched 35 CALCMs, all against targets in Iraq. The missiles were used against a wide range of targets that included predominantly electrical production facilities; Scud missile facilities; command, control, and communications facilities; and leadership targets. Many of these targets were similar to those attacked by manned aircraft. Cruise missiles struck fixed, heavily defended strategic targets that, if attacked by manned aircraft, particularly non-stealth aircraft, could have resulted in the unacceptable loss of aircrews and aircraft. The cruise missiles also struck targets at ranges that would have required manned strike aircraft to refuel.
According to studies conducted by the Center for Naval Analyses and the Defense Intelligence Agency, Tomahawk missiles and CALCMs hit their intended aim points with success rates approaching those of manned precision-strike aircraft, such as the F-117A stealth fighter. The Tomahawk’s performance improved in the 1993 raids on the Zafraniyah nuclear facility and Iraqi intelligence headquarters. The Tomahawk’s success rate was about 26 to 35 percent higher in the Zafraniyah raid, and 20 to 29 percent higher in the raid on Iraqi Intelligence Headquarters, than during Desert Storm. Desert Storm also demonstrated several limitations in the design and employment of both missiles. Tomahawk operations were hampered by the stringent geographic information requirements to support the missile’s navigation systems and the lengthy mission planning process. The Tomahawk also demonstrated limitations in its range and lethality. The limited lethality of the CALCM’s warhead and its guidance system’s lower accuracy (compared with the Tomahawk’s) restricted the types of targets that the CALCM could successfully attack. Improvements have already been incorporated into the Block III Tomahawk variant currently in production and address many of the limitations noted during Operation Desert Storm. The proposed Block IV Tomahawk would further expand the missile’s capabilities. The Air Force is studying a proposal to produce two improved variants of the CALCM that would address limitations observed during Desert Storm, but it has not requested any funds for them. When Iraq invaded Kuwait on August 2, 1990, U.S. military commanders began drafting plans for an air war against Iraqi targets in the event that Iraq attacked Saudi Arabia before sufficient U.S. ground forces were in theater. The commanders also began developing a four-phase plan to eject Iraqi forces from Kuwait.
The first phase was for a strategic air campaign focused initially on decapitating Iraqi military and civilian command and control by a series of attacks against strategically vital targets, followed by attacks against fielded military forces. Five basic categories of targets—command and control, industrial production, infrastructure, population will, and fielded forces—were encompassed in the plan. The most important targets were command, control, and communications targets, which were to be struck forcefully to incapacitate Saddam Hussein’s ability to control his nation, disrupt the Iraqi forces, and induce the Iraqis to withdraw from Kuwait. Attacks on key production and infrastructure targets would follow to further fracture the country and degrade Iraq’s ability to replenish its forces. Attacks on targets such as television and radio stations and electrical power generation and distribution facilities would degrade the will of the civilian population. Finally, in preparation for a coalition ground assault, Iraqi forces in the field would be struck. Three other phases were to follow. The intent of the second phase was to gain air superiority, and the third phase was to reduce the capability of the Iraqi ground forces before the coalition ground attack. The fourth and final phase was the coalition ground attack into Kuwait. Figure 2.1 shows the Desert Storm area of operation and distances to target sites. Even though the size and duration of the air plan changed before the start of the air campaign on January 17, 1991, its basic premise remained unchanged. Phase I attacks on Iraqi air defense facilities, the electrical power system, and command and control targets were carried out predominantly by the F-117A, F-111F, F-15E, and A-6E aircraft, all of which carried precision munitions such as laser-guided bombs, and by Tomahawk cruise missiles and CALCMs.
In total, coalition fixed-wing aircraft launched more than 40,000 individual attacks against targets in Iraq and occupied Kuwait during the campaign. Tomahawks and CALCMs struck heavily defended targets deep in Iraq whose destruction was vital to the success of the Desert Storm air plan and that, if attacked by aircraft, could have led to unacceptable losses of aircraft and aircrews. Most cruise missiles were fired early in the campaign. Navy ships attempted to launch a total of 297 Tomahawk missiles; 288 were launched, and 282 (95 percent of those attempted) achieved cruise flight and proceeded toward their targets. Of the 39 CALCMs carried to launch points by B-52s, 35 (90 percent) were launched and proceeded toward their targets. According to data in studies conducted by the Center for Naval Analyses (CNA) and the Defense Intelligence Agency (DIA) and our analysis of Gulf War Air Power Survey data, both missiles achieved results approaching those of manned aircraft, such as the F-117A, during Desert Storm. DOD and Navy officials said that multiple weapon strikes on many of the aim points and a lack of timely battle damage assessment during Desert Storm made it very difficult to determine the effectiveness of the Tomahawk and the CALCM. Target analysts were unable to obtain damage assessments for each aim point after each attack. Since many targets were attacked more than once by both aircraft and cruise missiles, it was difficult to determine which attack caused the observed damage in those cases. Additionally, since many aim points were also targeted by multiple missiles, it was difficult for the analysts to determine how many weapons caused the resulting damage. DOD officials said that the missiles’ mission objectives must be taken into account when measuring the missiles’ success.
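The launch-reliability figures cited above reduce to simple percentages. The short sketch below is our own illustration of that arithmetic, not an official DOD calculation; the counts are the report’s figures.

```python
# Launch-reliability rates from the Desert Storm counts cited above.
# All counts are the report's figures; the arithmetic is our illustration.

def pct(part: int, whole: int) -> int:
    """Percentage of part relative to whole, rounded to the nearest percent."""
    return round(100 * part / whole)

tomahawk_attempted = 297  # launches attempted by Navy ships and submarines
tomahawk_cruise = 282     # missiles that achieved cruise flight

calcm_carried = 39        # CALCMs carried to launch points by B-52s
calcm_launched = 35       # CALCMs launched and proceeding toward targets

print(pct(tomahawk_cruise, tomahawk_attempted))  # 95
print(pct(calcm_launched, calcm_carried))        # 90
```

Note that the 95-percent Tomahawk figure is computed against launches attempted, not launches completed.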
Even if intended targets are not destroyed, a military objective can be met if the targets are rendered unusable or damaged as a result of being struck by some of the several missiles targeted against them. Tomahawk missiles struck targets in 8 of the 12 overall target categories. The emphasis placed on the categories changed between the first 2 days and the remainder of the conflict. However, over the course of the campaign, the majority of the Tomahawks that were fired were launched against targets in four specific strategic target categories. According to Navy officials, the 38 target complexes that Tomahawks attacked were all heavily defended and lent themselves to attack by an accurate, unmanned weapon such as the Tomahawk. Many of these targets were similar to those struck by manned aircraft. In many cases, the Tomahawk and manned aircraft not only struck the same categories of targets but also the same complexes. For example, Air Force aircraft launched 355 strikes—239 by the F-117A—against complexes that were also struck by the Tomahawk. Navy aircraft launched 185 strikes against complexes that were also struck by Tomahawk. The Tomahawk’s geographic reach equaled or exceeded that of manned aircraft. Many of the targets the Tomahawk and manned aircraft attacked were located in the same areas of Iraq. The Tomahawk’s range allowed it to strike its targets flying from launch points in the Mediterranean Sea, the Red Sea, and the Persian Gulf. The F-117As, flying from bases in southern Saudi Arabia (see fig. 2.1), were refueled on all missions. In addition, Tomahawk missiles were the only weapons that struck targets in the downtown Baghdad area during daylight for most of the campaign. Even though the Air Force’s F-117As attacked Baghdad-area targets at night throughout the campaign, attacks by other aircraft were stopped after Iraqi ground defenses shot down two F-16s on the second day of the conflict.
Thus, the Tomahawk’s use had the added benefit of maintaining psychological pressure on the Iraqis in and around Baghdad. Navy officials believe that some Tomahawks may have been shot down by Iraqi ground-based antiaircraft artillery. However, there appears to be no evidence that surface-to-air missiles contributed to Tomahawk losses. These officials also said that the Tomahawk’s flight profile made it difficult for surface-to-air missile systems to successfully identify and attack Tomahawks. The limited number of routes used by Tomahawks to approach Iraqi targets and the tactics employed to ensure coordination with tactical aircraft missions may have contributed to missile losses. Because the usable routes into Iraq were so limited, multiple Tomahawks were launched along the same route. Thus, Iraqi gunners might have come to expect that, once a Tomahawk was sighted, others would soon follow along the same path. As a result, it would have been much easier to identify and engage the missiles that followed. According to a CNA study, the success rate for Tomahawks fired within the first 2 days of the air war was much higher than for those fired later, indicating that the Iraqi gunners might have become accustomed to seeing the missiles using certain routes and flying in stream raids. Navy officials said that, despite its limitations, the Tomahawk’s use provided a clear view of the missile’s performance under arduous conditions. The flat, featureless terrain gave mission planners perhaps the most difficult task possible in creating the TERCOM and DSMAC scenes needed. The hot Middle East climate meant that the Tomahawk’s engine was operating under the harshest possible conditions as well. Officials assert that the conditions under which the Tomahawk operated had to be considered when assessing the system’s performance. As with the Tomahawk, the CALCM contributed to the success of Operation Desert Storm but also demonstrated some limitations.
On the first day of the conflict, seven B-52Gs carrying AGM-86C CALCMs took off from Barksdale Air Force Base, Louisiana, on a 35-hour, 14,000-mile round trip. All targets that the CALCMs attacked were in 2 of the 12 categories and consisted of 5 military communications sites and 3 electrical power stations. DIA’s assessment of CALCM’s damage to Iraqi targets concluded that mission objectives were achieved against the majority of targets. The cruise missiles were not the only systems that demonstrated limitations during Operation Desert Storm. Not all manned aircraft struck their intended targets. For example, as stated earlier, our analysis of Gulf War Air Power Survey data showed that the percentage of weapons that struck their intended targets, measured against all weapons carried aboard F-117As that took off during the conflict, was similar to the success rate achieved by the cruise missiles. The F-117As carried munitions on about 1,300 sorties, but they released only about 79 percent of the weapons against their targets. Of those released, a high percentage struck their intended aim points. Poor visibility (overcast, fog, and smoke) limited the F-117A’s ability to use its laser targeting and bomb guidance system. Almost 350 strikes were aborted after takeoff due to bad weather conditions alone. For example, more than half of the F-117A flights were unsuccessful on days two and three of the air campaign because of low clouds. In addition, bad weather halted operations for 2 consecutive nights during the later stage of the air campaign and during the final 2 days of the war. Another 76 strikes were aborted because the pilots had problems identifying the targets. Other aircraft were also affected by various problems. For example, about 780, or 33 percent, of the 2,310 F-111F sorties were aborted before striking their targets because of mechanical and other problems and poor weather.
Mechanical problems with various aircraft components, such as inertial navigation systems, air refueling systems, and digital computer complexes, caused nearly 45 percent of the aborts. Poor weather restricted the aircraft’s ability to launch laser-guided bombs and caused nearly 25 percent of the aborts. Since Desert Storm, U.S. forces have used the Tomahawk to strike Iraq in two punitive raids. On January 17, 1993, U.S. forces struck the Zafraniyah Nuclear Fabrication Facility, located just outside Baghdad, in response to Iraq’s refusal to cooperate with U.N. inspectors. Even though tactical aircraft were available, the National Command Authorities (i.e., the President and the Secretary of Defense) chose the Tomahawk for the strike because they wanted to avoid the potential loss of pilots or aircraft and unacceptable damage to nonmilitary targets. U.S. forces targeted 8 buildings and fired 46 Block II TLAM-C missiles, 42 of which (91 percent) were successfully launched and transitioned to cruise flight. On June 26, 1993, Tomahawk missiles were used to strike the Iraqi Intelligence Service headquarters complex in the Baghdad area in retaliation for the plot to assassinate former President Bush. U.S. Central Command officials said that the Tomahawk was also chosen for this mission because it could strike the target without risking the loss of aircraft or aircrews. Additionally, an aircraft carrier was not present in theater at the time. U.S. forces targeted 6 buildings in the complex and attempted to launch 25 Block II TLAM-C missiles, 23 of which (92 percent) were successfully launched and transitioned to cruise flight. The Tomahawk’s performance improved during the two strikes. The success rate was about 26 to 35 percent higher for the Zafraniyah raid, and 20 to 29 percent higher for the raid on Iraqi intelligence headquarters, than the success rate during Desert Storm.
Improvements already incorporated into the Block III Tomahawk cruise missile system, which is in production, address many limitations that were noted during Desert Storm. Table 2.1 illustrates the principal improvements incorporated into the Block III system. The Navy is also considering further evolutionary enhancements to the missile system through the Tomahawk Baseline Improvement Program, or Block IV. These enhancements would improve the system’s capabilities over those of the Block III system, as shown in table 2.2. Program officials estimated that attaining the Block IV requirements could reduce the number of missiles needed to defeat a group of targets by 40 percent. The Navy’s fiscal year 1994 budget included funding to initiate the Block IV program. Production would begin at the conclusion of the Block III program, and the first Block IV missiles would be delivered about fiscal year 2000. Although the Air Force is studying a proposal to upgrade the CALCM, because of competing funding priorities it has not funded any improvements to the missile to address the limitations identified in Desert Storm. DOD partially concurred with our assessment of the cruise missiles’ performance during Desert Storm. However, DOD said that the data we presented for the Tomahawk’s performance was the result of a preliminary CNA study and did not represent the context in which the missiles were employed. Pointing out that about 80 percent of the weapons the F-117As released struck their targets, DOD said that the 63-percent success rate we stated for the F-117A was misleading because it counted all weapons carried, not only those released against targets. It also said that refueling operations depend on the geography of the conflict and that refueling is often conducted to enhance flight safety. CNA completed its study of the Tomahawk’s Desert Storm performance after we submitted a draft of our classified report to DOD for comment.
The study included an analysis of the number of missiles that struck their aim points and the number of targets in which Tomahawk strikes achieved the intended military objectives. The final results were basically unchanged from the preliminary results we included in our draft report. We have incorporated the study’s final results into this report. We recognize that the Gulf War Air Power Survey data shows that about 80 percent of the weapons the F-117As released struck their target, but we believe that a success rate based only on the number of weapons released after the aircraft reached the target areas and successfully identified the targets is not comparable to the percentage of cruise missiles launched that struck their targets. We believe the number of weapons the F-117As carried from the airfields compares most directly with the number of cruise missiles that were launched from the ships and B-52s; therefore, we based our analysis on that number. The success rate of the Tomahawk missiles that arrived in their target areas was much higher than the success rate of all missiles launched. Our analysis of data in CNA’s final Tomahawk study shows that more than 75 percent of the TLAM-Cs programmed for a terminal dive maneuver that arrived in their target area struck their intended aim point. We agree with DOD’s comment that aircraft are frequently refueled to enhance flight safety. However, as the Gulf War Air Power Survey points out, aircraft (including the F-117A) were frequently refueled during Desert Storm because the distance to the targets from their bases exceeded their unrefueled combat radius. Even though all future conflicts may not involve the ranges encountered during Desert Storm, the cruise missiles’ range is an advantage. DOD said that, even though our report implied that cruise missiles could be used interchangeably with manned aircraft, the Tomahawk’s current capabilities restricted its use to fixed, nonhardened targets. 
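The disagreement over the F-117A’s success rate comes down to the choice of denominator. The following sketch is our illustration of that arithmetic, using the approximate fractions cited in the report (79 percent of carried weapons released; about 80 percent of released weapons hit); it is not DOD’s or CNA’s model.

```python
# Two denominators for stating F-117A success, using the report's
# approximate fractions. Our illustration of the arithmetic only.

release_rate = 0.79       # share of carried weapons actually released
hit_rate_released = 0.80  # approximate share of released weapons that hit

per_released = hit_rate_released                # DOD's preferred measure
per_carried = release_rate * hit_rate_released  # basis comparable to cruise
                                                # missiles launched

print(round(per_released * 100))  # 80
print(round(per_carried * 100))   # 63
```

The second figure is the basis the report uses, since weapons carried from the airfield compare most directly with missiles launched from ships and B-52s.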
DOD believed a range of weapons would be required to defeat many targets and that cruise missiles would be especially valuable early in an air campaign when used to create conditions more favorable to the large-scale employment of manned aircraft. DOD also said that the Air Force was considering improvements to the CALCM. We agree that cruise missiles are best employed against fixed, nonhardened targets. We also agree, as discussed in chapter 4, that cruise missiles can be used to attack heavily defended targets in preparation for large-scale attacks by manned aircraft. However, we also believe, as Desert Storm showed, cruise missiles can be employed successfully against a wide range of the targets to be encountered in a conflict. DOD also said that, even though our report stated that the Air Force had no plans to improve the CALCM, the Air Force is studying a proposal to improve the missile. We have modified our report to so indicate. The cruise missiles’ performance in Desert Storm and the two subsequent Iraqi raids demonstrated that military commanders have a new option for highly accurate strike operations under a variety of conditions. During those conflicts, cruise missiles struck targets at night, in bad weather, or in the face of heavy air defenses without risking the loss of aircraft and the death or capture of U.S. aircrew members. In many cases, the cruise missile attacks achieved results similar to those of manned aircraft attacks. Cruise missiles have other advantages over manned aircraft. For example, Tomahawk strikes do not require the additional resources—electronic warfare aircraft, fighter escort, and refueling aircraft—required for manned aircraft strikes. Additionally, as the raid on Iraqi intelligence headquarters demonstrated, cruise missile strikes can be launched without the presence of an aircraft carrier battle group. 
Employing cruise missiles can also avoid possible political constraints, such as obtaining host nation permission to use U.S. aircraft from forward-deployed bases or to fly through a third nation’s airspace. Currently, 135 ships and submarines are equipped to launch Tomahawk missiles, which significantly expands the U.S. ability to conduct forward presence operations or respond to an adversary without the presence of an aircraft carrier battle group or conventional air forces. CALCM attacks can also be launched from U.S. bases, which allows the United States to attack an adversary without necessarily having forces nearby or risking the loss of U.S. aircrews. Reductions in mission planning times and other planned improvements could make Tomahawk strikes as responsive as manned aircraft attacks. Tactical aircraft systems have some advantages over cruise missiles and will therefore continue to play a key role in offensive strike operations. For example, aircraft-launched munitions can successfully attack a wider spectrum of targets than cruise missiles, such as those that are mobile, relocatable, or more hardened. Additionally, tactical aircraft systems are better suited for conducting large-scale or extended campaigns that encompass a large number of targets because such campaigns require greater quantities of munitions and because aircraft-delivered munitions cost relatively less than cruise missiles. Both the Tomahawk and the CALCM broaden commanders’ options by providing highly accurate strike weapons they can employ against a variety of targets at long ranges, under a variety of conditions, and without risking the loss of aircraft or aircrew. Cruise missiles also have other advantages. For example, the Tomahawk uses fewer supporting resources to launch a strike, and planning times for the Tomahawk are equal to or better than those for aircraft in many cases.
Additionally, the sea-launched Tomahawk and the U.S.-based CALCM are not subject to the same airspace and host nation basing restraints that can hamper employment of ground-based tactical aircraft. Table 3.1 summarizes and compares the advantages of cruise missiles and manned aircraft. Both Tomahawk and CALCM allow U.S. forces to strike an adversary with precision at long ranges without risking the loss of aircraft or aircrew, which is a significant factor in any decision to use military force. According to Air Force and Navy officials, the unwillingness to risk any losses was a factor in the National Command Authorities’ decision to use Tomahawks for the two 1993 strikes against Iraq. According to the officials, the public’s reaction to the loss of any aircraft or aircrew during those raids would have diminished the raids’ intended effect. Desert Storm illustrated that risk reduction was also important during an extended conflict. The destructive capabilities of the Tomahawk and the CALCM are generally similar to those of aircraft-delivered munitions of the same class. Thus, when used to attack targets that are susceptible to damage by their warheads, the Tomahawk’s and CALCM’s effectiveness is comparable to that of manned aircraft and the munitions they deliver. At our request, Air Force officials computed the expected probability of damage for the cruise missiles and guided bombs against a selection of common target elements. They considered factors such as construction of the target, weapon delivery accuracy, reliability, fuse, impact angle, and the targeted element’s vulnerability to damage from the weapon. On the basis of a 70-percent probability of kill to the target, the Air Force’s analysis showed that the numbers of cruise missiles and guided bombs required to destroy the target, when differences in warhead weights were considered, were comparable in most cases.
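The report does not describe the Air Force’s damage-expectancy model, but a common textbook approach, assuming statistically independent shots, estimates the number of weapons needed to reach a desired cumulative probability of kill. The single-shot kill probabilities below are hypothetical placeholders, not values from the analysis.

```python
import math

def weapons_needed(p_single: float, p_desired: float = 0.70) -> int:
    """Number of statistically independent weapons needed so the cumulative
    probability of kill reaches p_desired:
        P(kill) = 1 - (1 - p_single) ** n  >=  p_desired
    """
    return math.ceil(math.log(1 - p_desired) / math.log(1 - p_single))

# Hypothetical single-shot kill probabilities, for illustration only.
for p in (0.3, 0.5, 0.7):
    print(p, weapons_needed(p))  # 0.3 -> 4, 0.5 -> 2, 0.7 -> 1
```

Under this model, a weapon with a higher single-shot kill probability but a smaller warhead can still require a comparable total number of shots, which is consistent with the report’s observation that weapon counts were comparable once warhead weights were considered.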
As its mission planning process is improved, the Tomahawk system is becoming as responsive, and in some cases more responsive, to an operational commander as tactical aircraft. Currently, if a preplanned Tomahawk mission for the target is aboard, a launch platform needs about 1 hour of preparation time to fire a missile. With the advent of the Block III system, missions can be planned 90 percent faster than Block II missions, depending on the availability of imagery and the priority of the missions. Strikes by manned aircraft also require extensive planning and preparation time. Navy officials said that the average strike by carrier-based aircraft can take 24 hours or more to plan and launch. During this period, the target imagery and surrounding defenses are analyzed, and the plan for all aircraft involved in the strike is prepared. The plan encompasses all the aircraft involved in the strike—the strike planes, electronic warfare support aircraft, fighter escort, and tankers. Meanwhile, other personnel prepare the aircraft for the strike. The weapons to be carried by the planes are taken from the ship’s ammunition magazines, assembled, and moved to the flight deck. The ordnance and fuel are loaded aboard the aircraft, and the aircraft are aligned on the deck for launch. Once the mission plan is prepared and approved, the aircrews briefed, and the planes readied, the process of launching a 35-plane mission can take almost 1 hour. Even though the Tomahawk requires extensive support in the mission planning process, the ships and submarines launching a Tomahawk strike require no additional resources after the strike has been ordered and the mission data provided. The crew executes the launch procedure, which can be done while the vessel is conducting other missions, such as antisubmarine or antiair warfare. When launched, the Tomahawk is autonomous and requires no further support.
When both Air Force land-based and Navy carrier-based manned strike aircraft carry out their attacks, they are generally supported by several other types of aircraft. The supporting aircraft protect the strike aircraft from enemy defenses, provide command and control, and refuel the aircraft taking part in the attack. These groups of strike and support aircraft are commonly called “strike packages.” Table 3.2 shows the aircraft that made up some typical Navy strike packages during Operation Desert Storm and the weapons they carried. Air Force strike packages were similarly constituted. In the Navy packages, the A-6 and F/A-18—depending on their configuration and weapon load—constituted the offensive strike aircraft. EA-6B electronic warfare support aircraft electronically jammed enemy defenses and attacked Iraqi radars with HARMs. F/A-18s, armed with HARMs and other weapons, also attacked Iraqi radar sites. To protect the other aircraft in the package from attacks by Iraqi fighter/interceptors, a fighter escort was generally provided by F-14s or F/A-18s. Because of the distances between the aircraft carriers and Iraqi targets, extensive air refueling operations, both en route to the target and during the return to home base, were required and were conducted by KA-6 or S-3 aircraft. Navy aircraft were also refueled by Air Force tanker aircraft. Air Force aircraft also required extensive refueling support during Desert Storm. The distances from the airfields on the Saudi peninsula and from the aircraft carrier operating areas to the targets generally exceeded the operating range of the aircraft. For example, F-117As, with a combat radius of 550 nautical miles, struck targets 905 nautical miles from their home base. The January 1993 Tomahawk strike on the Zafraniyah nuclear facility illustrated the difference in resource requirements between the Tomahawk and manned aircraft. That raid was accomplished with 42 Tomahawk missiles launched from 4 ships.
Navy officials said that a strike package of about 40 planes would probably have been used to conduct the same strike using carrier-based aircraft. A composite force of similar size composed of Navy carrier-based and Air Force land-based strike aircraft could also have been used. Because of foreign political constraints, the only forces the United States can employ unilaterally, in many cases, are carrier-based aircraft, U.S.-based bombers, and missiles aboard vessels operating in international waters. In a conflict, the United States may have to obtain a host nation’s permission to launch strikes by U.S. aircraft from that nation’s bases. For example, the U.S. government had to obtain specific authorization from the British government to utilize the F-111s based in England in the April 1986 strike against Libya. Manned aircraft strikes may also be hampered if a third nation denies U.S. forces access to its airspace. In the 1986 Libyan raid, for example, the F-111s that took off from bases in England flew through the Straits of Gibraltar to reach Libya because the French government would not allow them to traverse French airspace. Cruise missiles launched from vessels operating in international waters off an adversary’s coast or from U.S.-based bombers would not face such constraints. As force levels decline and U.S. forces withdraw from overseas bases, cruise missiles may be the most immediately available weapons with which the National Command Authorities can respond in a crisis. The Navy projects that, by 1999, 137 surface ships and submarines will be Tomahawk equipped, and the number of deployable aircraft carriers will have decreased to 11. The Navy would then be unable to maintain a full-time aircraft carrier presence in the Mediterranean Sea, the Indian Ocean, and the western Pacific Ocean, as it has in the past. 
Even though Tomahawk-capable ships and submarines operate as part of aircraft carrier battle groups, they also are capable of operating independently or as part of surface action groups. The two vessels that launched the 1993 attack on the Iraqi intelligence headquarters operated independently. No U.S. aircraft carrier was within striking distance of Iraq at the time. The ability to utilize overseas air bases and the time needed to deploy tactical aircraft could also slow the Air Force’s response. Under those conditions, CALCM-armed B-52s, flying refueled missions from their U.S. air base, may be the most responsive Air Force weapon. Notwithstanding the capabilities of cruise missiles, tactical aircraft systems have significant advantages under many conditions and will therefore continue to retain a key role in offensive strike operations. For example, hardened targets generally can only be successfully attacked by aircraft-delivered munitions because of the Tomahawk’s limited ability to penetrate these targets. In at least one instance in Operation Desert Storm, the Tomahawk was unable to penetrate the roof of a target it struck; a later attack by F-117As successfully penetrated the structure. Only manned aircraft currently have the flexibility to successfully attack mobile or imprecisely located targets, such as tanks and other ground forces. The pilot’s visual identification of the target before releasing the aircraft’s weapons compensates for target movement after the strike was planned or for prestrike errors in location. Current cruise missile guidance systems, on the other hand, must be programmed before launch to guide the missiles to a geographic point that coincides with the target’s location. If such targets were programmed and then moved before the missile’s arrival, the missile’s path could not be corrected. In addition, manned aircraft are better suited for striking the large number and variety of targets in a protracted conflict.
For example, during the Desert Storm air campaign, over 1,000 strike aircraft flew more than 40,000 strikes against about 5,500 targets. Large quantities of high-cost munitions, such as cruise missiles, are not available for use in such conflicts. The comparative costs of the weapons also affect cruise missiles’ suitability for extended campaigns. For example, the cost of attacking a target with a Tomahawk is generally higher than the cost of attacking it with manned aircraft, even when the expected attrition rates for the aircraft are considered. At our request, the Air Force analyzed the comparative costs of attacks by cruise missiles and F-15E, F-111, and F-117A manned aircraft against six common generic targets for a Southwest Asia 1999 scenario. The targets included a military command headquarters bunker, a petroleum refinery distillation unit, a control van for an SA-5 surface-to-air missile complex, an aircraft in a revetment, a thermal power plant generator hall, and a hardened aircraft shelter. The aircraft employed MK-84 unguided bombs, GBU-24 or GBU-27 laser-guided bombs, and the Joint Direct Attack Munition I. The analysis determined the number of weapons and the associated cost needed to damage a target to a 0.8 probability of destruction throughout the duration of a campaign. The analysis derived its results from the weapon effectiveness data in the Joint Munitions Effectiveness Manuals and the weapon loads in the aircraft operating manuals. The costs per kill were derived from the individual weapon costs (e.g., $2,000 for the MK-84 unguided bomb, $60,000 for a GBU-24/27, $392,000 for a CALCM, and $1.8 million for the Tomahawk cruise missile); attrition; and the direct support costs of threat suppression, tanker support, and electronic warfare, obtained from Desert Storm historical data.
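A cost-per-kill comparison in the spirit of the analysis described above can be sketched as follows. The unit weapon costs are the figures quoted in the report; the weapons-per-kill counts, sortie count, attrition rate, aircraft cost, and per-sortie support cost are hypothetical placeholders, not values from the Air Force study.

```python
# Sketch of a cost-per-kill comparison in the spirit of the Air Force
# analysis described above. Unit weapon costs are the report's figures;
# every other input is a HYPOTHETICAL placeholder for illustration only.

def cost_per_kill(weapons, weapon_cost, sorties=0, attrition_rate=0.0,
                  aircraft_cost=0.0, support_per_sortie=0.0):
    """Expected cost to kill one target: munitions expended, plus expected
    aircraft attrition, plus direct support (tankers, jamming, escort)."""
    munitions = weapons * weapon_cost
    attrition = sorties * attrition_rate * aircraft_cost
    support = sorties * support_per_sortie
    return munitions + attrition + support

GBU, TLAM = 60_000, 1_800_000  # report-quoted unit costs (GBU-24/27, Tomahawk)

# Hypothetical case: 2 weapons per kill either way; one manned sortie with a
# 0.5-percent attrition risk on a $45 million aircraft and $50,000 in support.
manned = cost_per_kill(2, GBU, sorties=1, attrition_rate=0.005,
                       aircraft_cost=45_000_000, support_per_sortie=50_000)
tomahawk = cost_per_kill(2, TLAM)  # no attrition risk, no strike-package support

print(manned, tomahawk)  # 395000.0 3600000
```

Under these placeholder inputs, the manned strike costs roughly a tenth as much as the Tomahawk strike, which is consistent in direction, though not in magnitude, with the report’s findings.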
Weather attrition factors were also included, but the analysis did not consider the operating cost of the platform launching a Tomahawk, the command and control aircraft supporting manned aircraft attacks, or the relative importance of manned versus unmanned systems. Nor did it consider or place any value on factors such as American aircrew members killed in action or captured as prisoners of war, or on collateral damage. The analysis found that comparable numbers of laser-guided bombs, Tomahawks, and CALCMs were needed to destroy some of the targets. However, because of the Tomahawk's and CALCM's higher unit costs, manned overflight systems would be more cost-effective against a wide variety of targets. For example, the cost to attack a petroleum refinery with F-117As employing a combination of GBU-27s and Joint Direct Attack Munition Is was about 96 percent less than the cost of attacking the refinery with the Tomahawk. Tomahawk and CALCM costs were also higher for attacking the thermal power plant, again because of the weapons' higher unit costs, although the differential was smaller: the CALCM's cost was about 48 percent less than the Tomahawk's, and the F-117A's and F-15's costs were about 88 and 80 percent less, respectively, than the Tomahawk's. DOD concurred with our assessment that manned aircraft would continue to play a key role in strike operations and provided several examples of what it considers to be the advantages of manned aircraft, such as the ability to attack mobile and hardened targets and the minimization of collateral damage. Although it concurred that cruise missiles provided commanders with additional options for strike operations, it said that the cost of using cruise missiles versus manned aircraft to reach an acceptable level of damage was very different. 
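The cost-per-kill arithmetic can be sketched in a few lines. The unit costs below are the report's figures; the weapon counts and the per-sortie support and attrition charges are hypothetical placeholders, not inputs from the Air Force's actual model:

```python
# Illustrative cost-per-kill sketch. Unit costs are from the report;
# everything else (weapon counts, support and attrition charges) is a
# hypothetical placeholder, not the Air Force's model.
UNIT_COST = {
    "MK-84": 2_000,
    "GBU-24/27": 60_000,
    "CALCM": 392_000,
    "Tomahawk": 1_800_000,
}

def cost_per_kill(weapon, weapons_needed, support=0, attrition=0):
    """Expected cost to damage a target to the 0.8 kill criterion."""
    return UNIT_COST[weapon] * weapons_needed + support + attrition

# Even when comparable numbers of weapons are needed (say 4 each) and
# manned sorties carry extra support and attrition costs, the unit-cost
# gap dominates:
lgb = cost_per_kill("GBU-24/27", 4, support=150_000, attrition=100_000)
tlam = cost_per_kill("Tomahawk", 4)
print(f"laser-guided bombs: ${lgb:,}  Tomahawk: ${tlam:,}")
```

With these notional numbers the manned attack is still more than 90 percent cheaper, consistent with the direction (though not the exact figures) of the Air Force analysis.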
DOD also believed that our statement that cruise missiles do not require additional support resources was misleading because of the extensive requirements of the Tomahawk's mission planning process. We agree that the cost of an attack is lower if manned aircraft are used, considering the cost of the munitions and the attrition rates of the manned aircraft. However, cost is only one measure of a weapon's suitability. DOD's cost comparison of the F-117A strike on Iraqi Air Force headquarters and the 1993 Tomahawk strike on Iraqi intelligence headquarters does not portray the differing environments in which those two attacks were made. Although both weapons were suitable for attacking their targets, the F-117A strike on the Iraqi Air Force headquarters involved two aim points on one building and took place when the full array of coalition forces had already been deployed to the theater and was conducting combat operations during the Desert Storm campaign. The 1993 Tomahawk strike was against a large target complex with multiple aim points located in six separate buildings throughout the complex. A strike by manned aircraft would have required the time and resources of either deploying a carrier battle group to the area, since an aircraft carrier battle group was not in the area at the time, or obtaining host country authorization for the use of Air Force tactical assets based in the region. Employing the Tomahawk also responded to the National Command Authorities' desire to conduct the strike without risking the loss of aircraft and aircrew. We also agree that the Tomahawk mission planning process requires considerable resources; a goal of the Block IV program is to reduce the time and resources required for mission planning. However, we believe that the extensive support requirements of manned aircraft strikes must be taken into account when comparing the various systems. The 1993 Tomahawk strike on Iraqi intelligence headquarters is a clear illustration of this point. 
Extensive intelligence and target data would have been required to plan both a manned aircraft strike and a Tomahawk strike. However, after the Tomahawk mission data was prepared and transmitted to the ships, the launch vessels that were already operating in the area required no external support to launch the strike. On the other hand, all of the aircraft we discussed as making up a strike package would have been required to support a manned aircraft strike. We asked Navy strike planners how large a strike package would have been required if Navy manned aircraft had been used for the strike, and they said that as many as 40 to 45 aircraft could have been required. We disagree with DOD’s comment that one of the advantages of manned aircraft is the minimization of collateral damage. Even though the risk of collateral damage may be relatively low in attacks by aircraft employing precision munitions such as the F-117A, it can be a significant factor for other aircraft/weapon combinations. For example, the Gulf War Air Power Survey report notes that aircraft employing nonprecision munitions were not used to strike targets in urban areas because of the high risk of causing politically unacceptable collateral damage to civilian targets. Conversely, Navy officials told us that the Tomahawk was chosen to strike targets in several instances during Desert Storm because U.S. commanders believed the targets were too close to sensitive civilian targets to risk the collateral damage that could have resulted from manned aircraft strikes. Thus, even though collateral damage occurred during the 1993 Tomahawk strikes, we believe that an advantage of the Tomahawk is its low overall risk of collateral damage compared with manned aircraft. Cruise missile capabilities should affect the design characteristics and quantity required for most future manned precision strike weapons systems as well as aircraft carrier force levels. 
Even though their capabilities overlap those of other strike weapon systems, cruise missiles have broadened the options available to commanders and have demonstrated that they are a viable strike capability in the absence of theater- or aircraft carrier-based strike aircraft. Therefore, most future strike aircraft may not require as long a range or as high a degree of stealth as originally planned. Also, fewer tanker, command and control, and electronic warfare aircraft may be required if cruise missiles are used to strike a larger portion of the targets. According to DOD policies, an important objective of the defense acquisition system should be to minimize the overlap and duplication among weapon systems that perform the same or similar missions. However, we have previously reported that the military services justify such duplication on the basis of having complementary requirements to engage similar targets and, as a result, do not always consider alternative solutions. The 135 ships and submarines currently equipped to launch the Tomahawk significantly expand the U.S. ability to conduct forward presence operations. If the warships were judged to be an acceptable alternative to an aircraft carrier battle group, considerable budgetary savings could result. Cruise missiles’ proven capabilities give U.S. decisionmakers viable alternatives to manned aircraft in several situations, such as the attack of heavily defended or long-range fixed targets. Those capabilities, used in collaboration with selected high-performance, manned strike aircraft, could affect the characteristics of most future manned aircraft. The resulting force of strike weapon systems would include both manned aircraft and cruise missiles and would have a range of capabilities. Because its range allows it to attack fixed targets at distances that require manned strike aircraft to refuel, the Tomahawk could mitigate range requirements in most types of future manned strike aircraft. 
For example, the unrefueled range of the Navy's F/A-18E/F is expected to be 390 to 450 miles, and the unrefueled range of the F-117A is about 550 miles. As shown in a recent CNA study, a majority of the targets in many countries are within those ranges. The study analyzed strike range requirements for the AF/X and found that, for the countries studied, a majority of the potential targets were relatively close to the countries' coastlines. Longer range targets generally lend themselves to attack by weapons such as the Tomahawk. Navy and Air Force officials noted that most long-range targets are fixed, high-value strategic targets, whereas mobile targets and ground forces are generally attacked at shorter ranges. Therefore, Tomahawks or aircraft such as the B-1, B-2, or B-52 could be used to attack most long-range targets, and manned strike aircraft could be optimized for shorter range targets. According to Navy officials, not all targets are currently suitable for attack by cruise missiles, but most fixed targets will be susceptible to cruise missile attack with the advent of the Block IV Tomahawk. Refueling remains an option when the range of manned strike aircraft must be extended to attack specific hardened targets; as pointed out in chapter 3, the F-117As were refueled for all their strikes during Desert Storm. Configuring most future Navy strike aircraft to conduct unrefueled strikes at ranges greater than those of current-generation strike aircraft may therefore be unnecessary. Most future aircraft could instead be optimized to conduct attacks at shorter ranges, potentially resulting in procurement savings, while longer range targets could be attacked by long-range bombers, Tomahawks, other cruise missiles, and, when necessary, refueled strike aircraft. 
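The CNA-style range screening reduces to simple arithmetic: count the fraction of targets that falls within an aircraft's unrefueled strike radius. The target distances below are notional illustrations, not the study's data:

```python
# Toy version of the range analysis: what fraction of targets lies
# within a given unrefueled strike radius? Distances are hypothetical,
# not taken from the CNA study.
def fraction_in_range(target_distances_mi, radius_mi):
    hits = sum(1 for d in target_distances_mi if d <= radius_mi)
    return hits / len(target_distances_mi)

targets = [120, 200, 310, 380, 430, 520, 700, 900]  # notional miles inland

print(fraction_in_range(targets, 450))  # F/A-18E/F upper bound -> 0.625
print(fraction_in_range(targets, 550))  # F-117A -> 0.75
```

If, as the study found, most targets cluster near the coastline, the in-range fraction stays high even at the shorter radii, which is the basis for the tradeoff discussed above.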
Cruise missiles' ability to attack heavily defended targets without placing aircraft or aircrews at risk could also affect stealth requirements for most future aircraft and result in more affordable aircraft designs. Desert Storm demonstrated that the majority of U.S. aircraft can operate effectively without stealth technology. According to DOD officials, in the first days of the air campaign, cruise missiles, operating with a limited number of F-117As and nonstealth defense suppression aircraft such as the F-4G and EA-6B, degraded the Iraqi air defense system and rendered it largely ineffective after day 3. This created a relatively benign environment in which nonstealth aircraft operated with near impunity for the remainder of the conflict. As a result, Air Force officials said that, once the air defense system was degraded, the F-117A was valued more in many cases for its precision bomb dropping capability than for its stealth characteristics. These officials also said that the degradation of enemy air defenses is likely to be a top priority in any future conflict, as it was during Desert Storm. However, as both Desert Storm and the 1993 strikes demonstrated, Tomahawks can be used instead of manned aircraft when a high level of defenses remains or a defense suppression campaign may not be practicable. Configuring most future strike aircraft with stealth capabilities may be unnecessary. A small force of stealth strike aircraft, such as the F-117A or its successors, could be maintained to attack well-defended targets along with cruise missiles such as the Tomahawk, and the majority of strike aircraft could have a more conventional configuration, resulting in procurement savings. 
Other existing and planned standoff weapons, such as the Joint Standoff Weapon and the Standoff Land Attack Missile, will also permit strike aircraft to remain outside the range of a target's defenses while conducting an attack, further reducing the need for stealth characteristics. DOD acquisition policies require analysis of mission needs, costs, and alternatives to ensure that cost-effective solutions are matched to valid requirements before substantial resources are committed to a particular program. According to those policies, an important objective of DOD's acquisition system should be to minimize the overlap and duplication among weapon systems that perform the same or similar missions, including when more than one service participates in similar mission areas. However, we previously reported that the services justify acquisitions of new systems on narrowly defined tasks or on a unique weapon system capability because they believe they have complementary requirements to engage similar targets. As a result, alternative systems are not always considered. For example, in July 1993, we reported that the analysis of theater air interdiction in the Chairman of the Joint Chiefs of Staff's report on the roles, missions, and functions of the armed forces considered only fixed-wing aircraft and did not consider options for using land- or sea-based missiles and long-range artillery. Also, in April 1992, we reported that the Air Force and the Army gave little, if any, consideration to the contributions of other close support weapons when determining close air support requirements. In both reports, we said that actions should be taken to minimize the overlap among weapon capabilities. Both the Navy and the Air Force have tactical aircraft upgrade programs underway that will require major expenditures. These programs will result in retiring some aircraft types, modifying existing aircraft to enhance their strike capabilities, and developing new aircraft. 
The Navy is developing the F/A-18E/F, which it expects to be its primary short- to medium-range carrier-based attack aircraft. It estimates the total cost of the F/A-18E/F program to be about $85 billion for 1,000 aircraft. The Navy is also modifying the F-14 to provide it with a strike capability and plans to retire all its A-6 medium strike bombers, its only carrier-based, long-range, all-weather strike aircraft. Additionally, the Air Force plans to incorporate a precision ground attack capability into the F-22. As the June 1993 raid on Iraq demonstrated, cruise missiles provide the United States with a viable strike capability in the absence of aircraft carrier-based strike aircraft. DOD's Bottom-Up Review stated that only 10 carriers were required for waging two nearly simultaneous major regional conflicts but that 11 were required to meet peacetime forward presence requirements in three worldwide regions. The review noted that the planned aircraft carrier force level of 11 active carriers and 1 training carrier would support regional forward presence 12 months per year in one region but would result in an average 4-month gap in carrier presence per year in each of the two remaining regions. The review also stated that a force of 10 carriers would increase the average gap in the two regions to 6 months, and it depicted a 4-month gap as an acceptable risk and a 6-month gap as unacceptable. The 135 ships and submarines currently equipped to launch Tomahawk missiles significantly expand the U.S. ability to conduct forward presence operations, and the Tomahawk's capabilities may lessen the risk associated with the additional 2-month gap in presence. Those ships and submarines also expand the U.S. ability to respond to an adversary in a crisis without the presence of an aircraft carrier battle group or conventional air forces. In addition, CALCM attacks can be launched from U.S. bases, eliminating the need for any U.S. forces to be present in theater. 
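The presence-gap figures can be approximated with a back-of-envelope model. The 2.5 on-station months per carrier per year is our assumed figure (reflecting overhaul, training, and transit time), not a number from the Bottom-Up Review, and the split of remaining coverage between two regions is likewise an assumption:

```python
# Rough sketch of the presence-gap arithmetic, NOT the Bottom-Up
# Review's model. Assumes each carrier yields about 2.5 on-station
# months per year and one region is covered year-round; remaining
# station-months are split evenly between the other two regions.
def avg_gap_months(carriers, station_months_per_carrier=2.5):
    total = carriers * station_months_per_carrier
    leftover = total - 12          # one region covered 12 months/year
    per_region = leftover / 2      # split across the two other regions
    return max(0.0, 12 - per_region)

print(round(avg_gap_months(11), 1))  # roughly a 4-month gap
print(round(avg_gap_months(10), 1))  # roughly a 6-month gap
```

Under these assumptions the model lands near the review's 4- and 6-month figures, illustrating why losing a single carrier widens the gap by about 2 months per region.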
If the Tomahawk-capable warships were judged to be an acceptable alternative for conducting presence operations, the Navy could achieve considerable budgetary savings. As we previously reported, the average annualized cost of an aircraft carrier battle group was about $1.5 billion. We recommend that the Secretary of Defense assess the extent to which cruise missiles could affect the requirements for manned strike aircraft and aircraft carriers. This assessment should examine (1) the effect that existing cruise missiles and potential upgrades have on the design characteristics, such as the range, payload, and stealth, of planned future aircraft; (2) the potential effect of the resulting alternative aircraft designs on future aircraft affordability; and (3) the degree to which increased cruise missile inventories could affect the number of aircraft to be procured. We also recommend that the Secretary of Defense reassess the degree to which cruise missile-equipped platforms could fulfill peacetime presence requirements and the effect that increased reliance on those platforms would have on the Bottom-Up Review's justification for an additional aircraft carrier for presence missions. DOD partially concurred with our assessment of cruise missiles' potential effect on the design characteristics of future aircraft, but it disagreed with our recommendations. It believes that cruise missiles and manned aircraft must be viewed as complementary systems, that cruise missiles are best suited to small, punitive operations, and that manned aircraft can better meet the overall requirements of supporting two major regional conflicts. We agree that cruise missiles and manned aircraft are complementary systems, but we continue to believe that cruise missiles should affect the design characteristics of future aircraft. The range issue illustrates our point. 
Officials who prepared the F/A-18E/F and AF/X cost and operational effectiveness analyses said, in the campaign summaries supporting those analyses, that systems such as Tomahawk missiles and long-range bombers were used to attack many of the longer range targets, which tended to be fixed rather than mobile. They said that both aircraft, although not excluded from attacking the longer range targets, were generally used to attack shorter range, more mobile targets, and that a shorter range, combined with other tradeoffs, was adequate for both aircraft, particularly the AF/X. We believe that such analysis should be applied to all future aircraft designs to ensure that cruise missile capabilities are fully exploited. DOD believes that the range and stealth characteristics of manned aircraft must be viewed in the total context of the many competing priorities in an aircraft's design, such as payload and survivability, and that these design requirements are independent of cruise missile capabilities. DOD also said that the campaign analysis supporting the F/A-18E/F cost and operational effectiveness analysis summary addressed the employment of the Tomahawk and that it considered the Tomahawk's contributions, on a limited basis, in other strike aircraft tradeoffs. We agree with DOD's comment that an aircraft's stealth requirement is driven by the missions it will fly and that all aircraft would benefit from the signature reductions that are part of stealth. However, we continue to believe that if cruise missile capabilities are fully considered, tradeoffs may be possible. If, as DOD said, cruise missiles are especially useful in the early stages of an air campaign to create more favorable conditions for the large-scale employment of manned aircraft, then manned aircraft can be employed in a more survivable environment. 
As a result, the design of most strike aircraft could include a less costly, though still adequate, stealth capability, and a less costly overall mix of aircraft could be employed. As Desert Storm demonstrated, heavily defended targets can be successfully attacked by cruise missiles and a limited number of highly capable and survivable aircraft, leaving other targets to be attacked by less survivable aircraft. A recent CNA study reinforced this point: in an analysis of the Tomahawk's effect on modernizing naval aviation, CNA found that planned Tomahawk forces were suitable for attacking a modest number of heavily defended and deep targets early in a campaign and that this capability reduced the need for naval aircraft to carry out those missions. Although it agreed that aircraft carrier presence has been reduced as the Navy's force structure declined, DOD said that cruise missiles were only partial substitutes for the ability of an aircraft carrier and its associated battle group elements to provide forward presence. According to DOD, cruise missiles cannot conduct the variety of missions that the elements of an aircraft carrier battle group can, and they do not provide the visibility, a key tenet of forward presence, that a carrier battle group provides. DOD believed that, although cruise missiles are an excellent addition to the U.S. conventional arsenal, the forces identified during the Bottom-Up Review were needed to win two nearly simultaneous major regional conflicts while providing forward presence. We agree with DOD that cruise missiles do not provide the full range of capabilities inherent in an aircraft carrier battle group, either for providing peacetime presence or for responding to a crisis. However, we believe that the full capability of an aircraft carrier battle group is not required in every situation to show U.S. resolve and commitment or to forestall actions by other nations. 
The June 1993 strike against the Iraqi intelligence headquarters provided potential adversaries with a very tangible demonstration of U.S. capability that they cannot safely disregard. Additionally, as one Pacific Command official pointed out, potential adversaries cannot discount the possibility of the presence of Tomahawk-equipped submarines, even though the submarines are not visible. Furthermore, the June 1993 strike's effect was not tempered by U.S. losses, as the strikes in Lebanon and Libya were. Therefore, even though a system such as the Tomahawk may not address all peacetime presence situations, we still believe that cruise missiles provide useful options for conducting peacetime presence missions and that our recommendation is valid.